
Linux Magazine - Software RAID, RAID controllers, Linux Mandrake, GNOME 1.2, ...


Issue 1: October 2000

Comment: Keep an Open Mind!
News: StarOffice, Caldera, SuSE 7.0, Kylix
Community: The User Group pages: LUGs worldwide
Community: UKUUG Developers' Conference - London
On Test: RAID Controllers - 15 SCSI controllers in the Linux labs
On Test: AMD vs Intel - 1GHz CPUs compared in the Linux labs
On Test: SGI 230 - SGI 230 Visual workstation under review
On Test: Journaling filesystems - Four journaling systems tested and explained
On Test: Mandrake 7.1 - The latest Mandrake Linux distribution reviewed
Feature: Mainframe - Linux on the IBM S/390 mainframe
Feature: Virtual worlds - Creating and rendering 3D models
Business: Mailbox Internet Ltd - UK ISP with 100% Linux platform case study
Business: Media Industry - How the Internet will change the media industry
Business: Windows/Linux integration - Windows applications on Linux
Know how: RAID - Redundant Array of Inexpensive Disks
Know how: SoftRAID - Configuring SoftRAID solutions
Know how: BIND - DNS server configuration
Beginners: File Control - File permissions explained
Beginners: FAQ - Where to get free support and help for Linux
Beginners: FAQ - MIME content types explained
Software: Kruiser - File manager for KDE
Beginners: FAQ - How to get an application to open a program file type
Beginners: Command Line - tar & gzip
Beginners: FAQ - Adding items to the KDE Menu




On Test: GNOME - GNOME 1.2 Desktop
Software: GNOME applications - A tour of some new GNOME applications
Software: Desktops - EPIwm - A small and fast alternative Window Manager
Beginners: Software Installation - Package installation made easy
Software: Internet Tools - Four download managers reviewed
Software: XEphem - Astronomical ephemeris software under Linux
Software: Gnutella - Distributed file sharing
Cover CD: Installing Mandrake 7.1
Project: Setting up Freesco - Turning a computer into an Internet gateway
Programming: Crystal Space - 3D game creation
Programming: Making wizards for GNOME applications
Community: Brave GNU World
Cover CD: Mandrake 7.1




Keep an Open Mind!

General Contacts
General Enquiries: 01625 855169
Fax: 01625 855071
Subscriptions / E-mail Enquiries / Letters

Welcome to this, the first issue of Linux Magazine UK. The magazine aims to address the needs of the entire Linux community, professionals and enthusiasts alike, as well as appealing to those who simply want to understand just what the Linux fuss is all about.

Whoever it was who said "from tiny acorns, mighty oak trees grow" might have been thinking about Linux. Who would have thought that a hobby project started by a Finnish university student in 1991 would grow, in less than ten years, into a worldwide industry worth billions of pounds? Only this month, worldwide IT research company IDC reported yet further growth of the Linux phenomenon.

Of course, the acorn wouldn't have flourished without the right climate and nurturing. For that we have to thank Richard Stallman's vision of "free software", which resulted in the GNU Software Project. Belief in Stallman's ideal was enough to motivate thousands of programmers to spend millions of hours developing not just this free operating system but hundreds of free utilities and applications to run on it. That's why when we talk about Linux we should really be saying "GNU/Linux."

But Linux isn't the only important outcome of the GNU Project. It has started another ball rolling: the move to open source. Open source software and GNU software aren't the same thing, but they share one fundamental principle: the program source code - the text that programmers write in order to create software - must be free for all to see and modify. An ever-increasing number of companies are starting to realise the benefits of open source, including Sun Microsystems, which will be releasing the entire code of its StarOffice suite very soon.

So what are the benefits of open source? For a software company, one benefit is the potential for thousands of individual programmers to test and add features to its programs. Another is that customers increasingly want openness. They are starting to see the value of open standards and the disadvantages of closed ones.

For you, the user, the benefit of open source is that you can once again take control. You aren't forced to accept what a closed-source software company gives you. You aren't locked into expensive upgrade cycles just to get rid of bugs that shouldn't have been there in the first place. If you find a bug you can fix it. If you need a new feature you can add it. Or, if you don't have the necessary expertise, you can find someone who can. Try doing that without the source code! Open source development directly addresses the problems that users of closed source software experience every day. That's why more and more people are moving to Linux and open source.

Linux is now a mighty oak, strong enough to support business-critical applications. At Linux Magazine we campaign for the Linux cause in every corner of the world - both physical and virtual - and guarantee to bring Linux professionals and enthusiasts an unbeatable mixture of high quality editorial and informed opinion. We're at the dawn of a new era in computing - one where the users are back in control!

See you next month.

Editor Julian Moss

Staff Writers

Keir Thomas, Dave Cusick, Martyn Carroll, Jorrit Tyberghein


Richard Ibbotson, Adrian Clark, Jono Bacon, Rob Morrison

International Editors

Harald Milz, Hans-Georg Esser, Bernhard Kuhn

International Contributors

Heike Jurzik, Jo Moskalewski, Andreas Huchler, Andreas Grytz, Stefan Lichtenauer, Thorsten Fischer, Björn Ganslandt, Fritz Reichmann, Manuel Lautenschlager


Renate Ettenberger, Tym Leckey


Hubertus Vogg

Operations Manager

Pam Shore


01625 855169
Neil Dolan, Sales Manager
Linda Henry, Sales Manager

Publishing Director: Robin Wilkinson

Subscriptions and back issues: 01625 850565
Annual Subscription Rate (12 issues): UK £44.91. Europe (inc. Eire): £73.88. Rest of the World: £85.52. Back issues (UK): £6.25


Distributors: COMAG, Tavistock Road, West Drayton, Middlesex, England UB7 7QE

Linux Magazine is published monthly by Linux New Media UK Ltd, Europa House, Adlington Park, Macclesfield, Cheshire, England, SK10 4NP. Company registered in England. Copyright and Trademarks (c) 2000 Linux New Media UK Ltd. No material may be reproduced in any form whatsoever, in whole or in part, without the written permission of the publishers. It is assumed that all correspondence sent (for example, letters, e-mails, faxes, photographs, articles, drawings) is supplied for publication or license to third parties on a non-exclusive worldwide basis by Linux New Media unless otherwise stated in writing. ISSN 1471-5678

Julian Moss, Editor

We pride ourselves on the origins of our magazine, which go back to the very start of the Linux revolution. We have been involved with the Linux market for six years now through our sister European-based titles Linux Magazine (aimed at professionals) and Linux User (for hobbyists), and through seminars, conferences and events. By purchasing this magazine you are joining an information network that enjoys the benefit of the knowledge and technical expertise of all the major Linux professionals and enthusiasts. No other UK Linux magazine can offer that pedigree or such close links with the Linux Community. We're not simply reporting on the Linux and open source movement - we're part of it.

Linux is a trademark of Linus Torvalds. Linux New Media UK Ltd is a division of Linux New Media AG, Munich, Germany.

Disclaimer: Whilst every care has been taken in the content of the magazine, the publishers cannot be held responsible for the accuracy of the information contained within it or any consequences arising from the use of it. The use of the CD provided with the magazine, or any material provided on it, is at your own risk. The CD is comprehensively checked for any viruses or errors before reproduction.

Technical Support: Readers can write in with technical queries which may be answered in the magazine in a future issue; however, Linux Magazine is unable to provide technical help or support services directly, either written or verbal.

10 · 2000 LINUX MAGAZINE 3



NEWSLETTER

StarOffice going open source

StarOffice, the cross-platform integrated office suite acquired by Sun Microsystems from developer Star Division last year, will be released to the open source community under the GNU General Public License on October 13th. News of the release, which will be the single largest open-source software contribution in GPL history, has been widely welcomed. It is hoped that the move will help establish the software as the leading office productivity suite on all major platforms. StarOffice already runs under Sun’s Solaris, Linux and Microsoft Windows, with a MacOS version due later this year. The prospect of an open office suite based on open standards and running on all major platforms might be enough to encourage more users to switch from Microsoft’s closed, proprietary rival.

Central to the open source project will be a new foundation called OpenOffice.org. Modelled on the Apache Foundation, OpenOffice.org will be managed by Collab.Net, which was formed a year ago to provide services for open, collaborative software development. OpenOffice.org will host the StarOffice source code as well as specifying XML file formats for documents and language-independent APIs. It will also provide Microsoft Office file filters so that other open-source developers can incorporate compatibility with Microsoft’s widely used file formats into their own programs. Access to the source code will be by means of the Collab.Net Sourcecast platform which provides the infrastructure and open source

tools developers will need. The source code base for OpenOffice.org will be that of StarOffice 6, the next version of the suite, which is currently in development. This will use an architecture of separate applications and componentised services, rather than the integrated architecture it has today. Sun will continue to develop and market new versions of the StarOffice suite based on the OpenOffice.org code: StarOffice 6.0, when released, will be a branded version of the OpenOffice.org reference implementation.

Copyright

Sun will retain copyright to the source code, but Sun’s ongoing development work will be done as part of OpenOffice.org. The intention is that all code contributions to the project, including Sun’s contribution of the StarOffice source code, will be made available under the Sun Industry Standards Source License (SISSL) in addition to the GPL. This dual-licensing approach is designed to allow all organisations and individuals to use the source code freely and openly as they choose. An important requirement of the SISSL license is that it requires compatibility with the GPL reference implementation of the source code, including APIs and file formats. Copies of both the GPL and the SISSL licenses are available from the project’s website. Leading open source exponents have

StarOffice: soon to be open source

voiced their support for the project. Tim O’Reilly, founder and CEO of O’Reilly and Associates, said: “Microsoft’s lock-in on its Office file formats is arguably at least as important to its monopoly position as control of the operating system itself. The availability of StarOffice under the GPL will give Linux a boost on the desktop. But more importantly, the wide availability of the StarOffice suite’s code for reading and writing Microsoft Office formats will allow other open source projects to provide compatible functionality as well. Open data is the other side of the open source coin.”

Miguel de Icaza, president and founder of Helix Code Inc., said: “When I started GNOME I had two goals: a 100 per cent free, easy-to-use desktop environment and a component architecture for GNU/Linux. With Helix Code and Sun working together, we can finally see this vision realised. The GNOME community is now investigating the very best way to integrate these.”

Info Sun: 01252 399570 developers/openoffice/ ■

InterBase goes open source too

As announced earlier this year, Inprise has now open sourced the code for its cross-platform SQL relational database management system InterBase 6.0. However, plans to sell the InterBase product line to a start-up venture led by Ann Harrison have been terminated. “After careful consideration, we determined that it was not in the best interest of Inprise/Borland stockholders for us to sell InterBase to a start-up entity that would initially be dependent on us for funding,” said interim president and CEO of Inprise/Borland Dale Fuller. The move has caused little reaction from the developer community, some of

whom had felt that the open sourcing of InterBase was intended primarily to generate publicity for a product that had failed to make much impact on the market. InterBase 6.0 is the latest version of the database and introduces several new features including long integer, date, time and datetime data types, extended compliance with SQL92, an open interface for defining new national character sets, plus performance and security enhancements. It runs under Linux, Solaris and Windows. InterBase 6.0 has been released under a variant of the Mozilla Public License (MPL) version 1.1. Developers using InterBase under this license will be able to modify the code or develop applications without being required to open source them. Both source and binary versions of InterBase are available for download from the Inprise/Borland website.

Info Inprise/Borland: 0118 932 0022 InterBase: ■



IBM to sponsor European Linux development

Computing giant IBM plans to invest more than $200 million over four years in a series of initiatives aimed at speeding up the migration of customers’ and key European independent software vendors’ (ISVs) systems to Linux. The initiatives will include setting up Linux development centres across Europe, alliances with other companies focussing on Linux development and the deployment of around 600 Linux consultants, hardware and software specialists and services professionals.

IBM has already opened development centres at Greenock and Hursley in the UK, as well as Paris and Montpellier in France, Boeblingen in Germany, Warsaw in Poland and Budapest in Hungary. IBM is providing technical specialists as well as servers, storage systems and software. The centres will support Linux development on IBM platforms from Netfinity servers right up to IBM S/390 mainframes. They will be open to all ISVs interested in developing for Linux.

The development centres will offer a range of facilities including training and technical support services. Testing facilities will include on-site or remote access to IBM System 390, RS/6000, NUMA, AS/400 and Netfinity systems and IBM software. Performance measurement tools will also be provided to test how software performs under real-world workloads. Key industry partners such as Intel and Logix are working with IBM in these development centres.

“As customers and partners recognise the growing importance of Linux as a key e-business operating system there will be a tremendous demand for Linux-ready applications that can meet the workload needs of today’s e-business environment,” said Mike Lawrie, General Manager, IBM EMEA. “With these centres and these investments IBM will dramatically speed up this process and start getting applications on Linux-ready servers into the marketplace aggressively during the second half of 2000.”

Mike Lawrie, General Manager, IBM EMEA

Info
IBM: 01256 343000 ■

Caldera acquires SCO

Caldera’s announcement in early August that it was to acquire the Server Software Division and the Professional Services Division of The Santa Cruz Operation, Inc. (SCO) caused some surprise. Caldera plans to operate the Professional Services Division as a separate business unit of Caldera to provide services that meet the Internet and e-business needs of customers. The new company will offer what it calls an “Open Internet Platform” combining both “the low-cost, developer-accepted Linux operating system” and the “robust scalability” of UNIX services and solutions. Caldera Systems will form a holding company, Caldera Inc., to acquire the SCO assets. In return SCO will receive 28% of Caldera, Inc.

“This acquisition is an industry-changing event that puts Caldera front and center as the answer to the enterprise question,” said Ransom Love, President and CEO of Caldera Systems, Inc. “Caldera will further broaden and validate both the Linux and UNIX industries and communities by providing open access to its unified Linux and UNIX technologies, and by offering support, training and professional services to customers worldwide.”

David McCrabb, president of the SCO Server Software Division, said: “The new company will be a very strong entity that we believe will compete successfully on a worldwide basis. Caldera, Inc. will incorporate a worldwide network of sales and support offices, a strong commercial UNIX system business and a rapidly growing open source company. This combination will be a force to contend with in the worldwide market for Internet solutions on high volume platforms.”

Industry watchers are still pondering the significance of the acquisition. It’s unlikely to lead to wholesale open sourcing of SCO’s UnixWare source code. However, some UnixWare technology (such as NonStop clustering) may find its way into Caldera OpenLinux.

Caldera Systems is to acquire SCO.

Info
Caldera Systems: http://www.caldera SCO: ■

SuSE releases Linux for IBM S/390

SuSE Linux has released the first distribution of Linux for the IBM S/390 platform. A preliminary version was made available in July and includes the Linux operating system together with more than 400 applications including Apache, Samba, Sendmail and BIND, various development tools and a selection of other applications. By compiling them from source code, most other Linux applications will of course also run on the IBM S/390.

The availability of Linux on the IBM S/390 platform offers the benefits of reliability, security, scalability and high availability for businesses running mission-critical applications under Linux. It is also expected to show significant cost savings for ISPs running server farms. The use of VM/ESA makes it easy to add new Linux guests, so a new web server can be created and up and running within minutes. The IBM S/390 architecture allows IBM operating systems such as OS/390, VM/ESA and VSE/ESA to run independently of each other, allowing existing IBM mainframe users to use SuSE Linux to run web applications like Apache and Sendmail while at the same time maintaining their existing environment.

This new release means that SuSE Linux now runs on all IBM products from ThinkPad to S/390 and establishes SuSE Linux as a major player in the enterprise server market.

Info
SuSE Linux: 0208 387 4088 ■



Compaq to sponsor open handheld development

Compaq has announced the Open Handheld Programme, a new initiative designed to stimulate innovation and research on handheld devices. Derived from Compaq Corporate Research’s “Itsy” pocket computer project, the Open Handheld Programme will enable developers and researchers looking to explore applications for handheld computing to experiment with Compaq’s iPAQ handheld by gaining access to the Linux-based source code for the device.

Compaq is making available a port of the Linux operating system to the iPAQ handheld. This exploits the iPAQ handheld’s flash memory, which allows the unit’s operating system to be easily upgraded. In addition to the core Linux operating system, Compaq is providing other software components including drivers, X-Terminal emulation, handwriting recognition, touch screen and multimedia support. Compaq will also provide hardware specifications for both the iPAQ handheld unit and its innovative Expansion Pack system. Resources available to researchers and open source developers will include a development expansion pack for the iPAQ handheld computer that allows prototyping of application circuitry. This is in addition to the commercially available Compact Flash and PC Card expansion packs.

You can hold Linux in the palm of your hand

“The Compaq iPAQ Linux port is designed to encourage the development of novel user interfaces, new applications and innovative research projects for the future,” said Bob Iannucci, vice president of corporate research for Compaq. “Through the Open Handheld Programme we hope to unleash the future potential of handheld and wearable computing and spark invention on the Linux platform.”

To encourage innovation and the sharing of information, Compaq is also hosting a website dedicated to open source handheld development. The website provides development tools, code, executable files and links to other sites including support for hardware. This site is designed to be a vendor-neutral repository of shared information for any handheld and wearable computing devices. For more information about the Open Handheld Programme visit the site.

Info Compaq: 0845 2704222 Open Handheld Programme:

Major new distributions on the way

Despite the reluctance of kernel 2.4 to make its appearance, many distribution vendors are preparing major new releases. Corel has announced that the Second Edition of Corel LinuxOS will be previewed at the LinuxWorld show in San Jose, California on August 15, 2000. The new version is claimed to offer new functionality, better compatibility, new features and expanded hardware support. Corel LinuxOS Second Edition will have been available for download since August 15th, and should be available in retail stores during October.

Red Hat has also announced “Pinstripe”, a beta release of the long-awaited Red Hat 7.0, which could be downloaded as we went to press. Core system improvements include kernel 2.2.16, glibc 2.1.91, XFree86 4.0.1 and GNOME 1.2. USB support for mice and keyboards is now included, and there is expanded hardware accelerated 3D support. New packages to be found in the distribution include gphoto, MySQL, AbiWord and dia.

Info Corel: Red Hat

■ ■

Red Hat gets embedded with Ericsson

Red Hat and Ericsson have announced an initiative to jointly develop products using industry standards such as Java and open source technologies like Red Hat Embedded Linux. The first product from this partnership will be the Ericsson Cordless Screen Phone, expected to be commercially available by the end of the year. As part of this initiative Ericsson will work with Red Hat to establish open technologies such as the Embedded Red Hat GNU development tools, which will be made freely available to developers.

Both Red Hat and Ericsson believe there is a huge market of open source and third party developers ready and waiting to design, port and develop new applications for innovative Linux-based products such as those being developed by Ericsson Home Communications. Ericsson and Red Hat are also jointly establishing new mechanisms for the creation of web content and certification of applications for these Internet appliances. For operators this will mean new Linux-based infrastructure technologies and services that will provide easy-to-manage upgrades across the Internet. For consumers, this will mean a range of easy-to-use and fun Home Communications appliances. For Linux fans it will mean one more step along the road to world domination and Linux everywhere!

First fruit of the partnership between Ericsson and Red Hat

Info Red Hat: Ericsson: ■



SuSE Linux 7.0 released

The long-awaited SuSE Linux 7.0 has been announced, and should be available by the time you read this. The product will be sold in two versions. The Professional version is aimed at experienced users and those wishing to use Linux as a server operating system. The Personal version is tailored for new Linux users and desktop users.

SuSE Linux 7.0 Professional provides IT professionals and advanced home users with a comprehensive collection of over 1500 of the latest Linux tools and software packages. The box contains six CD-ROMs and one DVD; there are additional software packages on the DVD which would not fit on to the CD-ROMs. The distribution includes software for implementing Internet and intranet solutions including web, proxy, mail, news, print and file servers. It is also suited to large database server applications thanks to its enhanced raw device support and the ability to address up to 4GB of main memory. Support for fully automated installation across a network is provided by means of the new ALICE tool (Automatic Linux Installation and Configuration Environment). Installation support for SuSE Linux 7.0 Professional includes access to the SuSE Hotline and 90 days of support. The price is £41.70 + VAT.

SuSE Linux 7.0 Personal targets newcomers to Linux and home users. The package includes three CD-ROMs which contain the core operating system and multiple applications including games, multimedia, imaging and Internet software. It also includes StarOffice 5.2. Installation is largely automatic using the YaST2 setup tool, which has extended its hardware recognition to include devices such as sound cards and printers. Internet access configuration using either a modem or an ISDN TA is now straightforward. Three easy-to-read manuals plus 60 days of installation support should enable any user to get SuSE Linux up and running. The price is £24.70 + VAT.

SuSE is also releasing version 7.0 Professional Update for experienced Linux users. The software is the same as for SuSE Linux 7.0 Professional, but instead of the full SuSE manual there is only brief information on the most important enhancements.

Info: SuSE Linux: 0208 387 4088 ■

New Teamware Office now shipping

Teamware has released Teamware Office 5.3 for Linux. It is a ready-to-run groupware product with a customisable user interface and state-of-the-art communications features. Teamware claims that the product is easy to install and set up. It is supplied in RPM format ready for installation on Caldera OpenLinux, Red Hat Linux, SuSE Linux and TurboLinux systems. No Windows clients are needed. All the functions of the product can be carried out using WebService, which permits access using a web browser such as Netscape. This includes the ability to create groups, libraries and discussion forums. User mailboxes can also be accessed using standard mail clients and the POP3 or IMAP4 protocols. The user interface is customisable using HTML templates and there are nine language variations to choose from.

The product provides modules for electronic mail (Teamware Mail), discussion groups (Teamware Forum), document storage and retrieval (Teamware Library), and meeting scheduling, resource allocation and time management (Teamware Calendar). Automatic meeting reminders can be generated and documents can be attached to appointments. The product can be purchased online and is available as a single server package for up to 1,000 user accounts at a cost of $1,000. Teamware Office 5.3 for Linux could be the ideal intranet solution for small or medium sized organisations.

Teamware Forum provides a group discussion board accessible using a web browser

Info
Teamware: 01344 472068 ■



Kylix tops the bill at Borland conference

Rapid application development comes to Linux

The sixth annual UK Inprise/Borland Developers’ Conference will be held at the Royal Lancaster Hotel in London on September 24th, 25th and 26th. The conference programme contains much that will be of interest to Linux developers. A pre-conference training session held on the Sunday will include a first look at the long-awaited Delphi for Linux – code-named Kylix – which is expected to be launched at the conference. Other conference topics will cover moving to Kylix and a session on the new cross-platform object-oriented component library called CLX that the new development tool will use. The conference will also feature sessions on

JBuilder, Borland’s Java-based enterprise development tool. Version 3.5 of JBuilder, which has now been available for several months, enables the development of applications which run without modification or recompilation on Linux, Solaris and Windows platforms. Other sessions will cover InterBase, Borland’s database server, which is available for Linux and other platforms. Inprise/Borland, which recently announced a return to profitability after years of losses, is rapidly becoming the major player in the Linux development tools market. Its flagship product Delphi has long been regarded as the best rapid application development (RAD) tool for the Windows platform. The launch of a Linux version will be a milestone for developers wishing to deploy Linux in both back and front office applications. According to Borland, Delphi for Linux will support Apache Server applications and facilitate a migration path to Apache from other web servers (including Microsoft Internet Information Server and Netscape servers) for existing applications developed using Delphi for Windows, C++Builder and Microsoft Visual Basic.

Info Conference: 0208 789 2250 Borland: 0118 932 0022 ■

New compact RAID enclosure from VA Linux

The VA Linux 9008: Over half a terabyte of storage in a single 2U rack-mount unit

VA Linux Systems has launched what it claims is the first 8-drive 2U Ultra2 SCSI storage enclosure system. The VA Linux 9008 2U storage system features eight hot-swappable disk drives in an ultra-dense 3.5in. high enclosure and can provide up to 584GB – over half a terabyte – of storage capacity for customers in Internet co-location environments. Designed for high reliability and availability, the VA Linux 9008 storage system has both redundant power supplies and

redundant cooling modules as standard features. The eight disk drives are hot-swappable, and the system uses a cableless design that allows access to fans, power supplies and drives without removing it from a rack. Like the VA Linux 2200 series servers, the 9008 storage system has LEDs that enable enclosure monitoring of temperature, drives, fans and power. The 9008 storage enclosure currently supports hard disk drives of up to 73GB with speeds of up to 10,000 rpm and is designed to support next-generation hard drives when they become available. The drive carrier design is the same as that of the VA Linux 2200 series server, so the same spare hard drives can be used in both. Both 1” and 1.6” drives are supported. The VA Linux 9008 storage system extends current storage options for the VA Linux 2200 series and 3500 servers and will be a component of larger network-attached storage (NAS) solutions planned for release later this year. ■

London to host first European Apache Conference

The Apache Software Foundation will hold its first European conference, ApacheCon 2000 Europe, at the Olympia Conference Centre, London on October 23rd - 25th, 2000. The conference is being sponsored by Covalent Technologies, IBM and Sun Microsystems. Billed as Europe’s first open-source software conference for developers, the event is expected to attract at least 400 developers. Three concurrent lecture streams will run throughout the three-day programme. Topics to be covered will include writing Apache modules, using Python, Tcl and Perl, Java application servers, PHP, SSL, XML and migrating Netscape servers to Apache. Sessions will be presented by leading experts from around the world. An associated trade show will be open on 24th and 25th October.

Info Conference: ■


24/7 support now available from SuSE SuSE Linux has restructured its technical support services in order to provide 24 hour a day, 365 days a year support for its enterprise customers. It will now be possible to choose from two service plans called Basic and Productive. The Basic service is aimed at Linux novices and users who are migrating from other operating systems. Customers receive 60 days of installation support by telephone, fax or email when they purchase a packaged SuSE Linux distribution. If support is needed which goes beyond installation, customers can then purchase Callpacks. A Callpack entitles the user to up to 20 minutes of support time. The Productive service extends the existing SuSE technical support offering to meet the needs of professional Linux users. The service provided can be tailored to individual requirements. Options range from support during office hours only through to the 24 hour a day, 365 days a year availability of specifically assigned SuSE support engineers who are familiar with the customer’s system. The Productive service also includes support of specific applications and the Linux system itself at the source code level. Announcing the new support programme, Rudiger Berlich, Managing Director of SuSE in the UK said, “24/7 support is a crucial prerequisite for the use of Linux in business-critical applications. With 100 support engineers worldwide we have an unrivalled infrastructure to offer support services at this high level, strengthened by our continuing training programmes and world class developer laboratories.”

Info SuSE Linux: 0208 387 4088 ■


New entry-level server from VA Linux VA Linux Systems has introduced the VA Linux 1150 server, a customisable, low-cost entry-level 1U (45mm, 1.75in. high) Linux Web server designed for Internet data centres. It uses a single Intel Pentium III processor; options include capacity for up to 2GB of memory and one or two front-accessible SCSI hard drives. Standard features include dual Ethernet network interface cards and a slim CD-ROM and floppy drive. Like all VA Linux systems, the 1150 server is supplied configured for optimal Linux compatibility and includes VA’s Total Linux Coverage (TLC) support and service. Pricing starts at around £1,700 for a basic configuration. The new server complements VA Linux Systems’ 1000 series of high-performance 1U servers to create what the company says is the industry’s broadest offering of ultra-dense rack-mount Linux servers for Internet infrastructure. “Leading ASPs, e-businesses and Internet hosting providers today are demanding customised Linux server solutions as they build out their Internet infrastructure to support the rapid growth that comes with success,” said Robert Patrick, director of product marketing for VA Linux Systems. “The VA Linux 1150 server offers a cost-effective solution that can reliably scale, backed by expertise and services that move at the speed of Open Source. All of our product designs take full advantage of the flexibility of the Linux operating system, enabling us to combine build-to-order hardware and software in a way that no other vendor can.”

Info VA Linux: 0870 2412813 ■

Cyclades launches entry-level router Cyclades has launched a new entry-level Multiprotocol Access Router aimed at small to medium sized offices and Internet Service Providers (ISPs). The Cyclades PR1000 offers both Ethernet LAN and serial WAN routing capabilities for LAN-to-LAN and LAN-to-Internet applications. Designed to be easy to install and use, the router has a menu-based configuration. Monitoring and management tasks can be carried out using a console port on the device, or from anywhere on the network using SNMP or Telnet. The router software is held in flash memory and can be upgraded over the LAN or WAN, helping to cut the cost of maintenance. The Ethernet 10/100Mbps LAN interface and V.35/RS-232/X.21 serial WAN interfaces support a variety of network protocols and ensure network interoperability. Features such as VPN, NAT, packet and service filtering, password control, RADIUS/TACACS, PAP and CHAP authentication make the Cyclades PR1000 secure and protect the network from unauthorised access. Traffic shaping, dial on demand and dedicated WAN support allow flexible and reliable remote networking. There are two models: the PR1000/RSV (V.35 or RS-232) and the PR1000/X21 (X.21). Each has a list price of £430.

The new router from Cyclades offers easy use and low maintenance costs

Info MPT Data Products: 01724 280814 Cyclades: ■

The VA Linux 1150 server has a compact size and low cost



System minder arrives in UK A new tool for open-source software end users has recently been launched in the UK by Massachusetts-based Acrylis, Inc. It will be sold by Linux reseller LinuxIT. Whatif is a subscription-based service aimed at making the management of open source software simpler and more reliable. It profiles the software running on subscribers’ servers and then provides the system administrator with information about relevant updates and new releases. “WhatifLinux enables companies to take full advantage of the flexibility and continuous evolution of open-source software, whilst removing the associated headaches for sysadmins,” claims Reg Broughton, president of Acrylis. The service is based on a three-tier architecture that includes Intelligent Agents, a console and a Knowledge Base. Intelligent Agents are small Java-based applications that run on each of the systems being monitored. The agents use policies that are set up for each system to determine when defined thresholds are crossed and whether corrective or diagnostic action is required. The agents constantly check the Knowledge Base for news and updates. The console is a repository for status information collected from the agents and is accessed using a web browser. Designed specifically for system administrators and support staff, it warns them of matters requiring attention and allows them to take corrective action such as installing or uninstalling software packages, patches or updates. Administrators may also invoke decision support analysis using the “What if” feature, which analyses software dependencies in order to determine the consequences of installing or uninstalling patches and other software. This allows the administrator to be confident that a patch or new program will work on the server before installing it. The Knowledge Base contains information about software dependencies and conflicts, software alerts and problems. Software components tracked by the Knowledge Base include system software, utility software and application software. Information on the packages to be monitored is collected from the agents on each customer’s system. This enables Whatif to send alert information directly to customers who are using the affected software.


Red Hat to provide support for Dell customers Red Hat has extended its strategic alliance with Dell Computer Corporation to provide bundled technical support for buyers of Dell computers. Dell customers will now be able to purchase a range of Red Hat Services direct from Dell, including annual per-server support. Customers with large numbers of servers will also be able to purchase a dedicated technical support account. The annual support offering will include unlimited access via telephone or the web for installation, basic and advanced configuration and systems administration, plus one year of software updates via priority ftp access. “Dell and Red Hat recognise the importance of being able to offer One Source Alliance customers the highest quality Linux support,” said Ian Cole, Professional Services Director, Red Hat Europe. “The Annual Per-Server support is our most flexible, scalable programme for keeping systems supported and secure. The solution is designed to expand as an organisation grows and is ideal for companies across the spectrum from small businesses to large enterprises. It also provides customers with enterprise support at a fixed price per server.”


LinuxIT 0117 905 8718 Acrylis:

Red Hat: 01483 563170 Dell: 0870 1524699

TurboLinux provides super-computing solution TurboLinux announced that its innovative EnFuzion software has been deployed at leading financial company J.P. Morgan & Co. Incorporated to power the firm’s worldwide risk management system for fixed income derivatives. TurboLinux EnFuzion is software that runs on Linux, all major Unix versions and Windows NT. It turns a computer network into a high-speed, high-availability, fault-tolerant supercomputer. It integrates into an existing IT infrastructure and uses the idle time on workstations and servers to boost computation speeds by up to 100 times. The software works with existing customer applications without requiring modification. It provides node failover and automated job scheduling that takes advantage of idle CPU cycles on cluster nodes. Evaluated at J.P. Morgan’s London office for the past year, EnFuzion is now in production use. The firm is now increasing its deployment to more than 1,000 cluster nodes, including an installation at J.P. Morgan’s New York corporate headquarters. “As a market leader in fixed income swaps and derivatives, our firm’s demands for computing power in this field are virtually limitless,” said Michael Liberman, head of Global Swaps and Derivatives Technology at J.P. Morgan. “Deploying EnFuzion has allowed us to harness the power of hundreds of powerful desktop workstations during times when they would otherwise be sitting idle. We continue to move more of our critical calculations and processes to this architecture.”

Info TurboLinux: 01752 313190 ■



A report on the UKUUG GNU/Linux Developers Conference, held in London on 7th - 9th July


The UK Unix User Group held its annual GNU/Linux Developers Conference in London. Richard Ibbotson from the Sheffield Linux User Group attended and sent us this report.

Photos: Richard Ibbotson

The Linux 2000 UK GNU/Linux Developers Conference took place recently at Hammersmith in London. It was organised by the United Kingdom Unix Users’ Group (UKUUG) and sponsored by SuSE Ltd and Sistina Software. The VA Linux team kicked off proceedings with a talk about Source Forge. Tony “fusion 94” Guntharp, one of the original developers, opened with an excellent presentation about Source Forge projects. He explained that developers must find a way to be more organised and more effective in their management of open source software. Tony leads and manages the Source Forge team. He was followed by Sebastian Rahtz from Oxford University with an exposition on XML and the documentation associated with it. Sebastian also covered how TEX and LaTEX could be made to work with XML. When he asked “what use could this be to anyone?” the delegates responded with the expected burst of laughter.

Alasdair Kergon of the UKUUG, organiser of the conference

Things began to liven up with the talk by Miguel de Icaza, CTO and Chairman of Helixcode, about the GNOME project. The development of Unix has stagnated over the years and GNOME is one of the projects that will change this. Many more people are now using Unix because of the GNOME and KDE projects. Miguel gave a very thorough explanation of the GNOME project and even showed how to produce the NT 4 blue screen of death using GTK code. Rik van Riel, who writes kernel code for Conectiva in Brazil, gave delegates an interesting insight into his work on memory management and the changes that he thinks will take place. He also discussed the VM changes in kernel 2.4.

Enlightenment The conference was treated to a rare appearance by Carsten Haitzler, or “The Rasterman” as he is better known. Carsten is a senior software engineer at VA Linux Systems in Sunnyvale, California and the programmer who developed Enlightenment, Electric Eyes, Gtk theme engines, Imlib and much more. His knowledge of X programming is extensive and his presentation was on the topic of performance programming. It began with Enlightenment shown in all its glory on the wide screen. Later, he went into the use of functions and other programming methods used in the X Window System. Alan Cox gave chapter and verse on the latest developments in the 2.4 kernel, which will be released soon. Delegates were asked what they wanted to hear about and almost everyone wanted to hear about the 2.4 code. USB, PCMCIA drivers and security fixes: Alan covered them all with his customary detailed insight. Saturday’s full programme started with Red Hat’s Stephen Tweedie and a more than competent talk about clustering. Stephen worked at DEC for two years on VMS kernels for high availability clustered filesystems and his presentation was full of authority. Then Stephen Lord, a senior filesystem developer for SGI who has followed GNU/Linux development since 1993, presented the XFS journalling filesystem, which caused a great deal of interest. XFS would seem to raise more issues than you might think, and is a topic developers would do well to find out more about. Stephen was followed by Michael Meeks, whose business card just says “hacker”. As he demonstrated, he’s a bit more than that: in fact he is definitely one of the more intellectual and influential open source programmers around just now. Michael gave an inspired talk about the GNOME component model. Adrian Cox, the man who brought us transputers, gave a talk on the sort of thing that changes entire civilisations. His current project is a Beowulf in a box. His main problem, he explained, is getting hold of various types of hardware. However, most of the project is now finished. After his talk he took the lid off his demo machine and invited delegates to look inside. Next, John Edwards from VMware took the stage; he is a fluent advocate and demonstrator of VMware. He admitted that in its earlier incarnations there were problems, but version 2.0 is much improved. John demonstrated how VMware can run Windows NT 4 and all the other versions of Microsoft Windows very well under GNU/Linux. Even IBM OS/2 and different distributions of Linux can be run under GNU/Linux. Among the many users are computer virus labs, which test viruses with the aid of VMware. Even if Windows is wiped out by the virus, VMware and the GNU/Linux host system continue to run.

Security Owen le Blanc from Manchester Computing Centre, the man famous for having written the GNU/Linux fdisk program, discussed Coda. The security aspects of this aren’t easy to grasp, but Owen’s talk was well presented with clear diagrams. This was followed by a talk by Steve Whitehouse of ChygGwyn Ltd on GFS, a journaled, fault-tolerant clustered file system that gives high performance and great stability. Heinz Mauelshagen gave a presentation about a Logical Volume Manager for Linux. This is a subsystem for online disk storage management: a feature of great value to enterprise computing users that will help GNU/Linux become more widely established in large organisations. It is implemented using an additional layer between the peripherals and the I/O interface in the kernel. Heinz is currently working on version 0.9 of his logical volume manager. The session by Wichert Akkerman, an MSc computer science student at Leiden University who works part-time for Cistron as a GNU/Linux developer, provoked some lively debate. In January 1999


Adrian Cox (left of centre) demonstrates his Beowulf in a box

he succeeded Ian Jackson as Debian project leader, and his talk concerned the future of package management. This is a controversial subject in the GNU/Linux world. The proponents of the Red Hat .rpm package think that nothing else exists, while Debian users see the .deb package as the only way forward. Debian packages give more information, and simpler, easier to understand error messages when something goes wrong. Wichert gave a good account of these issues and went on to discuss the possibility of a single package that could be used with both Red Hat and Debian type systems. A talk by Linda Walsh, who works for the Trust Technology group at SGI, on the topic of GNU/Linux security policy left delegates in no doubt that GNU/Linux needs a security policy which defines the allowed methods of access by processes to various objects in the system. Sunday’s programme began with Hans Reiser talking about ReiserFS. His explanation of the Reiser filesystem was a masterpiece from beginning to end. He explained that “this is just a very small step in the right direction”. He thinks that everyone should be using systems that are fail-safe. ReiserFS is a journaling filesystem that uses classical balanced tree algorithms. Sponsors include SuSE and several other organisations. Stephen Tweedie explained that the ext3 file system is basically ext2 with a few bits added on. There isn’t that much information circulating about ext3 right now, so this talk generated many questions. The final talk, by Luke Leighton, who works for Linuxcare, was “Samba: the Next Generation”. The present round of Samba development is looking into integration with MS Windows 2000. By Luke’s account it’s probably best to sit back and wait for this. All who attended agreed that the future for open source software has never been brighter. ■

Miguel de Icaza gives his talk on GNOME

Wichert Akkerman’s session on package managers provoked lively debate

10 · 2000 LINUX MAGAZINE 17



RAID controllers under the microscope

DOING IT THE HARD WAY

Linux supports a wide range of SCSI RAID controllers. Linux Magazine benchtested ten examples, together with two top-class servers. Bernhard Kuhn delivers the verdict.

Fig. 1: IBM ServeRAID

There are a great many RAID-compatible host adapters for Linux. For this test we concentrated solely on current SCSI-3 RAID controllers for PC servers. Besides pure performance, handling was of particular interest, as were the monitoring tools supplied. However, it often turned out that what was true of one controller could be applied to others in the same family. For the testing of the ServeRAID controller from IBM and the competing products from Compaq, our hardware lab was also supplied with high-powered servers from these same manufacturers. Thanks to this, we took the opportunity to have a look at these as well. Any moderately recent Linux distribution should recognise the RAID controllers on installation, provided that the RAID array has previously been prepared using the controller BIOS or support CD as required and the controller BIOS is a sufficiently recent version.



IBM Netfinity 7600 The Netfinity 7600 from IBM comes in a classy black housing that can accommodate up to eight units, which means that at least two grown men are needed to move it about. The device can, however, be completely dismantled within minutes without the aid of any tools (see below). If necessary it can also be dismantled and transported as individual components. The machine we tested had a Xeon 550 processor, 512MB RAM and three 9.1GB hard disks, which leaves ample room for additional expansion. Up to four redundant power supplies provide plenty of current and can obviously be changed while the server is running. As with other manufacturers, and as a general rule, all components marked in bright red can be changed while the server is running. This list of red components includes all the fans in the system, whose accumulated noise level remains, perhaps surprisingly, within limits. However, IBM should have a think about the lack of guide rails for the swappable components. Four of the six 64-bit PCI slots are hot-swap capable. This means that slot-in cards can be exchanged while the computer is running. Unfortunately, Linux isn’t yet able to exploit this useful characteristic – at least, we couldn’t find any information about this at IBM itself or anywhere else in the Linux universe. As is usual in a four-Xeon mainboard machine, voltage regulation modules for the processors are present. They bump up the cost and are perhaps ultimately unnecessary. There are a few manufacturers who include more than the absolute minimum – after all, they want to be able to give customers tailor-made processor upgrade kits, in return for the usual small change, of course. In combination with the ServeRAID controller, hot-swapping the SCA hard disks worked as expected, although they were marked light blue and so strictly should only be swapped in the “cold” condition.
They were this colour because of the standard and otherwise normal Adaptec 7896 controller which is present onboard. Also on the motherboard is a S3 graphics chip, for which, however, we were unable to find a usable X server configuration. You don’t get the usual BIOS menu with the Netfinity. Instead it has to be initialised using one of the Windows bootable support CDs. During the ServeRAID set-up you’re asked about the operating system you’re going to install – of the selection on offer, we were forced to choose “other”. In all, the Netfinity 7600 left a good impression and proved fully compatible as a Linux server. With IBM’s repeated announcements that it is going to align its products better with Linux it shouldn’t be too long before we get PCI hot plugging features too. ■

The Netfinity 7600 can be completely dismantled within minutes.

Fig. 2: Successful hot-swap: the most colourful monitoring tool at present comes from IBM for its ServeRAID controller – but it demands Java.

IBM ServeRAID Commendably, IBM has some web pages dedicated to Linux ServeRAID support and this makes it considerably easier to get started. We tested the RAID controller in its native IBM environment in the IBM server also reviewed here. The BIOS screen of the host adapter only allows, via the set RAID configuration option, one choice: loading the factory default settings. For the initial set-up the IBM ServeRAID Support CD must be used (see Netfinity 7600). With the monitoring tools for Linux, IBM leaves it up to the administrator to choose between command line operation (IPSSEND) and the ServeRAID Manager, which boasts a GUI (see Figure 2). Using the latter you can remotely administer Linux Netfinity boxes that have the ServeRAID adapter installed. The necessary daemon is automatically installed at the same time as part of the RPM package (which runs to 11MB).



Fig. 3: Mylex AcceleRAID 250

Compaq Proliant 6400R The Proliant server comes from the Compaq 6400 series. It seems a case of Compaq in name and compact in nature, as the inner workings of the computer are extremely densely packed but don’t fundamentally restrict the functionality. The only limitation we found is that the drive bay can only be fitted with a maximum of four hard disks. The low height of the device may well force you to buy a secondary hard disk unit. All mechanical components give the impression of being solidly built and are fitted with guide rails if they can be swapped during operation (the hot-swap colour is brown-red). The Proliant we tested had only two spaces for redundant power supplies, though most buyers should find this adequate. All six PCI slots (64 bit) are hot plug capable. A little door in the cover of the housing grants easy access to the slot-in cards. But it’s a shame that hot swapping can only be done on systems running Motorola’s CompactPCI bus and the corresponding Linux kernel patches. Like the Netfinity, the Compaq server has no BIOS menu and is initialised using the tools on the supplied bootable Windows CD. Here, too, Linux does not appear in the list of supported operating systems. Compaq’s Linux support Web pages advised us to enter “UnixWare”, which did the trick. The onboard SCSI controller from Symbios Logic serves the SCA hard disks as usual. However, our test configuration was also equipped with three Compaq RAID controllers which were able during the test to make use of the three 9.1GB media one after another. The compact server makes a splendid platform for web services based on Linux. However, the deafening noise of the fans might make a fleet of servers too loud for many. ■

You could hardly get more compact than this – but in the Proliant 6400R it’s still not a tight fit.


Mylex AcceleRAID 250 At present, Mylex is keeping Linux at arm’s length. Linux drivers are available, but this is not down to Mylex but to Leonard Zubkoff, who created the drivers by means of reverse engineering when the hardware was released. As customer demand increased, Zubkoff was commissioned by Mylex to develop the official software. But on the manufacturer’s web pages Linux is still not mentioned anywhere. Nevertheless, Linux is aware of almost all of the AcceleRAID and extremeRAID families and can therefore cope with them in the usual fashion. The AcceleRAID 250 has, like its big brother, an easy-to-understand BIOS menu with which you can perform the initial RAID set-up. The configuration which is necessary after that for higher RAID levels occurs in the background. The priority to be used for the synchronisation procedure can be selected as required – 50% is the default setting. After installing and booting the operating system, only the proc filesystem is available for monitoring and configuration purposes, which is very unsatisfactory. In an emergency few administrators would have the nerve to delve into the README in order to find the magic solution. But, thanks to SAF-TE (SCSI Accessed Fault-Tolerant Enclosure), this should rarely be necessary: all Mylex controllers obviously recognise defective disks and take them out of the group. After changing a disk during operation (assuming there is an appropriate swap cradle) the controller immediately begins the reconstruction. Thus we have to conclude that Mylex controllers can be used under Linux within certain limits. In the near future an easy monitoring and configuration tool for Linux should be available in the form of Mylex’s own Global Array Manager. In our tests, the AcceleRAID 250 refused to perform in the Netfinity test machine (the SCSI drives could not be found). With an old BIOS from 1998 on the extremeRAID 1100, Linux did not know where to start with either test system.
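Until Global Array Manager arrives, proc-based monitoring means reading and parsing the controller's status text by hand. A minimal sketch of how a script might flag a degraded array, assuming the /proc/rd layout and "Online" status wording of Leonard Zubkoff's DAC960 driver (both are assumptions; the README for your driver version documents the actual format):

```shell
#!/bin/sh
# Hypothetical helper: report whether a DAC960-style status line shows a
# healthy logical drive. The /proc paths and the "Online" keyword are
# assumptions, not documented Mylex behaviour.
raid_ok() {
    case "$1" in
        *Online*) return 0 ;;   # drive reported healthy
        *)        return 1 ;;   # anything else: assume degraded
    esac
}

# On a live system the line would come from the controller's proc file:
#   raid_ok "$(grep '^/dev/rd/c0d0' /proc/rd/c0/current_status)"
raid_ok "/dev/rd/c0d0: RAID-5, Online" && echo "array healthy"
```

A nightly cron entry running such a script and mailing the administrator on failure is about all the automation the proc interface allows.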


Compaq Smart Array 221/3200/4200


Hot swap with SCA

Initialisation of the RAID stack on the Smart Array was child’s play, thanks to the “SmartStart” support CD supplied. Five minutes later the Linux installation could begin. However, monitoring and reconfiguration on the fly are a bit thin: unfortunately /proc/array/ida0 gives no indication of the status of the RAID system. The only Linux monitoring tools, status and gtkstatus, make just one ioctl() query. They could, it must be said, be compiled very easily, but the hoped-for results (see Figure 7) did not appear. Also, after removing a hard disk during operation the report from status was still “Logical Drive OK”. Nevertheless, the hot-swap took place with no problems. During reconstruction the middle

RAID 5 makes a disk stack safe in the event of a crash but without special swap cradles the server still has to be turned off to replace a defective medium. However, the high signal frequencies present on a SCSI-3 bus cause trouble with the cradle to hard disk connector. For this reason, larger server systems provided by big name manufacturers have tacitly started using fast SCSI hard disks with the Single Connection Attachment (SCA) interface. This is an 80-pole Low-Voltage Differential (LVD) connector that includes the voltage supply. The SCSI ID of the disk is also no longer defined using a jumper but is determined automatically from the socket on the SCA backplane in the drive bay.

Normally the SCA connector is found only in OEM systems and module recognition is done optically.

Fig. 4: Compaq Smart Array 221

Fig. 5: Compaq Smart Array 3200

Table 1: Features (manufacturer and product: pre-OS configuration; monitoring/configuration tool and its user interface)

DPT/Adaptec SmartRAID VI Decade: BIOS/floppy disk; Storage Manager Utility (Lesstif GUI and command line)
DPT/Adaptec SmartRAID V Century: BIOS; Storage Manager Utility (Lesstif GUI and command line)
AMI MegaRAID 1400: BIOS; MegaRAID Manager (text UI, S-Lang)
IBM ServeRAID: bootable support CD; ServeRAID Manager (Java GUI or command line)
Mylex AcceleRAID 250: BIOS; /proc filesystem (Global Array Manager in preparation)
IPC Vortex GDT 7538RN: BIOS; ipcmon and icpd (text UI, Ncurses)
IPC Vortex GDT 6538RS: BIOS; ipcmon and icpd (text UI, Ncurses)
Compaq Smart Array 221: bootable support CD; gtkstatus (single ioctl() system call)
Compaq Smart Array 3200: bootable support CD; gtkstatus (single ioctl() system call)
Compaq Smart Array 4200: bootable support CD; gtkstatus (single ioctl() system call)

[The printed table's remaining columns (RAID levels, block device, hardware support, PCI width, internal and external LVD/SE channels, cache size) lost their product attribution in scanning; the recoverable values include RAID level sets 0,1,5,10,50 / 0,1,3,5,10,30,50 / 0,1,5 / 0,1,5,10 / 0,1,4,5,10, PCI widths of 32 and 64 bit, and cache sizes from 4 to 128MB.]



[above] Fig. 6: Compaq Smart Array 4200 [right] Fig. 7: At present there are no more monitoring options from Compaq

LED flashes and so the end of the approximately 30 minute synchronisation process (in the 4200 using RAID 5 with 3 times 9.1 Gb drives) could at least be seen.

AMI MegaRAID 1400 AMI’s web pages provide the Linux driver and monitoring software for its MegaRAID family. They’re shown as having equal status with SCO, Solaris and Windows NT – hooray! The initial set-up of the RAID system occurs in much the same way as usual, with an overview menu in the controller BIOS. Synchronisation for a RAID 5 configuration consisting of three 9.1GB hard disks took about five minutes in the IBM test system. During the installation of Red Hat 6.2 the controller hesitated a little (“SCSI input/output error”), but after we repeatedly ignored the error in the popup window, the controller finally gave in. Maybe the synchronisation was not quite perfect, but no more anomalies appeared later. As is the case with the Compaq Smart Array, the MegaRAID controller also requires a separate boot partition (because of the troublesome 1024-cylinder LILO limit). After the Linux installation the hard disks can be monitored and/or reconfigured using the “MegaRAID Manager”, a text-based user interface. Hot swapping, tested with the Compaq machine, was a success. However, the monitoring tool merely reported the status of the system (degraded, see Figure 9). It didn’t say how much longer the rebuild would take. The end can only be determined by observing the activity of the hard disk.

Fig. 8: AMI MegaRAID 1400
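Watching the disks for the end of a rebuild can itself be scripted: poll an activity counter and treat "no change between two polls" as the finish signal. A rough sketch (this is not an AMI tool; the counter source, for instance summing the disk_io fields of /proc/stat on 2.2/2.4 kernels, varies by kernel and is left as a parameter):

```shell
#!/bin/sh
# Sketch: wait until a disk-activity counter stops changing, as a crude
# sign that the rebuild has finished. The command producing the counter
# is passed in, since its location differs between kernel versions.
wait_idle() {
    reader_cmd=$1
    interval=${2:-60}                  # seconds between polls
    prev=$($reader_cmd)
    while :; do
        sleep "$interval"
        cur=$($reader_cmd)
        [ "$cur" = "$prev" ] && break  # counter unchanged: assume idle
        prev=$cur
    done
}

# Demonstration with a constant fake counter; on a real system
# reader_cmd would read the kernel's disk statistics.
wait_idle "echo 42" 0 && echo "counter settled"
```

A long polling interval matters here: a rebuild pauses briefly under load, so a short interval could declare victory too early.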

DPT SmartRAID Decade and Century Distributed Processing Technologies, who make the SmartRAID Decade and Century, treat Linux like any other operating system, and that’s been the case for a while. So together with the obligatory device drivers there are also Lesstif-based configuration and monitoring programs, as well as boot diskettes for Red Hat Linux, ready for downloading. Anyone who won’t tolerate a graphical user interface on his server can opt for the opulently equipped command line instruction dptutil. The BIOS is used for setting up the RAID stack. However, it is only possible to install one maximum-size logical device per RAID group. The initialisation of a level 5 configuration is run from the BIOS menu and takes more than two hours for the Decade using three 9.1GB disks. Although this is performed in the background, the BIOS led three engineers to believe that leaving the menu would abort the synchronisation. Let’s hope that after the takeover of DPT by Adaptec the promising development of Linux drivers and tools will continue.

IPC Vortex 6538 and 7538

Fig. 9: Limited powers of expression: the MegaRAID Manager


This company started development of a Linux driver on day one, which is why Vortex controllers have far and away the best support for the free operating system. Apart from the comprehensive monitoring/configuration tool ipcmon (formerly gdtmon), you also get ipcd, a daemon through which RAID server systems can be remotely monitored over a network (using TCP/IP). This means that monitoring for an entire fleet of servers



can be performed centrally with just one ipcmon process running on the administrator’s workstation. As with some products from other manufacturers, the controllers from Intelligent Computer Peripherals also have a comprehensive BIOS for the initial configuration of the RAID array. The flagship GDT7538RN was at first reluctant to do its duty in either of the two test systems. A tip from the support hotline – open jumper S4 – at least made it work in the Compaq Proliant. Its little brother, the GDT6538RS, was likewise unable to make friends with the special BIOS variants from IBM and Compaq. After some tweaking the initial set-up was extraordinarily fast and easy, but then Linux refused to boot.

Benchmarks

For the performance tests of the RAID controllers, bonnie was used (option -s 1000). The virtual file system layer with its caching algorithm has a considerable effect on the measured values; this can largely be prevented using the boot option mem=32M. The RAID 5 arrays, consisting of three fast SCA SCSI hard disks with a maximum throughput of 80 MB/sec, were prepared with the ext2 file system for the test file of approximately 1 GB. Since not all the controllers would run in both computers, the measurement results in Table 2 should be taken with a pinch of salt. The GDT6538, for example, refused to do its duty in either of the two high-end servers, so we had to switch to a third system from a lower performance class (“IBUS”).

The measured values for Adaptec and Symbios Logic relate to the onboard controllers of the two test machines. The test with single hard disk drives (“Single HDD”) shows that the hard disks of both systems are approximately equal in speed, which means the controllers’ results can be compared with each other despite having been obtained on different computers. Interestingly, SoftRAID 5 comes out astonishingly well on the Netfinity, but constantly loads the processor to full capacity. This could be counteracted by having more CPUs, although it would be cheaper to acquire a hardware RAID controller than a higher-performance Xeon processor. Missing values in the hdparm column, by the way, indicate that the corresponding controllers did not register as adapters for storage media in the system. The benchmark findings were created with the co-operation of Dipl.-Ing. Axel Dittrich and Dr.-Ing. Hans Pfefferl, systems administrators at Agfa-Gevaert, Munich, Germany.

[top] Fig. 10: DPT SmartRAID VI Decade
[above] Fig. 11: DPT SmartRAID V Century
Fig. 12: IPC Vortex GDT6538RS
Fig. 13: IPC Vortex GDT7538RS

Table 2: Test results
(bonnie figures in KB/sec; Weight = weighting used in the geometric average. hdparm readings in MB/sec, for those controllers that registered as storage adapters: 11.59, 11.59, 11.99, 14.61, 37.21, 26.67, 34.41, 21.77, 30.19, 22.38 and 31.68.)

Controller                    Char-out  Block-out  Rewrite  Char-in  Block-in  Seeks/sec  Geom. avg/1000
DPT SmartRAID IV Decade           3378       3431     2504    18892     26804      231.0       5.8
DPT SmartRAID V Century          10289      10202     5349    19924     27913      243.6       9.4
AMI MegaRAID 1400                 3713       3692     3604    18708     25399      246.0       6.2
IBM ServeRAID                    15258      15501     8441    19981     24479      206.8      10.7
Mylex AcceleRAID 250              8614       8623     5045    26777     36786      249.9      10.0
IPC Vortex GDT 7538RN            19656      20868     9459    25765     36244      265.4      14.3
IPC Vortex GDT 7538RN            22608      30414     8939    33310     35918      272.8      16.2
IPC Vortex GDT 6538RS            18344      18423     8845    33143     35877      270.3      14.1
Compaq Smart Array 221            6901       6881     4730    23752     31606      230.0       8.6
Compaq Smart Array 3200           6871       7001     6076    23050     36011      246.8       9.3
Compaq Smart Array 4200           7032       6923     5714    23278     36499      252.0       9.3
Adaptec 7896 (IBM onboard)       22916      24228    10967    24612     26104      166.4      13.0
Adaptec 7896 (IBM onboard)       16021      18734    11512    27511     33343      250.3      13.6
Symbios 53c876 (Compaq onboard)  20243      28207    11825    27004     28822      175.4      14.0
Symbios 53c876 (Compaq onboard)  15753      15894     4249     6048      6509      263.3       5.7
Weight                               1          2        1        3         1          1

RAID 5 Speed Index (see Fig. 15): SmartRAID IV Decade 5.9, SmartRAID V Century 9.5, MegaRAID 1400 6.2, ServeRAID 10.7, AcceleRAID 250 10.1, GDT 7538RN 16.2, GDT 6538RS 14.2, Smart Array 221 8.6, Smart Array 3200 9.3, Smart Array 4200 9.4, Adaptec 7896 13.7, Symbios 53c876 14.1

Fig. 15: Speed Index: the controllers in overview

Fig. 14: the ipcmon for the GDT controller series is teeming with functions – especially statistical ones

Info
IBM ServeRAID Linux device driver
Dandelion Digital’s Linux DAC960 page (Mylex disk array controllers under Linux)
Linux driver for Compaq Smart-2; Compaq Linux pages
MegaRAID drivers and utilities for Linux
DPT Hardware RAID HOWTO; driver and monitoring/configuration tools
IPC Vortex Linux drivers and monitoring/configuration tools
■
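The article does not spell out how the “normal geom. average” column is computed. One plausible reading – an assumption on our part, as is the helper name – is a weighted geometric mean of the individual benchmark figures, divided by 1000:

```shell
# Hypothetical helper (not from the article): weighted geometric mean,
# exp( sum(w_i * ln(x_i)) / sum(w_i) ), over alternating value/weight pairs.
wgeomean() {
  echo "$@" | awk '{
    for (i = 1; i < NF; i += 2) { s += $(i + 1) * log($i); w += $(i + 1) }
    printf "%.1f\n", exp(s / w)
  }'
}
wgeomean 2 1 8 1   # equal weights: sqrt(2*8) = 4.0
```

Feeding in a controller’s row with the weights from the table lands in the region of the published index for several rows, but the exact formula remains our guess.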

Conclusion Apart from one Mylex controller with its old BIOS, all the test products ran on the whole satisfactorily. Only the extremely poor performance of the DPT Decade (which, by the way, had an Ultra 160 interface) left a lot to be desired. The fact that quite a few of the controllers had problems with the test systems may have been due to incompatibilities, in particular with the BIOS of the servers. The GDT controllers in particular were better able to make friends with normal PC servers and demonstrated their full power. With the ServeRAID from IBM and the Compaq Smart Arrays one must rely on the bootable support CD to perform configuration. ■



Gigahertz processors compared

HIGH CLOCK RATES

Processors with clock speeds of 1000MHz were announced months ago but have only recently begun appearing in stores. We investigate the effects of the higher clock speeds under Linux. Bernhard Kuhn gets off the starting blocks.

New processors with high clock frequencies seldom work satisfactorily with older motherboards. Thus, in addition to the Coppermine processor (Slot 1, with 256 KByte on-chip cache), Intel loaned us a VC820 motherboard with an audio/modem riser slot. Also bundled was a high-grade 128 MByte RDRAM (Rambus DRAM) memory module from Kingston Technologies. A dummy is required in the unused second RIMM slot (Rambus Inline Memory Module) to ensure the whole thing works – unlike SDRAM, Rambus is extremely sensitive to such things (see Fig. 1). AMD loaned us an entire computer for test purposes. They too have not yet managed the proclaimed socket changeover: the processor in our test machine was an old K7-based chip with 512 KByte of external level 2 cache on a Slot A motherboard. AMD provided 384 MByte of main memory distributed over three slots, which we reduced to match the Intel system in order to achieve fair test conditions. The remaining components on the Irongate

chipset AMD motherboard did not have any significant effect on the test results. With floating-point-intensive benchmarks such as “Blenchmark2” (Figure 2, Blender version 1.80a) and the Povray skyvase test (Figure 3), the Athlon chalked up a clear lead: 15 to 20% better performance than Intel, which is fairly remarkable. With the “Developer Benchmark” the result is the other way round: the kernel compilation (see box and Figure 4) was completed by the Intel chip roughly 20% faster than by the AMD. A similar picture emerged with Ralph Hülsenbusch’s performance test from the “Shared Services Benchmark Association” (iX-SSBA), version 1.21E. However, in this combined server/application test with a runtime of about ten minutes (564 seconds), a lead of 18 seconds is a poor testimonial for RDRAM, which is supposed to be ideally suited to these tasks. With the stream benchmark, too, we found no evidence of the alleged two to three times higher transfer rate of Rambus compared to conventional SDRAM, and under nbench AMD’s supposed wonder weapon stayed ahead even on the memory index (6.0 versus 4.5 for Intel).

Duel with the gladiator

An Elsa Gladiac (32 MByte DDR RAM) was used for the graphics-dependent performance tests at 24-bit colour depth. The combined force of Nvidia’s GeForce 2 GTS chip and Intel’s Pentium III produced the highest 2D speed index we have recorded so far (see Figure 5). The Quake3 results in Figure 6 and the SPECviewperf suite cannot be compared directly, however. Unfortunately the Intel equipment was loaned to our lab for only one week, and our tests were performed with the then-current SGI/Nvidia driver 0.93. A week later, when we came to test the AMD chip, it proved not to like this driver, so an update was necessary – forcing us to use the better-performing version 0.94. As Figure 7 shows, this had scarcely any effect at high resolutions: in the Q3 demo even a Pentium III 550 MHz can easily keep pace with the gigahertz systems we had on test.

Fig. 1: To avoid trouble, Rambus requires all the RIMM slots to be filled, using a dummy if necessary





Fig. 2: The Athlon’s floating-point units are just in the lead.

Fig. 3: AMD’s Athlon is almost as quick as two 667MHz Alpha 21264 processors combined (22 seconds for pvmpov).

Fig. 4: For Linux developers Intel’s Pentium III is only slightly more suitable.

Fig. 5: An amazing combination: Pentium III and Elsa Gladiac (2D speed index at 16bpp and 24bpp).

Fig. 6: During the textures computation even a weaker processor can keep pace with the four pixel pipelines of the Nvidia flagship.

Fig. 7: Unfortunately these figures cannot be compared directly: in the AMD system a newer version of the graphics driver had to be used.

Incidentally, on computers with AMD processors the beta status of the SGI/Nvidia driver frequently made itself apparent in X server freezes or complete system crashes. We were unable to reproduce this behaviour with the gigahertz Pentium, suggesting that SGI and Nvidia possibly develop and test their Linux drivers on Intel systems only.

To sum up

As was to be expected, both gigahertz systems are faster than their predecessors. But they are also significantly more expensive. Whilst AMD has an enormous lead over its competitors, there would really need to be compelling reasons to pay such a price – more than three times higher – for a 20% increase in performance compared to a machine running at 800 MHz. On top of this you must consider the cost of RIMMs for Intel’s Rambus memory. And applications that fully exploit this type of memory are few and far between.

In the end, fast processors are like fast cars. Common sense by and large compels most of us to choose a medium-class product even though we know it will provide less pleasure at the wheel. ■

Sense and nonsense about the kernel compile benchmark

USENET newsgroups frequently refer to kernel compilation time as a measure of a computer’s performance. Whilst the result perhaps reflects the suitability of a system as a development machine, without the exact configuration being specified along with the compilation time, the times people come up with are meaningless as a basis for comparison. Depending on the kernel version, the number of features compiled in and the compiler options (and version) used, the time to complete the task can vary enormously. Consequently, for the “Developer Benchmark” we used a virgin kernel 2.2.0 with the default configuration. Following a make menuconfig; make dep, the compilation time is derived from the arithmetic mean of the elapsed execution times of three passes of the command time make bzImage.
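The “Developer Benchmark” procedure described in the box can be sketched as a small script. The timing loop is shown only as comments, since it needs a configured kernel tree; the averaging step is real, and the file name and sample values are illustrative:

```shell
# Collect three elapsed times for 'make bzImage', then average them.
# The build loop itself is sketched in comments (requires a kernel tree):
#   make menuconfig && make dep
#   for i in 1 2 3; do
#     make clean
#     /usr/bin/time -f '%e' make bzImage 2>> times.txt
#   done
mean3() {
  # arithmetic mean of the values in the given file, one per line
  awk '{ s += $1; n++ } END { printf "%.1f\n", s / n }' "$1"
}
printf '100.0\n102.0\n104.0\n' > /tmp/times.txt   # illustrative values
mean3 /tmp/times.txt
```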



SGI 230

The SGI 230 Visual Workstation

FASTER THAN A SPEEDING BULLET

The SGI 230 is a machine any Linux fan would be keen to get their hands on. Dr. Adrian Clark had used SGI systems on and off for about ten years before being converted to Linux, so he was keen to see the two come together in something affordable.

The main board runs at a frequency of 133 MHz.

Let’s look at the hardware first, working from the outside in. The box is basically a tower, but comes with a purple side panel that disguises the rectangular appearance a little. Depending on your point of view this either gives it a stylish appearance or is a pointless adornment: I rather like it myself. The case is easy to open and the internals are well laid out. The system has a 300W ATX power supply. The majority of the components in the review machine could easily be bought off the shelf. The motherboard is a VIA Apollo with the usual ports, including an Intel 10/100 Mbps Ethernet interface and a sound device (which seems to be unused by Linux). The heart of the machine is a 735MHz Pentium III with a huge fan and 256MBytes of ECC RAM. There are five PCI slots, two of which are occupied by an Ensoniq ES1371 sound card and a top-of-the-range two-channel Adaptec 3950 SCSI controller (Linux’s lspci command identifies it as a 3940). A single 9GB IBM DNES-series hard drive sits in one of the five drive bays. The choice of SCSI card seems a little strange: it can drive up to 30 SCSI devices, but surely no-one is going to connect that many to a 230! The 48x CD-ROM drive connects to the UDMA66 IDE controller on the motherboard. (There


appears also to be a version of the 230 with a UDMA hard disk.) A standard 3.5” floppy drive hides discreetly behind the front cover. One minor irritation is that it has no drive activity light, so if you dump tar files directly to floppies you need to wait a few seconds after syncing. SGI machines have always had a reputation for graphics performance and the 230 follows this tradition. The 4x AGP slot carries SGI’s VR3 64MByte 3D graphics card, a mid-range VPro card whose design originates from Nvidia. The great plus point of this card is that it is supposed to handle vertex transformation and lighting on the card, which should help when rendering geometrically complex scenes – this has certainly been a problem area with many of the other PC-class graphics solutions I’ve looked at in the past. On the odds-and-ends front, the review machine had a badged 19-inch monitor capable of 1600x1200 at 75Hz, with on-screen controls for everything under the sun. The keyboard was compact with a soft feel, but the mouse was dire. If I bought a 230 I’d have to replace it straight away with the only Microsoft product worth buying. The powered speakers sound fine when playing Quake, but are certainly not what I’d consider hi-fi.


The Linux Installation

Of course, the thing that makes the 230 interesting to us is the fact that it runs Linux! On booting, the machine displays a huge SGI logo and then self-tests. LILO’s boot prompt makes an appearance, after which you can choose the regular Linux kernel (2.2.13) or one that incorporates SGI’s patches; I only looked at the latter. The system comes with Red Hat 6.1 pre-installed, with extra goodies like the drivers for the graphics card. SGI also provide their additions on a CD-ROM in RPM format: the accompanying documentation states that they can be installed on top of SuSE as well as Red Hat. Apart from the kernel and X server, this seems to be a straight-from-the-box Red Hat installation. The review system boots X directly and presents a GDM-type login box. After logging in you get the GNOME desktop by default. This isn’t my preference (I still use fvwm v1!) but is easily changed. The X server is XFree86 4.0 with GLX extensions and defaults to 1280x1024 at 24 bits per pixel, but with only a 60Hz refresh rate. I noticed that there was a kernel module dealing with the graphics card, and that the kernel had the kni patch, which allows users to run Pentium III SSE code (the “floating-point MMX”). The SCSI driver doesn’t seem to have tagged command queueing enabled, which I have found improves disk throughput substantially, especially when the system is being thrashed.

Putting it through its paces

As usual, the first thing to do is install Quake III, turn all the settings up to maximum and give it a go. Rendering at 1280x1024, the frame rate averaged about 60fps, peaking at 80 and never dropping below 40! The rendering is also flawless, with none of the cracked edges or corrupted textures that you sometimes see. The GLX hacks that come with xscreensaver all work; in fact, most of them run too quickly! The same is true of the Mesa 3.1 demos and the GLUT demos that come with SuSE. For those of you who know it, the “gears” Mesa demo gives a staggering 1465 frames per second! (For comparison, my trusty G400 card in a 450MHz Pentium III gives just 312.) I have a VRML model of some buildings with over 44,000 quadrilaterals in it; the 230 was able to render this at between 4 and 11 frames per second, which is stunning.

I was interested in the stability of the OpenGL drivers under Linux, so I pushed them quite hard by running several applications at once, including some home-grown ones. Nothing unexpected happened, though killing an OpenGL program can sometimes make the system pause for several seconds before coming back to life: not a major problem, but something of an irritation.

The raw processing power of the machine is pretty much what you’d expect. Benchmarks aren’t tremendously meaningful in this reviewer’s opinion, but if you’re into them, nbench, the Linux version of BYTEmark, is one of the better ones. It reports an integer index of 11.4 and a floating-point index of 10.2, with ECC error correction turned on in the BIOS. In practical terms, encoding a five-minute audio CD track into an MP3 file took all of 45 seconds. The disk performance was better than expected, giving (curiously) 18.9 MB/s for writing and 16.8 MB/s for reading.

The manuals are up to SGI’s usual high standard. Every port has its pin-outs labelled and the BIOS settings are all explained. There are no Linux documents, though there’s so much information around in one form or another that this won’t concern anyone but a total Linux newbie. It would have been nice if the system had included a boxed version of Red Hat or SuSE for those people who won’t be connecting their machines to the Internet.
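Throughput figures of this kind can be approximated with plain dd over a large file. This is a rough sketch, not the reviewer’s actual method, and the file path and size are arbitrary:

```shell
# Write a 64 MB file, flush it, then read it back; dd prints a throughput
# summary on stderr. (Far smaller than a real disk test, so expect the
# page cache to flatter the read figure considerably.)
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 2>/dev/null
sync
dd if=/tmp/ddtest of=/dev/null bs=1M   # throughput summary on stderr
```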

[left] The graphics card delivers outstanding performance. [right] A perfect, but rather expensive, games machine

Info Price: £3424 SGI: (0118) 925 7500 ■

Drives can easily be changed thanks to the ingenious plug-in system

Verdict

In conclusion, the 230 is a well-engineered piece of hardware with a solid Linux installation and solid-looking drivers for the specialised hardware. The graphics performance is stunning, so if your applications can exploit it, or you’re a dedicated Quake player with deep pockets, it’s an excellent buy. If you don’t need the graphics performance there are probably better buys available elsewhere, though the build quality and expansion space remain attractive. I’m already saving up for one! ■



JFS Comparative Test

ACCOUNTING FOR THE HARD DISK

A journaling file system is essential if Linux is to break into the enterprise market. At the moment there are four highly promising approaches, all at various stages of development, from virtually non-existent to ready to go. Bernhard Kuhn delves deeper.

Linux is rock solid in terms of workstation and server functionality. But those of us who simply have to have the latest red-hot kernel patches and hardware drivers, or who are involved in kernel development, will be no stranger to system crashes. And of course, not even the best system can keep going through a power cut (unless it is backed by an expensive UPS). No matter what the circumstances that force Linux to its knees, after rebooting the rule is to run a hard disk check first of all. This inspects all the files and rarely completes inside ten minutes; depending on the size of the file system and the number of hard disks, the procedure may even take several hours. Worse still, in rare cases manual intervention may be necessary (fsck). Although it is unlikely, the data SNAFU is complete if the file system can no longer be repaired; at that point the only thing that will help is to restore a hopefully up-to-date backup. But this makes things sound worse than they really are: the Extended2 file system has provided sterling service since 1993 for countless Linux servers, whose rare unplanned downtimes put the potential problems into perspective.

Table 1: File systems with journaling at a glance
Name       B-Tree  64-bit clean  Development stage
ReiserFS   Yes     No            Ready for everyday use, with restrictions
ext3       No      No            Fully-functioning alpha test version
jfs (IBM)  Yes     Yes           Alpha test version
xfs (SGI)  Yes     Yes           Beta test version for the kernel 2.4 series

However, Linux beginners and pros alike are longing for a file system


which is completely ready to run after a nasty crash, without human assistance and within a few seconds. The magic word for the solution to this problem is journaling.

Journaling

The “ordinary” ext2 file system sets a flag on mounting; this flag is only cleared on an orderly unmount. So after a crash the operating system can tell whether the disk was cleanly unmounted or not – in other words, whether there is potentially inconsistent data on the disk. In order to correct this fault all files must be checked individually, which can be a very tedious procedure (recovery). A solution to the problem is to record in a journal which files are being processed at any moment. Then, after a power cut, only the

Fig. 1: unbalanced (below) vs. fully balanced tree (top)


files that were open at the time need be checked. Modern file systems use a transaction-oriented approach: as long as a procedure has not been completely executed, the old data from the previous transaction retains its validity. This is especially important if, for example, a write process is interrupted unexpectedly.

Balanced trees

Apart from brief recovery times, modern file systems are characterised by faster access. This is achieved by using so-called B-trees instead of the usual linear arrangement of data blocks. In the ext2 file system, for example, directory entries are kept in a linked list (see Fig. 1). If a directory has, say, 1,000 entries, then on average some 500


search steps are necessary to find a file, while in an ideal balanced tree (binary tree) just ten steps (ld 1000) bring the result to light (compare Fig. 1, with four entries). The improvement in performance is, however, obtained at the expense of considerably more complex (and thus more error-prone) program code. In particular, after each new entry the tree has to be “re-balanced” so that all paths from the root to the most distant leaves remain roughly the same length. Seen like this, linked lists are completely degenerate balanced trees.
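The step counts above are easy to check (ld is the base-2 logarithm):

```shell
# Average lookups: linear list of 1,000 entries vs. ideal binary tree.
awk 'BEGIN { printf "linear: %.0f, tree: %.0f\n", 1000 / 2, log(1000) / log(2) }'
```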

Practice

So much for dull theory. The complexity of B-tree and journaling algorithms has so far made their conversion into Linux reality difficult. Apart from the ready-made, free, open-source ReiserFS, IBM and SGI are now rushing to port their tried-and-tested, robust implementations JFS and XFS to Linux. But for anyone who was already satisfied with the ext2 file system and is only interested in short recovery times, a closer look at ext3 will be worthwhile.

Recipe 1: ext3fs retrofitting

Fitting an existing ext2 file system with journaling capabilities is, thanks to the backwards-compatible ext3 file system, almost child’s play for an advanced Linux user. Linux beginners only have to overcome the hurdle of kernel compilation and installation. Obviously it is essential to back up all important files before carrying out this step, which does have its risks.

1. Firstly, you will need an unmodified kernel and the ext3 patch:

cd /tmp
wget .13.tar.gz
wget 0.2c.tar.gz

There is already an ext3-0.0.2f version, but it only applies to a pre-patched Red Hat kernel (2.2.16-3).

2. Now the kernel has to be unpacked, patched, configured and installed. Don’t forget: during kernel configuration, in the section File systems, the option Second extended fs development code must be activated for ext3. After installing the kernel you should first ensure, by rebooting, that the system still starts up as usual.

cd /usr/src
rm linux # delete old link
tar -xzf /tmp/linux-2.2.13.tar.gz
tar -xzf /tmp/ext3-0.0.2c.tar.gz
cd linux
patch -p1 < ../ext3-0.0.2c/linux-2.2.13-ext3.diff
make menuconfig
make clean && make dep && make bzImage
make modules && make modules_install
# copy the kernel to /boot and install it via LILO

3. Now all non-root partitions can be converted to the ext3 file system. To do this the user must manually install and initialise a journal file on the partition. Depending on the activity, the journal should have a capacity of about 10 to 30 MB. For the initialisation, the inode number which represents the journal on the partition is needed; this number can be found using the command ls with the option -i. In the following example /usr is a correctly mounted ext2-formatted partition (/dev/hda4).

# in /etc/fstab, replace the file system identifier
# “ext2” with “ext3” in the /usr entry
vi /etc/fstab
# prepare to unmount /usr (otherwise “busy”)
init 1
# install the journal (30MB)
dd if=/dev/zero of=/usr/journal.dat bs=1k count=30000
# determine the inode number (here e.g. 666)
ls -i /usr/journal.dat
666 /usr/journal.dat
# mount /usr as ext3 and initialise the journal with
# the inode number determined above
umount /usr
mount -t ext3 /dev/hda4 /usr -o journal=666

So far, so good. Unfortunately the above method cannot be used on the root partition, since this cannot be unmounted during operation. This chicken-and-egg problem can be solved by performing the journal initialisation via a kernel boot option. So, to do everything in sequence:

4. As in the above example, the computer has to be told in /etc/fstab that at future system starts the root file system will be an ext3 file system (replace ext2 in the “/” entry with ext3).

5. The journal is installed by hand on the root partition (as above), and the inode number of the journal (in this case 7777) must be passed as a kernel parameter:

dd if=/dev/zero of=/journal.dat bs=1k count=30000
ls -i /journal.dat
7777 /journal.dat
reboot

The computer now starts up again. When the LILO prompt appears, a couple of additional kernel options, including the inode number of the journal, must be added for the initialisation:

LILO: linux ext3 rw rootflags=journal=7777

The root partition will now be available after a hard reset within a few seconds of recovery time (or at least, it should be). The whole procedure can also be reversed by changing ext3 in /etc/fstab back to ext2.

Fig. 2: The structure of the ReiserFS (simplified) in UML notation
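Recipe 1 reads the journal’s inode number off the ls -i output by eye; it can also be captured in a shell variable and handed straight to mount. A small sketch – the file path and device names are illustrative only:

```shell
# Create a (tiny, illustrative) journal file and capture its inode number.
journal=/tmp/journal.dat
dd if=/dev/zero of="$journal" bs=1k count=16 2>/dev/null
inode=$(ls -i "$journal" | awk '{ print $1 }')
echo "journal inode: $inode"
# then, as in step 3 of the recipe (device and mount point are examples):
# mount -t ext3 /dev/hda4 /usr -o journal=$inode
```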


Info
[1] Ext3 download
[2] ReiserFS homepage
[3] XFS homepage
[4] JFS homepage
■

The ext3 file system is merely an expansion of the well-known ext2 file system with journaling functionality, and has no performance-boosting balanced trees. This means that existing Linux installations on an ext2 base can continue to be used immediately, without reinstallation or time- and space-wasting copying, since ext3 builds on the existing structures [1]. On top of this, for advanced Linux users, installation and getting started are not especially complicated (see Recipe 1). However, according to its chief developer Stephen Tweedie, ext3fs is only in the alpha test phase and a long way from being suitable for everyday use. Nevertheless, a lot of positive feedback is being gathered in newsgroups and other Internet forums, and a short test in our hardware lab did not find any weaknesses. At the same time you must not forget that what counts as an alpha test version in Linux would be regarded by the marketing departments of many other operating systems as the equivalent of version 1.0.

ReiserFS

What began as a private study by the file system specialist Hans Reiser has developed into a powerful file system that is suitable for everyday use [2]. Tests and experiments are not yet complete, however, and research into possible improvements continues – now at the behest of SuSE GmbH. The ReiserFS arranges files and directory entries in balanced trees. Small files or remnants of files


(tail ends), directory entries and references to normal 4K file blocks (unformatted nodes) are all accommodated in 4K blocks (formatted nodes) in order to make the best use of the available disk space (cf. Figure 2). A beneficial side effect of this arrangement is that more data fits in the buffer cache, so fewer disk accesses are necessary. ReiserFS also takes care at all times to keep data close to its references and directory entries, so that large movements of the read/write heads are avoided. All these refinements have meant that the source code has grown to five times the size of the ext2 file system’s. Nevertheless (or perhaps because of this) some restrictions currently still apply to ReiserFS: only 4K blocks are allowed, the use of SoftRAID is not possible, and hardware platforms other than x86 are unsupported. Unfortunately, getting started with ReiserFS is considerably more complicated than with ext3 (see Recipe 2). As an alternative to the time-consuming manual installation you can install Mandrake Linux 7.1 or SuSE Linux 6.4: both distributions offer ReiserFS as an alternative file system even at the level of the graphical installer. After intensive tests by SuSE, some kernel developers consider that ReiserFS is still not ready for mission-critical use. In day-to-day work, however, this file system has already proven itself for more than six months on the workstations of the author of this article. A daily backup of all important data to an NFS server (with the tried and trusted ext2fs and a tape drive) is nevertheless vital in case of a total crash.

XFS

More than a year ago, SGI announced that their “jewel in the crown” would be made available for Linux under GPL conditions. Unlike SGI’s other numerous and successful open source projects, XFS got off to a sluggish start – among other reasons because, for a while, it just wasn’t “open”: SGI’s programmers were busy removing third-party intellectual property from the source code and replacing it with their own re-implementations. First impressions of the alpha test version suggest that these radical measures did not harm the robustness of the code. Currently XFS for Linux is in the beta test stage and, according to SGI, a production-stable version for the kernel 2.4 series will be available soon [3].

JFS

IBM’s Journaling File System for Linux was announced, surprisingly, at this year’s LinuxWorld Expo in New York. The currently available version (0.0.9), however, is still at a very early stage of development. The robust, tried and tested source code is available as a drop-in replacement for the



Recipe 2: ReiserFS conversion

Anyone wanting to convert their computer to ReiserFS has, at present, got their work cut out. Just as with the ext3 retrofit, this procedure is not without hazards. However, since the existing system has to be copied across in the course of the conversion, there is no need for a backup – provided no errors are made during repartitioning and a suitable boot diskette is available in case LILO has to be reconfigured. As part of the preparation a free partition is required, big enough to accommodate the existing Linux installation (the system can, of course, still consist of several partitions). In addition you will need an approx. 30 MB /boot partition (with ext2 file system), since LILO will not work with a kernel on a ReiserFS. /boot is mounted read-only in normal operation, so that after an abrupt interruption there is no need for fsck. But now, step by step:

1. First the kernel sources and the patch for the journaling ReiserFS are needed. Warning: there is also a ReiserFS without journaling!

cd /tmp
wget .16.tar.gz
wget .5.24-patch.gz

2. Unpack, patch, configure and install the kernel (warning: don’t forget the option Filesystems/ReiserFS during configuration):

cd /usr/src
rm linux # delete old link
tar -xzf /tmp/linux-2.2.16.tar.gz
cd linux
gzip -cd /tmp/linux-2.2.16-reiserfs-3.5.24-patch.gz | patch -p1
make menuconfig
make clean && make dep && make bzImage
make modules && make modules_install
# copy the kernel to /boot and install it via LILO

3. After rebooting, the tools (especially mkreiserfs) can now be prepared:

cd /usr/src/linux/fs/reiserfs/utils
make
cp bin/reiserfs /sbin

4. Now the system can be copied across. In the following example /dev/hda2 is the present root partition (including /boot), /dev/hda6 is the future (journalled) root partition and /dev/hda5 the future /boot partition (ext2, read-only). The virgin journaling ReiserFS requires, after formatting, as much as 30 MB for the journal.

# set system to “back-up” mode
init 1
# back up the root partition
mkdir /tmp/newroot
mkreiserfs /dev/hda6
mount /dev/hda6 /tmp/newroot
(cd / && tar cplf - . --exclude boot) | (cd /tmp/newroot && tar xpf -)
# copy over /boot
mkdir /tmp/newboot
mke2fs /dev/hda5
mount /dev/hda5 /tmp/newboot
(cd /boot && tar cpf - .) | (cd /tmp/newboot && tar xpf -)

5. Adapt fstab. Instead of ext2 for root, reiserfs must be substituted. Also, the root partition has now moved (from hda2 to hda6). And don’t forget the entry for the new /boot partition. So, instead of the old /etc/fstab entry for the above example

/dev/hda2 / ext2 defaults 1 1

the relevant part of the new /tmp/newroot/etc/fstab must look something like this:

/dev/hda6 / reiserfs defaults 1 1
/dev/hda5 /boot ext2 ro 0 0

6. The best way to check whether this comprehensive move has worked, risk-free, is with a boot diskette. This means the delicate Master Boot Record will be unaffected for now:

# create a boot diskette
dd if=/usr/src/linux/arch/i386/boot/bzImage of=/dev/fd0
# define the new root partition
rdev /dev/fd0 /dev/hda6
sync && reboot

Once the computer (hopefully) has booted up into the copied system, all that remains is to modify /etc/lilo.conf for the new environment. Before calling up LILO, however, the /boot partition has to be mounted writeable, since otherwise “lilo” will

4. Setting up the new file systems and copying across data: in the following example /dev/hda2 is the current root partition (inc.

mount -o remount,rw /boot

kernel source tree. Unfortunately the roughly 1.3 Megabyte tgz package [4] contains only sparse documentation. Nevertheless a glance at the source code reveals that the JFS also makes intensive use of balanced trees and appears to be 64bit-clean.
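A slip while adapting fstab in step 5 is the easiest way to end up with an unbootable system, so before rebooting it is worth checking mechanically that the copied fstab declares the intended type for the root filesystem. A minimal sketch follows; it is shown against a sample file so the commands can be tried safely, and on the real system you would point awk at /tmp/newroot/etc/fstab (the mount point used in the example above):

```shell
# Sample of what the copied fstab should contain after the conversion
cat > /tmp/fstab.sample <<'EOF'
/dev/hda6 /     reiserfs defaults 1 1
/dev/hda5 /boot ext2     ro       0 0
EOF
# Field 2 of fstab is the mount point, field 3 the filesystem type
roottype=$(awk '$2 == "/" { print $3 }' /tmp/fstab.sample)
echo "root filesystem type: $roottype"   # → root filesystem type: reiserfs
```

If this prints anything other than reiserfs, fix the fstab before shutting down - it costs seconds now and saves a rescue-diskette session later.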

Conclusion Four highly promising approaches to journalling raise great hopes that Linux will shortly be ascending into higher spheres. This feature is important not only for enterprise servers, but also for the embedded Linux market, which is growing like wildfire. (In this application it is quite common for computers to be switched off without shutting down.) With XFS and JFS, two projects which have arisen out of commercial products have entered the race. Their existing and robust code is currently being brought up to scratch for Linux by the developers. But the easily-installed ext3 and in particular the ReiserFS are already there. The latter can even be chosen as an alternative to ext2 within the graphical installers of the latest SuSE and Mandrake distributions (SuSE encourages its customers to do so). Although there are rumours that ReiserFS isn’t production stable, the author has spent six months of daily work on ReiserFS-enhanced workstations – without any data loss! ■ 10 · 2000 LINUX MAGAZINE 33



Linux Mandrake 7.1 reviewed


In little more than a year, Linux Mandrake has gone from being an easy-setup version of Linux intended mainly for newbies to the distribution of choice for many gurus. This isn’t surprising: Linux Mandrake is based on Red Hat’s familiar distribution, but MandrakeSoft has made significant improvements with each new version.

[below] Mandrake’s graphical installer makes setup easy even for complete beginners. [below right] The KDE desktop comes preconfigured with useful icons.

This is a distribution designed with modern hardware in mind. The precompiled kernels and most of the applications are Pentium-optimized, giving better performance than you’ll get from Red Hat, which is designed to run on everything from a 386 up. The kernels (version 2.2.15 in this release) also have USB support built in. Mandrake has one of the best graphical installers there is, ensuring that almost everyone will get Linux up and running quickly. The result is a distribution that should please all but those who still believe that installing Linux and locating software to run under it isn’t supposed to be easy. Mandrake 7.1 comes in two distinct retail packages (plus a GPL version available for download or from places like The Linux Emporium). Linux Mandrake 7.1 Complete is the cheaper version and contains three CDs. There’s also the seven-disc Linux Mandrake 7.1 Deluxe. You pays your money and you takes your choice, which isn’t as simple as you might think. In both the Complete and the Deluxe versions the first two discs contain the installation files and source code, while a third Applications CD contains non-open-source applications. These include Acrobat Reader 4.0.4, StarOffice 5.2, Borland’s InterBase relational database, CompuPic (a nice graphics file manager that’s free for non-commercial use), an evaluation copy of AC3D (a 3D object modeller), RealPlayer 7 Basic, the Macromedia Flash plug-in for Netscape, an MPEG TV Player (shareware), a trial


version of the Open Sound System (with a 3-hour evaluation period!) and an evaluation copy of IglooFTP Pro, a graphical FTP client based on the free gFTP, which times out after one day. In the Complete version the Applications disc also includes PowerQuest’s PartitionMagic 5.0 plus Adobe Acrobat editions of five books: Special Edition - Using Linux, Teach Yourself Linux in 24 Hours, Teach Yourself KDE in 24 Hours, Teach Yourself Gimp in 24 Hours and Red Hat Linux 6 Unleashed. The more expensive Mandrake 7.1 Deluxe version doesn’t contain these items; however, there’s an extra applications CD-ROM containing more than 20 other packages including the Sun Java JDK 2.2, evaluation copies of VMware, Executor (a MacOS emulator), the Citrix ICA Client for Linux, Arkeia Network Backup and VariCAD. You also get AVP AntiVirus - a Linux program that checks files for Windows viruses - and the IBM ViaVoice SDK. The Deluxe version of the distribution includes XFree86 4.0 among other goodies, and its installation set spills over onto a third CD containing both binaries and their source code. On top of this there are two Contributors CDs containing over 500 extra RPMs of applications, documentation and source code. In fact, the Deluxe package is more complete than the Complete version! MandrakeSoft claims that the Deluxe version as a whole contains over 1,800 different applications, making it probably the most comprehensive collection of Linux software you can buy.


Existing Mandrake users will want to upgrade, as the changes since version 7.0 are considerable. Video cards based on the i810 chipset are now supported, wheel mice now work in many more applications, power management has been improved and there is now support for USB modems, printers and Zip drives as well as UDMA 66 hard drives. A version of GNOME is included which works with the Sawmill window manager, while an updated version of Qt adds support for Chinese characters to KDE. XFree86 4.0 has also been included as an option in the Deluxe version for those who like to be at the leading edge (and whose graphics cards are supported by it). On a more mundane level, packages have been reorganized into more logical (or at least smaller) groups, so that instead of searching through a few big menus looking for an application you must now hunt through a lot of smaller ones. Menus are updated automatically in all graphical environments when packages are added or removed, as long as you use Mandrake’s own package installer, RPMDrake. Mandrake 7.1 may have a lot of appeal for experienced Linux users but it still has many features that continue to make it the best choice for first-timers. The installation CD auto-runs under Windows to give a choice of setup options including Lnx4Win. This is a setup that installs into a Windows partition so that no repartitioning of the drive is needed, and which can be uninstalled as easily as removing a Windows application. If Linux co-resides with Windows, setup installs a TrueType font server that can pick up the user’s fonts straight from the Windows fonts directory. This helps smooth the transition path by ensuring the new user’s documents look the same under Linux as they do under Windows.

Painless Thanks to Mandrake’s excellent installer, first-time users are likely to find their introduction to Linux a painless experience. There’s a fully automated install option that requires the user to make a minimum of choices. Gurus can choose the Expert option, while the rest of us will select Customized, which gives you control over the disk partitioning if you want it and offers a choice of setups for office use, development or use as a server. These choices affect the packages that are installed on the system by default, but they are a bit too broad for convenience. Inevitably, whatever you select, you end up spending time removing packages you don’t want and adding those you do want but which didn’t get installed automatically. Package selection apart, the Mandrake installer does a first-class job of creating a fully working system. All the system’s users, not just root, can be set up during the installation. Network configuration is thorough: there’s even an option to configure dialup Internet access if your computer has a modem. Printer configuration is comprehensive too, even handling the setup of remote Unix, NetWare or SMB printers and concluding by printing a test page to


kruiser, a file manager that will seem familiar to Windows users, is included.

verify that the printer works. Mandrake offers you a choice of GNOME or KDE graphical environments and exhibits no particular preference, although the user guide includes a chapter on using KDE but none on GNOME. Mandrake is one of the few distributions to include kruiser, a file manager for KDE similar to GNOME’s Midnight Commander or the Windows 95 Explorer. The latest version, 0.4, boasts features like a graphics file preview and the ability to bookmark folders and add external drives and FTP sites to the directory tree. These enhancements make it one of the most versatile tools of its type, though we found some of the new features to be a little fragile. A welcome improvement in this version of Mandrake is the use of grub as the default boot loader. Grub allows users to pick a boot option from a colourful menu instead of the cryptic LILO prompt; it is also unaffected by the 1,024-cylinder limit of LILO, a stumbling block for many new users. However, despite its use of three methods to attempt to determine the available RAM, you may still need to edit the kernel parameters to get Linux using all of your system’s memory. Grub is configurable using the DrakBoot graphical configuration tool but, confusingly, the klilo graphical boot configuration tool is also present, which could result in the inadvertent overwriting of grub by LILO. One reason for paying good money for a packaged Linux distribution is the manuals that come with it. Mandrake includes two manuals: an Installation and User Guide and a so-called Reference manual. They are quite useful, falling somewhere between those of Corel (well presented but too basic to be useful) and SuSE (full of useful information but a bit daunting for a beginner). Like most Linux distributions, the support you can expect from Mandrake only covers installation. So which version should you buy? Enthusiasts should certainly consider buying the Deluxe version.
Beginners would be better off buying the Complete version for the additional books supplied as Acrobat files on the CD. But neither choice is likely to cause disappointment. Both are easy to install and both contain plenty of useful software. All things considered, it’s fair to say that Linux Mandrake 7.1 is one of the top Linux distributions available. ■



Linux running on the IBM S/390

BIG BLUE PENGUIN There can be few Linux developments that have excited more attention recently than the Linux/390 port. Ulrich Wolf shows us that what is especially interesting about it is not just the power of this exotic architecture, but also the number of new application options which the mainframe opens up for Linux.



Far from our humdrum IT world, far from the overhyped markets, the poorly written software that gets debugged by the paying customer and the “broken by design” hardware architectures, lies the kingdom of the mainframe. There, almost everything is different from what we are used to. So different that, for example, hard disks are not called hard disks but DASDs, pronounced “Daz-dee”, which stands for Direct Access Storage Device. There are computers living in this kingdom which are only ever run up once in their lives and whose address space can be occupied by several different operating systems. If, in the manner of the king in the fairytale, you ask who owns all the nice, stable hardware and software, you almost always get the same answer: it’s IBM.



The Mainframe Market The S/390 architecture from IBM and its predecessors S/370 and S/360 plays, in the field of mainframes, the same role of a standard as the architecture of “IBM-compatible” PCs at the other end of the scale. Except that in choosing the operating system for its “big iron”, IBM has not let itself be taken for a ride by a second-class garage firm, but has retained control over the system software. Not least for this reason, the mainframe division is regarded by many in the know as a cash cow for the company. But even if IBM dominates this market segment, it is not an autocrat. When it comes to S/390-compatible machines, Hitachi, HDS, Amdahl and others are active worldwide, while those with regional importance include Olivetti, Comparex and PQ Data. According to an analysis by the GartnerGroup in 1999, S/390-compatibles account for some 85% of the mainframe market. In Europe, Siemens with its BS/2000-based business server series is also important, even if its significance has declined somewhat in recent times. Apart from its extremely high reliability, a mainframe architecture has two main advantages: I/O performance and the high number-crunching power of the processor. The former is achieved through the design of the architecture: each device comes with its own high-performance controller which completely takes over the administration of physical addressing. For this reason, the S/390 could also be seen as an asymmetrical parallel computer.

Rise or fall of the big iron? The decline of the mainframe world has often been heralded, for various reasons, yet it keeps being postponed. In recent years it has become apparent that highly available Unix servers represent only a limited threat to mainframes. On the whole it is now assumed that the mainframe sector will return to moderate growth in the medium term. In particular, new tasks are being created for the “old iron” in the domain of e-business. This is why IBM is currently staking two principal claims for mainframes, especially by means of marketing activities. Firstly, everything which comes under the heading of “Management Information Systems” (MIS). Many companies have been using S/3x0 systems for decades, hoarding their entire mission-critical data and processes on them throughout a period during which the world of Microsoft, but to some extent also that of Unix, has been stumbling over incompatibility with its neighbours. So what could be a better idea than letting loose MIS tools on these gigantic, consistent databases? And why should a Sun server, or perhaps something Intel-based with Linux or NT, be necessary for such applications?

Management information is, to put it bluntly in a single phrase, an attempt to distil from the totality of a company’s business data information that is valuable to its management. The process of data creation and the extraction of information, and ultimately knowledge, is described at a very high level of abstraction by terms such as Data Warehousing, Data Mining or Online Analytical Processing. These technologies are currently being implemented using databases and middleware on the S/390 architecture. The second application area which IBM is hothousing for mainframes has now, unlike MIS, itself become a byword known even by the man in the street: “e-business”. Much of the data hoarded on the “big box” is, of course, also highly suitable for use as the basis of the “e-commercialisation” of the company.

Inside the S/390 Enterprise Server. The IBM flagship is available in 24 versions.



A few terms from the S/390 environment
Channel – Processor-controlled unit that permits the transfer of data between main memory and peripherals.
CMS (Conversational Monitor System) – The native operating system for virtual machines under VM.
CTC
DASD (Direct Access Storage Device) – In the broadest sense, any mass storage, but more precisely hard disks including the controller unit.
EBCDIC (Extended Binary-Coded Decimal Interchange Code) – A set of symbols similar to ASCII, but coded differently.
ESCON (Enterprise Systems Connection) – A sort of IBM in-house network standard. There are special ESCON channels, processors etc.
IPL (Initial Program Load) – Synonym for the boot process under Unix/Linux; also used for loading a configuration file into main memory in order to restore a working environment. Often used as a verb: “to IPL”.
LPAR (Logical Partition) – Logical partition of the complete hardware. Parts of all resources - CPU, RAM, I/O - are assigned to an LPAR, after which it is a fully autonomous computer inside the mainframe. Once installed, LPARs run for a long time.
MVS (Multiple Virtual Storage) – Operating system for the S/390 and predecessor of OS/390. Also used as a name for the MVS part of OS/390 or, not quite correctly, for OS/390 as a whole.
Open Edition – The Unix interface of OS/390.
OS/390 – Current standard operating system for S/390 mainframes.
SNA – IBM-specific network architecture with a layer structure. Defines its own logic structures, protocols and formats.
VM (Virtual Machine) – The virtual CPU, virtual memory and virtual I/O channels available to the user of a guest operating system under VM – thus, the virtual hardware.
VM/ESA (Virtual Machine/Enterprise Systems Architecture) – The software which makes it possible to create virtual machines; the “enterprise system”.
Sysplex (System Complex) – A cluster of MVS or OS/390 operating systems on one or more real machines.
Minidisk – Under VM, a DASD or a logical part of a DASD with its own virtual device number and virtual cylinders.

But the Internet and the mainframe world are two universes which have long been developing alongside each other. Internet technology is traditionally Unix technology, and a stalwart MVS developer has little interest in Unix. On the other hand, the Unix world has an untold number of established applications, protocols and tools available which are fine-tuned to each other, and which are also mostly free software. So it would be a shame not to be able to use them. In fact, the new OS/390 does “talk” to a Posix-compatible Unix, but many developers are not really happy with this mixture of two worlds. It is in this context that IBM’s initiative to make Linux available for the S/390 architecture must be seen. It will make it possible to run all the applications available for the “de facto standard” Unix on extremely high-availability hardware.

Nothing like home In principle there are three ways in which an operating system can run on an S/390. The first is directly on the hardware, in which case it takes control of the complete resources of the system.

More usually, however, the hardware available is partitioned. Note that this does not mean the same thing at all as partitioning a hard disk under Linux. On the S/390, all resources (such as CPUs, RAM, IO channels) can be assigned to different logical partitions (LPARs). This allocation may be static or dynamic, depending on the resource. This means, for example, that a CPU can belong to two partitions and can be made available as required to one or other of the partitions (floating CPU). A large S/390 machine from Generation 6 (G6) has 16 CPUs. Two of these are dedicated IO CPUs and are not directly available to the OSs. Another two CPUs are “reserves” in case one or more CPUs of the remaining 12 fail. The same technology which allows floating CPUs also makes the reserve CPUs available transparently for the operating systems (i.e. without the operating systems even noticing that a CPU has failed). Each LPAR represents a separate and complete system within the physical machine. The OS mounted on it has full control over the assigned resources. The S/390 architecture permits up to 15 such LPARs.

Guest system The third option for operating a “foreign” OS, is as a “guest system” within the VM (Virtual Machine) operating system. VM multiplies the resources of a complete system or an LPAR by a time-slicing procedure almost as many times as it likes. The individual duplicates of the hardware are called VM guests. By means of a log-in process very similar to that used in the world of Unix, a user can now obtain access to the system console of this virtual 390 system. Using CP (Control Program) the user can configure the hardware of this virtual system and boot up operating systems located on the disks or tapes accessible to it. Booting, in the 390 world, is known as IPL-ing. IPL stands in this case for “Initial Program Load”. In this way more than 40,000 different Linux kernels have been run in parallel on a ten-processor machine. Originally VM/ESA was conceived by IBM as an interactive multi-user operating system. VM takes over the multi-user section, providing each user with their own little 390 machine. Each interactive session then runs a small single-user operating system specially written for it, which is started in the VM guest. This system is known as CMS, which stands for “Conversational Monitor System”. The predominant operating system on the S/390 is OS/390, which has a long line of direct predecessors going back 35 years. The last of these predecessor versions, MVS, is to a large extent compatible with the current OS/390. Developers and administrators are still fond of using MVS as a synonym for OS/390. On the other hand MVS can also currently be regarded as a subset and/or lower layer of OS/390. This operating system is best suited to the special requirements of mainframe hardware.


Among other things, it also contains UNIX Services, which provide a Posix-compatible interface to MVS functions. MVS was originally only intended to execute batch jobs entered using punched cards. These punched cards were still in use well into the 1980s. Batch processing is still one of the primary tasks of computers in this size class, and the JCL (Job Control Language) developed for this is one of the most polished languages available for non-interactive applications. Besides this, VM is often run for interactive applications and for software development. VM is useful for software development because its simulation of S/390 hardware is so good that any operating system which runs on the S/390 architecture will also run there. This makes it possible to make available to every software developer his own S/390. For operating system development this is especially handy as the developer will not hurt anyone else if there is a system crash and it is not necessary to provide each developer with his own real 390 machine, which would probably be too expensive, even for IBM. One of the great advantages of VM in development (including, of course, the development of Linux for S/390) is the debugging options available under CP. It is possible to track accesses to memory areas and to step through the program running in the guest instruction by instruction. By doing this it is possible to watch how contents of registers and the real or virtual memory alter.
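As a rough, illustrative sketch of what such a CP debugging session looks like from the guest's virtual console (the command spellings below follow VM/ESA's CP; exact forms and abbreviations vary between VM levels, so consult HELP CP on your system):

```
#CP TRACE INSTRUCT       trace every instruction executed by the guest
#CP DISPLAY G            display the guest's general registers
#CP DISPLAY 20000.40     display 0x40 bytes of guest storage at address 0x20000
#CP BEGIN                resume execution of the guest
```

The #CP prefix escapes from the guest to the control program, which is why these commands work even while the guest operating system is running or hung.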


Enter Linux The idea of transferring Linux onto mainframes is not new, and nor was it born in the IBM labs. An earlier project going by the name of Bigfoot had the aim of supporting the earlier S/370 architecture too. Bigfoot was (or is) a normal free software project, whose initiator Linas Vepstas worked for IBM. Linas Vepstas is also famous for such things as GnuCash and is involved in various projects in the field of Linux enterprise computing. Most readers will, however, know him best as the author of the Linux Software RAID Howto. For various reasons, the S/370 project he initiated was at first put on ice, and rumours say that IBM did not exactly welcome the Bigfoot people. The background, from the subjective viewpoint of Linas Vepstas, can be read on his own pages. Had it come about, this project would have made it possible to run Linux on older hardware too. However, it is doubtful whether this would have been useful, since older hardware designed using bipolar technology is a power-guzzler par excellence and is therefore now scarcely used. The developers at IBM did, however, ensure that the IBM port uses instructions which are only available on relatively new machines (G2 onwards). These instructions allow the new compiler to work much more simply in some respects than that of Linas Vepstas, and in addition to generate faster code. This difference is, however, so fundamental that no part of Vepstas’ code could be included. The competing project, with the official name of “Linux for S/390”, was started at IBM Germany in Böblingen and was developed in secret until the kernel and binutils were finished. On 15th December 1999 the almost-complete port was demonstrated for the first time. The result can be seen: a Linux which runs on “bare iron”, in LPARs and under VM. The latter is probably the most useful method of operation, since with it one can start literally hundreds, even thousands, of kernels, which run in completely separate address spaces. But running it in an LPAR is also attractive because of the cost advantage: in August IBM offered licensing models for Linux with dramatically reduced costs, compared to OS/390, both for logical partitions and virtual machines.

One of the S/390’s processors. This multi-chip carrier is the most complex one produced in series anywhere in the world.

VM is the fastest The installation of Linux for S/390 under VM requires only a little knowledge of VM and a VM guest account. The account must have access to at least two minidisks of a specific size and two dedicated device addresses for the network connection. All this has to be provided by the VM administrator. Once these requirements have been fulfilled, the installation process proceeds in a manner familiar to anyone who has installed Linux. One interesting feature under VM is the so-called virtual reader. (Reader in this case stands for punched card reader.) This behaves in a similar way to a magnetic tape. When Linux is booted for the first time, the Linux kernel, the parameter line and the image for the RAM disk are copied into the virtual reader under VM. Then you give the instruction to IPL (initial program load = boot up) from the virtual reader. In this way the content of the reader is loaded into main memory and executed from a defined address. When Linux has been successfully run up using a file system in the RAM disk, the hard disks (minidisks or dedicated DASDs) can be mounted, formatted and provided with the necessary content (root file system). The system is then in a condition to allow booting without a RAM disk. After this there are two options for booting Linux for S/390 under VM. The first is to continue loading and IPL-ing the kernel and the parameter line in the virtual reader. The second is to issue the IPL instruction for a specified hard disk. In this case, a bootloader must be installed on the corresponding disk. By analogy with the LILO loader used to boot Linux on Intel hardware, the bootloader for Linux for S/390 is known as SILO. Naturally, in an architecture which deviates so far from the PC, the “home planet” of Linux, there are some differences of principle which are visible from the outside. Since the DASDs of the S/390 are a special kind of hard disk, they are not addressed via the device names for SCSI or IDE disks. The device names for DASDs are /dev/dasda, /dev/dasdb, etc. (In previous versions of Linux for S/390 the DASDs were still referred to as /dev/dd<letter>; this was changed on the advice of Alan Cox.) The problem that this naming scheme can only address 26 disks will presumably be resolved by the device file system in kernel 2.4, because up to 65536 devices can be connected to an S/390 machine. There are now several Linux distributions based on this port. One of the first is hosted on the server of Marist College, which co-operates closely with IBM. On the basis of this, the German company Thinking Objects has developed its own RPM-based distribution named “Think Blue”. Soon afterwards SuSE jumped on the bandwagon with their own mainframe distribution. Red Hat CEO Matt Szulik, however, told Linux Magazine that his company had no such plans.

Info
Official Linux/390 page:
S/390 hardware:
Linas Vepstas’ pages on Linux for mainframes:
Think Blue Linux, the distribution for Linux/390:
Hercules homepage:
Linux/S390 under Hercules:
Private homepage with a gigantic collection of links: http://os390-mvs.hyper
■

Hercules - a “giant emulator”

A little too big for the desktop, but just fine for big business

Hercules was around even before Linux/390 was born. It is an emulation of the S/390 instruction set, including the channel programs, under Linux 2.2.x. Together with Linux/390 it enables, on a home PC (with at least a Pentium processor), one of the wildest “emulation orgies” currently possible. Once Hercules is up and running (it is reported that it takes some time before the first prompt appears), it is possible to install Linux/390 on this emulated mainframe. So everyone whose appetite for the world of the mainframe has been whetted by this article can at least get a taste of it on the PC. Anyone who would rather have a “real” mainframe OS can also use the since-released OS/360. For anything else a licence is needed. VM, in principle, does not run under Hercules: the ultimate goal of running several Linux kernels in the virtual space of an emulated mainframe thus remains a dream. The emulator is released under a special licence, which the author refers to as the Hercules Public Licence. This allows “educational and hobby use” only, and prohibits among other things the distribution of modified versions or the use of parts of the code in other programs. But anyone who is simply curious about how an S/390 feels should be able to live with this. ■



Creating and Rendering 3D Models

LIGHTS! CAMERA! ACTION! Many people find creating their own three-dimensional worlds a fascinating idea. It’s no wonder, then, that there are numerous high-performance tools for modelling and rendering that run under Linux and require no professional experience. Manuel Lautenschlager sheds some light.

Anyone who talks about 3D modelling usually thinks of powerful workstations from SGI or clusters of a few hundred computers sharing the load. And Linux has seen a great deal of use running these kinds of system. But someone who wants to open up virtual worlds on their home PC can use the free operating system too, allowing at least a taster of virtual reality with no worries and without having to pay a penny.

3D programs perform two tasks: they have to create three-dimensional scenes and models, and they have to render these scenes (computing the ‘coloured-in’ three-dimensional image from the models). The two tasks are fundamentally different and also impose different demands on the hardware. This is why professionals use separate programs. A modeller program on a graphics workstation is used to construct scenes and a software


[left] Comparison of render quality: a cylinder, first processed with an integrated renderer (Maya Version 2.3) ... [right] ... and again with a dedicated program (Blue Moon Rendering Tools Version 2.5). The difference in quality is clear, especially on the edges.


package called a renderer, specially developed for the purpose, builds up an image or an animation from the descriptive data. Rendering is an enormously calculation-intensive process and takes a lot of time on low-powered machines. For demanding tasks, such as the work required for the film Titanic, entire clusters of machines were put together, forming so-called render farms. The virtual ship in Titanic came about with the aid of a specially written modelling program and was rendered on a cluster of 160 Alpha computers, 55 of them using Windows NT and 105 of them running Linux. Because artists usually like to get an idea, while still drawing, of roughly how the finished scene will look, all current modeller programs include a render engine. The results of these come nowhere near those of dedicated renderers but are adequate for home use.

Layers and Filters

[left] Here is a normal two-dimensional image... [right] ... and the depth data for it.

In the case of large projects, on the other hand, studios push dedicated program development even further. It’s fairly common to have each part of an area, such as the water or the landscape, calculated by specialised programs. They’re all then edited into a single image at the end. In order to be able to perform this task, data also has to be stored with an

42 LINUX MAGAZINE 10 · 2000

extra characteristic which isn’t normally contained in a normal, two-dimensional image – depth data in the form of the Z-buffer. This indicates how far the point shown is from the viewer. Each pixel of the image holds one of these values, which enables different scenes to be matched up on the 3D stage. Instead of depth data, “alpha layers” are often used, which can make an image transparent wherever you want. So, for example, a program which specialises in water areas and landscapes, such as arete Digital Nature, can render the ocean but mask the Titanic ship with an alpha layer so that it can be produced by a different program. A further advantage of such layers is that specific effects can also be masked with an alpha layer; a normal image-processing program like Gimp can then add the effect back in after rendering. This process is applied when computer-generated images are to be combined with real filmed scenes, since it allows the light conditions to be taken into account as well. The basic principles are the same in all modelling programs. The main differences are in user-friendliness and in the range of functions. Almost anyone can render their own movie and get up and running using free tools.
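As a rough illustration of how such compositing works – the colours and depth values below are invented for the example – here is a minimal Python sketch in which each pixel carries a Z value, and an alpha value blends a foreground over a background:

```python
# Minimal sketch of depth-based compositing: each "image" is a list of
# (colour, z) pixels; the pixel nearer the viewer (smaller z) wins.
def composite_z(layer_a, layer_b):
    """Merge two layers pixel by pixel using their Z-buffer values."""
    out = []
    for (col_a, z_a), (col_b, z_b) in zip(layer_a, layer_b):
        out.append(col_a if z_a <= z_b else col_b)
    return out

def blend_alpha(fg, bg, alpha):
    """Blend a foreground colour over a background using an alpha value
    between 0 (fully transparent) and 1 (fully opaque)."""
    return tuple(a * alpha + b * (1.0 - alpha) for a, b in zip(fg, bg))

# A two-pixel example: the ship (z=5) hides the ocean (z=9) in pixel 0,
# while in pixel 1 the ocean is nearer and wins.
ocean = [((0, 0, 255), 9.0), ((0, 0, 255), 2.0)]
ship = [((128, 128, 128), 5.0), ((128, 128, 128), 7.0)]
merged = composite_z(ship, ocean)
```

A dedicated compositor does the same comparison per pixel, only over millions of pixels and with sub-pixel coverage information.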

Blender
Blender is perhaps the most popular modelling program under Linux. It is a commercial program, but there is a free usable version with limited functionality, which can be turned into the full version on payment of a fee of £57. The program is fully scriptable with Python, controlling radiosity, environment mapping and other advanced functions. There are numerous plug-ins, some good tutorials and a lively user group, which recently met at the offices of NotANumber (the company behind the program) for a one-week conference. The program originally stems from the Commodore Amiga, believe it or not, but has since been ported to a number of platforms.



Interface The interface shows signs of its Amiga history. At first glance it looks a bit crowded and jumbled. There are no menus. Instead, you must press certain buttons to display tools and alternative work screens. It isn’t immediately apparent which buttons do what. In the full version the whole interface can be customised using a Python script. This effectively lets the program be restyled and altered with a few commands in order to adapt it to the current task. In allowing this the program is following a popular trend because many high-end programs now have a similar option for workflow optimisation. Since the interface can use OpenGL, it is even possible to have 3D elements in it. As mentioned, Blender doesn’t actually provide any standard menu structures. However, the interface can be made more interactive than in many more expensive programs. Anyone who can get the hang of adapting the workspace will be able to work surprisingly quickly with this program.

Use
The numeric keypad and the space bar are important controls. The viewer can use the number keys to change the view of the scene, such as “walking” or “flying” around it. When the space bar is pressed a menu, containing important functions such as creating objects and storing scenarios, pops up under the mouse cursor. The buttons in the control window then allow fine adjustments. This space bar menu is also found in Maya. By pressing [Tab] you can toggle back and forth between Edit and Move (or rather Transformation) mode. No matter what mode you are in, you can always move the object. The crucial difference is that in Move mode the centre point of the object is shifted along with it; this doesn’t happen in Edit mode.

A scene from the film “The Fifth Element”: the surface of the water was rendered with arete and the ship with Maya.

Processing
The modelling process, like the interface, can take some getting used to. Functions like “move”, “rotate” and “scale” can be activated not only using keystrokes but by drawing certain symbols in the viewing window. So drawing a rotated L switches into rotate mode and a pointed V into scaling mode. Mouse users might not think much of this, but it will delight users with a graphics tablet. However, mouse users aren’t left out in the cold – by pressing the mouse button together with [Ctrl] and/or [Shift], the view can be turned, moved and zoomed.

The interface can be customised in almost any way but unfortunately it’s not an intuitive procedure

Starting Blender properly
Anyone using a window manager with several virtual screens (not to be confused with the “multiple desktops” found, for example, in KDE) will come across a problem with Blender when switching to another screen: Blender simply crashes after a few minutes of working this way, because it is not allowed to draw directly on the visible screen. For this reason it is best to run Blender on an X server of its own – the whole thing can be automated as follows:

#!/bin/bash
set -x
MAXDSP=$(ls /tmp/.X*-lock | tail -1 | cut -c 8)
NEWDSP="$(( MAXDSP + 1 ))"
DISPLAY=":$NEWDSP"
X $DISPLAY &
while [ ! -f /tmp/.X"$NEWDSP"-lock ] ; do sleep 1 ; done
exec /usr/local/bin/blender

This script, however, only works correctly if the X server runs as root, so either the suid bit has to be set or an X wrapper mechanism must be installed.




Modelling and animating

Some user interfaces in Blender appear truly chaotic; these are the rendering options.

The main functions can once again be quickly called up using the space bar menu. In addition there are aids such as the 3D cursor, which defines the position at which new objects are created – this can prove highly useful. For any view it is possible to choose between five levels of detail at which the working view is rendered or displayed.

There are a large number of functions such as Nurbs modelling, “Meshsmooth” (a rounded-off and weighted quad-poly object) and Metaballs, which can create nicely rounded objects. The processing of Nurbs objects works smoothly, but the successful creation of the Nurbs curves necessary for this does depend somewhat on the circumstances. The important functions for moving images – animation paths, keyframes, inverse kinematics and object hierarchies – are fully integrated. The keys can be processed with the built-in graph editor, where all other characteristics of the objects can also be entered and edited as Bezier curves.
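The Bezier curves behind such animation channels can be sketched in a few lines of Python; the key values and tangent handles here are invented for illustration:

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1].
    p0 and p3 are the keyframe values, p1 and p2 the tangent handles
    that shape the ease-in and ease-out of the motion."""
    u = 1.0 - t
    return u**3 * p0 + 3 * u**2 * t * p1 + 3 * u * t**2 * p2 + t**3 * p3

# An ease-in/ease-out channel between key values 0 and 10,
# sampled at five frames.
samples = [cubic_bezier(0.0, 0.0, 10.0, 10.0, i / 4) for i in range(5)]
```

Dragging the handles (p1 and p2) in a graph editor changes how sharply the value accelerates between the two keys, without moving the keys themselves.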

Glossary

Modelling
Primitive: The most basic elements of a model. All other, more complex, objects are assembled from these.
Nurbs: A Nurbs object is not described by polygons but by Bezier curves. This makes the surface appear more rounded.
Trims: Methods by which holes can be cut in Nurbs surfaces.
Polygon: An object made out of triangles.
QuadFace/Quad-Poly: Two triangular polygons combined into a quadrangle. The resulting objects are required for clean subdivision surfaces.
Metaballs: Objects made up from balls that merge into each other. The properties of Metaballs can be weighted.
Pivot: The point of origin within the object matrix to which translation, rotation and scaling relate.
Subdivision Surfaces: A new type of object for modelling in which a rudimentary control object, such as a cube, controls the actual object. Lines and points of this control object can also be weighted and thereby refine the appearance of the target object.
Construction History: The working steps by which an object was created can be reprocessed at a later stage; alterations to an object then have an effect on objects built using it.

Materials
Materials: A generic term for render options.
Shader: Shaders define and calculate the properties of materials.
Volumetric materials: Procedurally created material, comparable with mist in terms of its properties. This enables the inside of an object (such as a cloud) to look realistic.
Weighted materials: Refers to the attraction or the effect of a point on an object.
Texture: A two-dimensional image which is imposed on a 3D body via mapping.
Mapping: Term that describes wrapping a texture around an object. There are various types, the most important being a normal texture, when an object is to be coloured in or painted.
Procedural textures: Textures produced with algorithms, such as shadings, wood and marble textures, and textures produced with a noise filter.
Filters: Texture filters, similar to those in an image-processing program; these are used to improve textures.

Animation
Skeleton: Basic frame on which objects can be fixed. If the skeleton moves, the attached objects also move accordingly.
Skinning: Geometry which is distorted with the aid of a basic skeleton.
Inverse kinematics: The end points of the skeleton can be freely moved and animated; the position of the joints is calculated by the program.
Forward kinematics: Each joint can be moved individually.
Path animation: A curve or line along which objects move.
Morph: Creates a smooth transition between objects; the control points of the objects are interpolated between various positions.
Motion capture: Capture of motion data using cameras and sensors placed over the clothing of actors. In this way their movements can be digitised and a natural-looking animation generated from them.
Keys: Key points which define the animation process. Keys can relate to all parameters.

Lights
Point Light: Point-shaped source of light.
Spotlight: Cone of light.
Area Light: A certain area defined as a light source, such as a neon tube. Enables the creation of soft ray-traced shadows.
Shadow map: The shadow is calculated from the position of the light source. Its precision depends on the size of the shadow map. This process never produces a 100% accurate result.
Raytrace Shadows: Each ray of light is traced back. The result is absolutely precise, but the resulting shadows appear very hard.
Caustic Light: Extended light computation: the light can be concentrated as with a magnifying glass.

Rendering
Diffuse to Diffuse light transfer: A computing method in which coloured areas reflect coloured light.
Specular to Diffuse light transfer: In general, this refers to reflected rays of light, such as the light reflected back from a mirror.
Radiosity: With this method of computing, each object is assigned an energy value which is a function of the energy radiated by the surrounding objects. This process, unlike raytracing, produces very soft light conditions.
Raytracing: All rays of light are traced back to their source. This makes exact reflections possible with reflective surfaces, and refractions in the case of transparent objects.
Oversampling: Refers to the number of render passes; the quality of the rendered image improves with each additional pass.
Render farm / Distributed Rendering: Many computers, networked with each other, undertake a single computing task; this shortens the processing time accordingly.
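The radiosity idea in the glossary can be illustrated with a toy example: two patches exchanging light energy until the values settle. The emission, reflectivity and form-factor numbers below are invented for the example:

```python
# Toy radiosity exchange between two patches facing each other.
# Each patch's brightness B = emission + reflectivity * F * B_other,
# iterated (Jacobi style) until the energy values converge.
def solve_radiosity(emission, reflectivity, form_factor, iterations=50):
    """Iterate the two-patch radiosity equations to a fixed point."""
    b = list(emission)
    for _ in range(iterations):
        b = [emission[i] + reflectivity[i] * form_factor * b[1 - i]
             for i in (0, 1)]
    return b

# Patch 0 is a light source, patch 1 only reflects; half the energy
# leaving one patch reaches the other (form factor 0.5).
radiosity = solve_radiosity(emission=[1.0, 0.0],
                            reflectivity=[0.2, 0.8],
                            form_factor=0.5)
```

Even the emitting patch ends up slightly brighter than its own emission, because light bounced back from the second patch is added in – exactly the soft, indirect look radiosity is known for.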


Materials, colours and lights
There are only two shaders directly built into the program: a standard shader and a halo shader. Others are available as plug-ins. However, it isn’t possible to nest the characteristics of entire materials; instead, you get an option for combining various textures. To apply a texture to an object there are several mapping types, including bump maps (which can to a certain extent “dent” the object in order to give it some structure). Other mapping options affect the transparency of an element (the “Opacity” button) or its luminosity (“Self Illum”). The program provides the most important procedural textures, such as wood, clouds, noise and colour shading, right from the start. There aren’t any volume maps at this stage (these can, for example, make clouds look more real from the inside). In order to refine the textures, filters can be applied to images loaded as bitmaps. So-called vertex colours can be used in Blender to paint directly onto the object with a brush. This works in a similar way to Artisan in Maya, but isn’t quite as intuitive to use. All textures, including the procedural ones, are also displayed in the modelling view. There are two types of light sources: point-shaped lights and spotlights. The standard options such as shadows and colours of lamps can be set separately. Effects like halos, rings and so on can be set in the material editor. The light conditions can also be previewed in the OpenGL view.

Rendering
Rendering is done very quickly. The renderer is capable of 16-times oversampling to improve quality. One interesting feature is that it is possible to render directly in the working view. Effects like particle systems are integrated into the Render menu. This means that smaller objects, such as particles of smoke, are assigned reciprocal physical interactions, with the result that they act as a unit. These options are not found in this form in other programs, since particles are usually allocated to the geometry, and the effects attributed to the particles are in any case amalgamated two-dimensionally in the renderer. The chief developer of Blender, Ton Roosendaal, has announced version 2 of the software and released a beta version (1.8) which is said to have all the features of the full version.
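How oversampling smooths edges can be sketched in Python: several jittered samples per pixel are averaged, so a pixel straddling a hard edge ends up an intermediate grey rather than jagged black or white. The edge function and sample counts here are invented for illustration:

```python
import random

def render_pixel(sample_fn, x, y, passes):
    """Average `passes` jittered sub-pixel samples of sample_fn.
    More passes give smoother anti-aliasing at the cost of render time."""
    random.seed(0)  # deterministic jitter, just for this example
    total = 0.0
    for _ in range(passes):
        sx = x + random.random()  # jitter inside the pixel
        sy = y + random.random()
        total += sample_fn(sx, sy)
    return total / passes

# A hard vertical edge: white (1.0) left of x=0.5, black (0.0) right of it.
edge = lambda sx, sy: 1.0 if sx < 0.5 else 0.0
value = render_pixel(edge, 0.0, 0.0, 16)  # 16x oversampling, as in Blender
```

The pixel sitting on the edge comes out somewhere between 0 and 1, which is exactly the grey ramp that makes rendered edges look clean.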

RenderMan, the modern classic
Like Pixar, the company behind it, RenderMan has an interesting history. It evolved from a collection of 3D tools which were used in movies such as “Toy Story” and “A Bug’s Life”. Since then, RenderMan has become a commercial product and costs £5,655. The program is nevertheless being briefly discussed here because the RenderMan file format (rib) is slowly turning into a quasi-standard. RenderMan is available as an RPM package for Red Hat Linux. A rib file contains all the scene data in an ASCII text file. This includes stored objects, references to shading lists, lights and scene parameters, if necessary for each individual frame of the animation. A light definition, for example, looks like this:

LightSource "pointlight" 1 "intensity" [9.5] \
  "lightcolor" [1 1 1] \
  "from" [-1.5198 7.76837 -3.877]

Shading lists – ASCII files which describe nested and programmable materials – must first be compiled. They can then be included in the rib file, like a programming library in a C program, in the following way:

Surface "Nice_new_surface"

There are graphical front-ends both for the creation of the rib file and for the shading list files. They can also be exported from many programs. However, exporting requires reprocessing, which is often time-consuming. Pixar uses, among others, the program MTOR to create objects. The advantages of RenderMan lie in its quality, flexibility and high processing rate. Another great advantage is subdivision surfaces, with which it is possible to create complicated organic objects without having to work at higher resolutions. The objects created are converted optimally by RenderMan into polygons. The main advantage of this method compared with Nurbs is that surfaces can

The Mops interface comes in the guise of Tcl/Tk

be refined and weighted better. This is shown to great effect in the eyes of the figures in “Toy Story II”, for example. In addition there is a shader language which makes it possible to create complex materials. Unfortunately RenderMan has no radiosity and no proper raytracing. Nor is there an area light or caustic light. To compensate for these drawbacks, the rather remarkable “Blue Moon Rendering Toolkit” (BMRT) was created.
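Because a rib file is plain ASCII, scene elements such as the light definition quoted above can be generated from any language. A small Python sketch (the helper function name is ours, not part of any toolkit):

```python
def rib_light(light_type, handle, intensity, color, position):
    """Format a RenderMan-style light definition as a rib text line,
    mirroring the LightSource example quoted in the article."""
    col = " ".join(str(c) for c in color)
    pos = " ".join(str(p) for p in position)
    return (f'LightSource "{light_type}" {handle} "intensity" [{intensity}] '
            f'"lightcolor" [{col}] "from" [{pos}]')

# Reproduce the point light from the article's example.
line = rib_light("pointlight", 1, 9.5, (1, 1, 1), (-1.5198, 7.76837, -3.877))
```

A complete exporter would emit such lines for every light, object and camera, frame by frame, which is why rib has become a convenient interchange format between modellers and renderers.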

An underwater panorama, produced and rendered in Blender.

Blue Moon Rendering Toolkit
BMRT costs considerably less than RenderMan – to be precise, it costs nothing at all. It produces better results but is slower. It also has additional capabilities such as radiosity, extended light source methods and many other effects. Apart from that, the two are compatible. It is even possible to combine the fast processing performance of RenderMan with the quality features of BMRT: RenderMan calculates the geometry whilst BMRT takes care of reflection, refraction and radiosity. BMRT is available as a binary for all common Linux distributions.

The simplest view - the bounding box defines only the scale of the object

Mops
Mops is a modeller for RenderMan-compatible renderers and was written in Tcl. The program works solely with Nurbs objects but offers an adequate number of functions. For example, you can draw a line or Bezier curve and then use the revolve function to produce a body of revolution. With the trim function, parts of a Nurbs object can be cut out and a skin object stretched over them. Mops is operated by a combination of mouse and keyboard, though it is designed primarily for keyboard use. All key combinations can easily be found in the comprehensive documentation. In one part of the window the program offers an overview of all the objects present in the scene and their characteristics, which correspond to those of the rib specification. In addition there is a console in the window providing access to the Tcl interface. This means that the program can to some extent be scripted, although it is a long way from being as flexible as Blender.

Wireframe view - here it is also possible to see that the object is made up of quad-polygons

This OpenGL view shows the object with a surface but without any other effects

In this view the light conditions can now also be seen – the spotlight over the aircraft produces realistic shadows.




Listing 1: interface.c

#include <Lightflow.h>

int main(int argc, char* argv[])
{
    LfLocalSceneProxy* scene = new LfLocalSceneProxy();
    LfArgList list;

    list.Reset();
    list << "position" << LfPoint(5.0, -5.0, 4.0);
    list << "color" << LfColor(1.0, 1.0, 1.0) * 3e2;
    scene->LightOn(scene->NewLight("point", list));

    list.Reset();
    list << "ka" << LfColor(0.0, 0.0, 0.5);
    list << "kc" << LfColor(1.0, 0.5, 0.5);
    list << "kd" << 0.5;
    list << "km" << 0.1;
    LfInt plastic = scene->NewMaterial("standard", list);

    scene->MaterialBegin(plastic);

    list.Reset();
    list << "radius" << 1.0;
    scene->AddObject(scene->NewObject("sphere", list));

    scene->MaterialEnd();

    list.Reset();
    list << "file" << "ball1.txt";
    LfInt saver = scene->NewImager("saver", list);

    scene->ImagerBegin(saver);

    list.Reset();
    list << "eye" << LfPoint(0.0, -4.0, 0.0);
    list << "aim" << LfPoint(0.0, 0.0, 0.0);
    LfInt camera = scene->NewCamera("pinhole", list);

    scene->ImagerEnd();

    scene->Render(camera, 400, 300);

    delete scene;
}

[left] When performing scientific visualisation, realistic depiction of surfaces and light effects matters. [right] Clock with striker: Lightflow makes this kind of thing possible, too …

Depending on the pattern used the same model can produce either an ocean with sunset or …

… a cloudscape with thundery atmosphere.

All the options correspond to the settings which are available in the RenderMan format. User-defined parameters can also be added. This means that it is possible to achieve very high quality in combination with RenderMan or BMRT. The light source functions are a real weakness, however. It would also be nice to be able to import files in the RenderMan format.


Lightflow Rendering Tools Version 2.0
The author of Lightflow, Jacopo Pantaleoni, is just 21 years old and has been involved in programming 3D effects since he was 12. Lightflow is more an object-oriented C++ and Python library than a program in its own right, and is designed to be expanded. The example in Listing 1 shows how simple scenes can be programmed and visualised. The strength of Lightflow lies in the description of the shading of light in three-dimensional space. This is also its chief advantage: it can undertake procedural definition of surfaces, volumetric patterns and materials, lighting systems and camera positions. Both programming interfaces are freely available for non-commercial use. Plus, there is a comprehensive set of tools for volumetric rendering and radiosity, which allow the conversion of simple scenes into truly impressive images. This can be




This close-up of a face rendered with Realsoft 4D still looks rather plastic.

But Realsoft 4D has done a really good job with a water surface and waves.

Blender links:
Mops homepage:
RenderMan specification: …enderman/toolkit/RISpec/
Product info on Pixar’s RenderMan: …enderman/
Lightflow Tools:
Realsoft 4D: ■

seen to great effect in the sample images. Both ocean scenes are based on the same model – a surface and a sky – but use different multifractal patterns: in one case waves are created, in the other clouds. In a similar way, motifs such as a glass object with near-perfect light conditions or an alarm clock with a hammer-striker effect are created.

Realsoft 4D Unfortunately there is very little information on Realsoft 4D [7] to be found on the Web. It’s the successor to Real 3D and has been available for nearly ten years. It is primarily a commercial product from the company Realsoft, Inc. for the Windows NT platform, but it is now also available in a beta for Linux. There are a few tutorials on the website under “Links”, along with the obligatory galleries. But there are only vague references to the technical innards which would be of interest to specialists. The results appear primitive compared to other packages too. It is Lightflow that sets the standards in the quality of lighting conditions on surfaces.

Not just for hobbyists
Linux is the perfect platform for learning how to handle 3D graphics – the available tools are just what you need and won’t plunder your bank account. Through RenderMan-compatible software there is even a path into professional modelling. For truly large-scale professional projects, however, a pure Linux-based solution is not yet practicable: the modellers available for Linux are not yet setting any standards, even if the renderers have long been among those at the very top. Nevertheless, for the professional or semi-professional user, modelling under Linux is a serious alternative, so long as the size of the project stays within limits. ■

Overview: Renderers and Modellers

Modellers
Blender – Polygon editing: yes; Nurbs: yes; Subdivisions: yes; Metaballs: yes; Scripting: yes; Materials: built in; Import: Inventor, Blender; Export: RenderMan, VRML1, VRML2, dxf, VideoScape; Renderer: built in
ac3d – Polygon editing: yes; Nurbs: no; Subdivisions: no; Metaballs: no; Scripting: no; Materials: very few; Import: 3ds, dxf, lightwave, triangles, vectors, vrml1; Export: triangles, vrml1, vrml2, pov, renderman, dive, massive, dvs; Renderer: RenderMan, povray
Mops – Polygon editing: no; Nurbs: yes; Subdivisions: no; Metaballs: no; Scripting: yes; Materials: RenderMan/shader editor; Import: 3DMF; Export: 3DMF, RenderMan; Renderer: RenderMan
Realsoft 4D – Polygon editing: yes; Nurbs: yes; Subdivisions: yes; Metaballs: yes; Scripting: Visual Shading Language; Materials: many; Import: rpl; Export: rpl, dxf; Renderer: built in, raytracing only

Renderers
RenderMan – Global illumination: no; Volume materials: yes; Nurbs: yes; Displacement maps: yes; Subdivisions: yes; Field of vision: yes; Motion blur: yes; Binaries available for: Red Hat; Speciality: rather fast, subdivision surfaces; Freeware: no; Compatibility: Maya
Mental Ray – Global illumination: yes; Volume materials: yes; Nurbs: yes; Displacement maps: yes; Subdivisions: yes; Field of vision: yes; Motion blur: yes; Binaries available for: Intel Linux 2.2, SuSE 6.1, Red Hat 6.0; Speciality: subdivision surfaces, global illumination, integrated in the modeller »Softimage«; Freeware: no; Compatibility: Softimage, Maya, RenderMan
Blue Moon Rendering Tools (BMRT) – Global illumination: yes; Volume materials: yes; Nurbs: yes; Displacement maps: yes; Subdivisions: no; Field of vision: yes; Motion blur: yes; Binaries available for: i386 (libc/glibc); Freeware: yes; Compatibility: RenderMan, Maya
Lightflow – Global illumination: yes; Volume materials: yes; Nurbs: yes; Displacement maps: yes; Subdivisions: no; Field of vision: yes; Motion blur: no; Binaries available for: Red Hat 6.1, Debian 2.1; Speciality: additional engines for lighting (e.g. area lights), multithreaded, C++ API with Python wrapper; Freeware: yes; Compatibility: 3dsmax




Internet migration to Linux

100% LINUX

In 1999, Mailbox Internet was the first UK Internet Service Provider to successfully migrate 100% to a Linux platform. Joel Rowbottom tells the story…

Technician Shish Batal installs a new server at Telehouse-1

Although relatively unknown at the time, Mailbox Internet was one of the first independent Internet providers in the UK. The company was established in 1992 to provide Internet services to public relations agencies as an offshoot of parent company Mailbox, which supplies services to the PR and fulfilment industries. To this day the company still runs a wholly owned network, with none of the outsourcing of software or hardware requirements so common among the virtual Internet providers of today. The story starts in 1992. In the beginning we were primarily using Solaris machines (a batch of Sun SparcStation 20 servers provided the core services such as web, mail and DNS) and a sprinkling of Apple kit. The reasons for this were that the hardware was readily available at the time and the equipment was familiar to the staff contracted to set up the ISP from the outset. In the fullness of time, however, the scalability and security of such machines and the cost of running them began to cause problems: a replacement


part for a SparcStation at that point was quite expensive, not to mention the cost of licences for the only operating system available at the time, Solaris. The in-house administration system exported details every so often to a Perl script called “SysSetup”, whose task it was to compile the relevant bits of DNS zone file, email alias file, authentication data and so on, and push them out to the relevant servers. It was referred to quite frequently in the office as “that bit of voodoo which makes things work!” I must admit I feel somewhat responsible for Mailbox Internet’s migration to Linux. In 1998 I joined the company and found a network which was creaking at the seams. Personally, I’d been using Linux since around 1993, being a fan of the Slackware distribution introduced to me by a friend. Additionally, I’d replaced the core systems at three separate companies with Linux servers by the time I got to Mailbox, and it proved to be quite a speedy way of fixing problems for good. A question the company Chairman asked me several times throughout the reimplementation was “why Linux?” Why indeed? Well, mostly it was the reliability of the operating system itself. We’d been using Solaris on most of the servers, and the frequent need for patches has been well documented: it was a mix of versions 2.5.1 and 2.6 with OpenWindows and CDE, which didn’t seem all that efficient on the limited hardware we had available. A second issue was cost: for the price of a second-hand SparcStation 20 we could have purchased a brand new multi-processor x86 server, without any software licencing charges or ongoing support costs (99% of problems we could fix ourselves in any case). A final issue – and certainly one which bore thinking about – was security. There are all sorts of horror stories floating around the Net about most operating systems (including Linux) but it all comes down to how you configure your server.
Indeed, a properly configured Solaris can be locked down, but it takes all sorts of security patches and non-standard tweaks. Linux can be tightened up quickly and easily with the aid of ipchains and by taking a long hard look at what you really need to run on your machine (do you really need to run inetd if you’re running a mail server?).

Keeping the Net alive
The main issue, of course, was that we had to do all the replacement work without impacting the thousands of customers whom we already serviced, so it was going to be a lot of testing and a lot of late nights! Next we put together a schedule of what we needed to provide to our users. First on the list was the news server, which for a typical full newsfeed of the time (May 1998) would eat up to 10GB of disk space a day. We’d recently found a manufacturer who was willing to part with a stack of 18GB SCSI disks, and we installed these into a dual Intel PII-400 motherboard. SMP was still in its infancy so we only put one CPU on the board to start with and added the other as soon as kernel 2.2 was released. A brief installation of our slimmed-down version of Linux, software RAID to “glue” the partitions together into one big mount, INN 2.2 installed, and we were ready to go. The machine performed admirably (first time!) and there was much rejoicing: we’d put the first Linux box onto our network and the users had a reliable Usenet feed. No problem! After that it was quite easy: we replaced the DNS servers very quickly using a copy of ISC’s “bind” package, and our dialup authentication was provided by a copy of Cistron RADIUS. Intel Celeron boxes proved to be great servers. It’s worth pointing out, by the way, that where space is a consideration there are some fantastic 1U-high cases out there if you look around, which will usually work quite happily with standard all-onboard motherboards.

Mail system
Email proved to be a problem: we knew we’d need failsafe machines, but there was the problem of legacy UUCP and all sorts of problems rebuilding sendmail configuration files from SysSetup. Sendmail at that point didn’t have any of the advanced spam filters, ETRN support or monitoring hooks which we needed, and after a series of discussions with peers at other ISPs we decided to plump for Philip Hazel’s “exim” package instead. Exim is a drop-in replacement for sendmail that doesn’t have a configuration file which looks like an explosion in a punctuation factory. It has its drawbacks, but it’s reliable and simple to fault-find in the event of a glitch. We had a steep learning curve to climb, but there’s an experienced Exim support network which proved very useful. The email system itself finally consisted of four machines, with a clustering method designed to make the addition of extra machines simple: one

machine would run a POP3 daemon and two machines would act as redirections for the virtual hosts. The POP3 daemon machine would access a filestore via a networked filesystem. We patched a version of CuciPOP to allow us to authenticate from an alternative password file in order to create virtual POP3 users and enable locking for a networked filesystem rather than the scoreboard-style approach which it already used.

Exhibiting at Linux Expo 2000

Core Web server
The system flew, first time as well – an email system migration has to be controlled very closely, as one single bounced email is enough to let the cat out of the bag and make your users annoyed. The final chief service to migrate to Linux was the core web server. This had the potential to be the iceberg under the Titanic: the previous system administrator had allowed incoming telnet and FTP access to the entire filesystem, so the possibility existed of hidden architecture-dependent code within customer user areas. In any event, we decided to stop telnet access, since almost all users were accessing the system via FTP anyway. The solution we eventually went for involved moving blocks of sites to one of three separate servers and running a mount to a network filesystem. Users were locked into their own directories thanks to the security features in the ProFTPd daemon, and a copy of Apache, appropriately configured, gave us the capability for serving webpages. Admittedly, that one didn’t work completely first time round. We had a lot of older customers coming out of the woodwork within a few weeks: it turned out they’d been accessing through absolute paths, or had been uploading compiled Solaris programs! A brief chat about security solved that one.

10 · 2000 LINUX MAGAZINE 49
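The article doesn’t reproduce Mailbox’s ProFTPd configuration, but the standard directive for locking users into their own directories is DefaultRoot. A minimal hypothetical excerpt:

```
# proftpd.conf excerpt: chroot each user into their home directory on login
DefaultRoot ~
# allow FTP logins for accounts that have no real shell (telnet disabled)
RequireValidShell off
```

With `DefaultRoot ~` the daemon presents each user’s home directory as the filesystem root, so absolute paths into the rest of the server simply stop working.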



After that final server we’d completed the process of migrating all core services from Solaris. It wasn’t that difficult. Certainly the biggest pitfalls encountered were converting the Solaris-standard paths to the Linux-standard paths and making sure that custom-written software was recompiled. Because of the SysSetup script (now rewritten into our in-house Intranet admin system using mod_perl and Apache) it’s a quick job to rebuild the entire ISP from scratch: nobody can manage that much subscription data manually!

Administration Systems

I mentioned earlier the “SysSetup” script that we had written. In the early days the administration machine was an old Altos running via a UUCP link to a Sun SparcStation clone, which fed a text-based report to a Perl script! This might sound very convoluted, but it was certainly an imaginative way of setting things up automatically. The time came, though, when the Altos was ready to go and we had to replace it. There were two alternatives: either we found a shrink-wrap system or we built one. There were no shrink-wrap systems for Linux which were documented or complete at the time, so it was up to us to write one. We tried quite a few approaches to creating the user interfaces: an ncurses interface, GTK widgets, Java applications, and so forth. The final approach we chose was the obvious one: a web-based system with client-side verification using JavaScript. Server-side operations were written in Perl, backed in the first instance by a MySQL database. Apache with SSL meant that we could write several different interfaces for customers and resellers while still retaining a “master console” system for ourselves. Nowadays it all links together with the support ticket database, billing system, reporting system and event logging. It even talks to our Panasonic DBS phone system, automatically logging when a customer calls, who they were passed to and how long they were on the call.

More recently, we’ve been taking a look at running SuSE on the desktops of all the administrative staff at Mailbox. It saves us a lot of hassle on maintenance, since one of the technical staff can always telnet in and fix it, kill an errant process, or whatever it takes. Additionally it gives us that extra bit of security which you just can’t achieve with Windows, although invariably we get the “where’s Microsoft Office?” type questions! The receptionists have been happy with it for the past eight months and we’ve started running it for the marketing staff. The techies, of course, have been running Linux for years…

During 1999 we launched our £35 per month colocation deal and a sudden boom meant that we had to expand our network. We also ran into a problem with logging and network monitoring. When a server or a router dies you need to get straight on to fixing it (even if it’s the middle of the night), so we had to sort out some monitoring software. On that note I can thoroughly recommend “NetSaint” as a monitoring application: combined with a copy of “sms_client” it can page you out of hours if things go wrong. NetSaint will try to relay an email, carry out a DNS query, ping a router or check a webpage. If anything is amiss it’ll try again, only alerting you when something is really broken. We liked NetSaint so much that in the end we began using it to monitor our entire network: while it can be hard getting out of bed, it’s better than having a reputation for being a broken ISP!

The Fulham low-cost colocation facility: a bazaar of server hardware

Linux on a Sparc

So what happened to the SparcStations? We tried originally to build our own distribution: it was a struggle and eventually we gave up. A year later support for Linux on Sparc was a lot more visible, and after a brief brush with the Red Hat Sparc distribution we settled on Debian. The servers are still quite happily ticking over as backup machines for mail and DNS.

Bandwidth
The other issue with providing colo is bandwidth. We use managed switches at Mailbox which have SNMP capability: handy for querying with MRTG, the Multi-Router Traffic Grapher. We also occasionally use the network monitor Ethereal to pinpoint troublesome servers, though we’ve only ever had to do that twice. Of the bizarre mix of servers which we colocate at our Fulham facility, I can safely say that a substantial portion run Linux: there’s an Apple Network Server (fondly referred to as “Erik’s Fridge”), an Acorn Archimedes, a whole raft of x86 machines, various ex-University Sparc boxes; the list goes on. It’s good for those customers to be able to email us and say “I don’t suppose you know how to tweak Exim to do this, do you…?”. The Mailbox network is one to be proud of: it’s expandable, reliable, and received a lot of favourable comments at the recent Linux Expo show at Olympia. All our core servers are sited at Telehouse-1, a lights-out facility in Docklands: we rarely need to visit it to do upgrades. Now we’re preparing for the roll-out of business and domestic ADSL, upgrading the core routing kit to deal with the demand of our colocation services. It rarely stops round here! ■
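MRTG polls each port named in its configuration file via SNMP and graphs the traffic. The switch name and community string below are placeholders, not Mailbox’s real ones:

```
# mrtg.cfg excerpt: graph port 2 of a managed switch every five minutes
WorkDir: /var/www/mrtg
Target[uplink]: 2:public@switch.example.net
MaxBytes[uplink]: 1250000
Title[uplink]: Colo uplink, port 2
PageTop[uplink]: <H1>Colo uplink traffic</H1>
```

Here `2:public@switch.example.net` means interface 2, SNMP community “public”, and MaxBytes (bytes per second, so 1250000 for 10Mb/s) sets the scale of the graphs.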



Meeting the challenges of the global media industry in the 21st century


The merger between AOL and Time Warner announced in January made history as one of the biggest mergers ever. Worth around $340 billion, the combined ‘clicks and mortar’ giant also signifies massive change in the media business. It is the strongest indicator ever that entertainment and the media – like every other business sector – is about to be revolutionised by the Internet.

By Rob Morrison

The last decade of the 20th century saw a wave of uncertainty in the media business. The key question was who would own the two main stages of digital entertainment: production and distribution. As a result of that turmoil, a number of different companies have converged on the media industry. Broadcast and content providers such as Sky and onDigital began competing for digital TV subscribers. Telecommunication companies such as BT and Cable & Wireless promised to deliver video on demand. Services such as pay per view and on-line shopping added a new dimension to digital broadcasting. But those services also created new complexities for media companies. The bar is quickly being raised for companies that want to succeed in the world of media commerce. Consumers have been shown a glimpse of a more flexible future and their expectations in terms of choice, convenience and quality are higher today than ever before.

For all of those reasons, it’s clear that there are plenty of organisational and technological challenges ahead. The first hurdle is the very real imperative to create digital content and to ensure that existing material is digitised for broadcasting over the Internet and by digital service broadcasters across the world. Whatever delivery mechanism is involved, the broadcast media are increasingly content hungry.

Reducing the cost


Companies also need to consider the fact that the digitisation of media reduces the cost of entry to media production. A piece of film that took days of work and hundreds of dollars to produce ten years ago can now be shot digitally, stored and broadcast for a fraction of the cost – and within much shorter time-scales. The net effect is that smaller players are able to move much more quickly into the digital media production space, just as the desktop publishing (DTP) revolution of the 1980s moved the production of camera-ready text and pictures out of the sole preserve of typesetters and designers. The pressures that these trends create are not trivial. Organisations need to move quickly to meet new competitors head on and to ensure that the assets they own in the form of film and TV programmes are available in a digital format. They need to reduce the complexity of managing that material so that they can provide a cost-effective, reliable service to customers and partners. Technology clearly plays a vital role. Once content is in a digital format, it can be stored, managed and distributed as just another information resource.

The challenge is that it is a resource that places massive demands on technology platforms. To meet the expectations of consumers, broadcasters need to provide services of the highest possible quality and availability. They also need platforms that can be re-sized and extended to meet future requirements as well as current needs. A third imperative is that systems can be installed and used quickly by new start-ups moving into the digital broadcasting industry: they must not take organisations into a technological backwater.

Enter Linux
The overall requirement is for a hardware/software platform that is based around open IT standards and flexible technology. Out in front is Linux, an operating environment that has been developed not by an individual software company but by a community of open-source developers. Many major IT companies are engaged in aspects of introducing Linux: SGI, for example, is supporting Linux as a mainstream operating system on its Intel-based platforms. The main benefit of Linux is that it is a truly open system. Unlike Unix, which was developed and extended by individual companies to the extent that scores of different flavours were introduced to the marketplace, Linux can only be changed with the agreement of the open-source community. The ability of Linux to handle large volumes of complex website and server traffic has already made it an obvious choice for organisations running e-commerce operations via web sites. It is estimated that Linux servers are at the heart of more than half of the world’s websites and that a growing number of e-businesses are choosing Linux as the way forward. Media companies face similar challenges to e-businesses. Both need to handle an unpredictable number of enquiries and requests for information and digital products/services over the Internet. Both need to manage on-line payments and have access to multiple databases. Both are running their businesses on IT, and require zero interruptions to service.

The SGI 1200 Internet Server

Internet Server
To meet the needs of the growing Internet market SGI developed the SGI Internet Server, designed specifically for ISPs, application service providers (ASPs) and colocation facilities. Based upon the SGI 1200, the Internet Server includes Internet-specific management, monitoring and security tools with integrated basic services for Web serving and messaging. SGI will also introduce mainstream support for cluster-based systems via the Advanced Cluster Environment (ACE) product. Clustering brings together a number of standard computer processors that perform as a single, fast, highly powerful, highly reliable resource. Clusters can handle multiple jobs or applications simultaneously and can be scaled up quickly when the requirements of the business change. Working with the Linux community, SGI has developed and tested best-of-breed clustering software and management tools. It has also introduced a range of managed services that help companies to get highly reliable, ready-to-run Linux clusters up to speed in a matter of hours. Overall, SGI’s Linux Advanced Cluster Environment and its Internet server environment bring together SGI’s heritage in scalable computing, its knowledge of the global media industry and its requirements from technology, plus the latest developments from the open-source Linux community.

Conclusion
Pressures on the global media industry are set to grow more quickly in the next decade than ever before. As content, infrastructure and communications companies jostle to find their place in the new media world, the importance of implementing reliable technology that supports digital content production, management and delivery cannot be overstated. ■



Running Windows applications under Linux


The need to continue to run key Windows applications for which there are no suitable alternatives is often an obstacle that prevents the adoption of Linux from even being considered. In this article we look at three possible solutions.

It’s unfortunate, but true, that the success of an operating system depends not on its own excellence but on the quality and quantity of the applications that run on it. This is one reason why Microsoft Windows dominates the PC market despite its flaws. New users choosing a computing platform for the first time will tend to choose Windows because of the wide range of applications available for it. Once they have made that decision it becomes hard for them to change, because the applications they are using lock them in to the original platform. Any benefits that could be gained by switching to a superior operating system such as Linux must be offset against the money and time that will have to be spent converting to alternative applications. In some cases there may be no obvious alternatives available. Therefore many companies or individuals who wish to adopt Linux will only be able to do so if a way is found to run these legacy Windows applications for as long as they are needed. Fortunately, as is almost always the case in the Linux world, where there is a problem there is a solution. In fact, as is also often the case, there is more than one solution. In this article we will look at three ways in which users may run - or appear to run - Windows applications on their Linux workstation. These three methods may not be the only ways to integrate Windows systems with Linux, but they are commonly used, inexpensive and capable of working well.

Virtual Network Computing
Virtual Network Computing, usually known by its acronym VNC, is a technology that allows you to view the desktop of one computer system on another. Readers with Windows experience may have used similar technology in remote access products like PC Anywhere. It is the easiest of the solutions described here to set up, but the most expensive in terms of hardware, because the Windows software must run under its own copy of Windows on its own computer, just as it did before [Fig. 1]. VNC is a simple protocol whose implementation consists of two components: a server and a viewer. The server runs on the computer whose desktop you want to be able to access remotely; you use the viewer on a different computer to view it. Currently there are implementations of both servers and viewers for Unix (Solaris, DEC Alpha), Linux, Windows (32-bit) and Macintosh (PPC and 68k). There is also a beta version of a viewer that runs under Windows CE, and a Java viewer that will run on any platform that supports Java. VNC is free and is released under the GNU General Public License, so you can download the source code as well as binaries from the developers’ web site. VNC’s developers originally foresaw it being used to allow users to access Unix applications running on a big Unix system in the computer room from the PCs on their desks. However, with the growing popularity of Linux many people are now using VNC to view Windows screens from their Linux desktops. Unlike some proprietary remote access products the VNC software is small and needs no complicated installation. The Windows version comes with a setup program which installs the server in the Registry so that it starts automatically whenever you start the PC. For Linux there is a precompiled binary version that will run under any distribution that uses glibc. First, you need to make sure that the VNC server is running on the remote PC.
Then, to view its desktop on your screen you simply start the viewer, enter the remote system’s name or IP address and its access password. Once the desktop is visible you can interact with it as if you were actually sitting at that computer. You can copy and paste between Linux and Windows. If you need to share files between Windows and Linux you can enable file sharing on the Windows box and use smbmount to mount the Windows drives from Linux. You have full control over the Windows computer, including the ability to shut down or reboot it, so it is perfectly possible for the Windows box to run without its own keyboard, mouse and monitor.

VNC also allows you to view the remote desktop from a system on which the VNC viewer has not been installed. Launch Netscape, enter the address http://remote:5800/ (where remote is the name or IP address of the remote computer) and, after the password has been entered, the Windows desktop will appear in the web browser. This is possible because each VNC server is also a small web server, accepting connections on port 5800 and serving the Java viewer to any web browser that connects to it. The VNC server uses a fair bit of processor time on the host machine, and response times can be a little sluggish even over a 10Mb/s Ethernet connection with no other traffic. You may experience problems with parts of the Windows display not being updated in the viewer: in that case there are settings which can be changed on the VNC server. VNC may not be the most efficient way to provide access to Windows programs when you’re running Linux, but it is very easy to set up and use.
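The article doesn’t spell out the commands, but a typical session with the VNC tools of the time looked something like this (hostnames, display numbers and share names are examples, not from the article):

```
# connect the Linux viewer to display 0 of the Windows box
vncviewer winbox:0

# or use any Java-capable browser instead of a local viewer
netscape http://winbox:5800/

# share files by mounting a Windows share with Samba's smbmount
smbmount //winbox/c /mnt/winbox -o username=admin
```

The `host:display` notation is VNC’s own; a Windows server normally exports its desktop as display 0, which is why the browser port is 5800 (5800 plus the display number).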

Fig 1: VNC puts a Windows desktop in a window on your Linux desktop

Using VMWare
If you want a solution to the problem of running Windows applications from Linux that doesn’t require a separate computer to run Windows, you could try using VMWare. As the name implies, VMWare creates a virtual machine under which you can run another operating system. You can get VMWare for Linux, which allows you to run Windows under Linux, or you can use VMWare for Windows NT and Windows 2000, which enables you to run Linux under Windows. The latter solution may be attractive if you are dipping a tentative toe into Linux waters and would like to be able to switch between operating systems without the inconvenience of dual-booting. VMWare is a commercial product and costs £199 for a licence, or £66 for home use, but it is available for download and 30-day evaluation [Fig. 2].


Fig 2: VMWare lets you run Windows under Linux in a virtual machine


You’ll need a reasonably powerful PC to use VMWare, since in effect it will be running two operating systems at once. VMWare recommends a system with a minimum 400MHz processor and 128MB of RAM for one virtual machine with applications active on both the guest and the host. We’d go along with that. The product is supported under recent versions of Red Hat, Caldera OpenLinux, TurboLinux and SuSE Linux, and requires XFree86 version 3.3.4 or higher. It will run on other distributions: for the conditions check VMWare’s web site. Setting up a Windows guest operating system on a Linux host involves the following steps: installing VMWare; configuring a virtual machine using the VMWare Configuration Wizard; installing Windows; and installing VMWare Tools for Windows. The process is very straightforward and takes about a couple of hours, most of which is spent installing Windows. The virtual machine is created in its own directory under the Linux file system, along with a virtual hard disk, which is simply a file in that directory. You specify the disk’s size when you run the Configuration Wizard, and it must be partitioned and formatted before you can install Windows. The operating system running in the virtual machine cannot see the real hard disk, so even if a Windows virus activates it can’t harm your Linux system. In fact, anti-virus companies use VMWare in their labs to see what happens when viruses activate; to restore a clean system they simply restore a copy of the virtual machine from a backup. VMWare allows you to install Windows 95, Windows 98, Windows NT 4.0 or Windows 2000 as well as various distributions of Linux. Using VMWare you can create multiple virtual machines and install multiple operating systems within them. You can even run them all at the same time if the PC is powerful enough. One of the most popular uses of the


product is by developers and technical support staff who need access to different setups for test purposes. Windows developers also benefit from the fact that when their program crashes and brings the Microsoft operating system down with it, only the virtual machine is affected, not the host. With Windows running under VMWare you can see the Windows desktop in a window on your Linux desktop and use it just as if it were a real Windows machine. You can copy and paste between Linux and Windows. If you need to share files between Linux and the virtual Windows machine this must be done using Samba, since the virtual machine’s hard disk is not a real hard disk and cannot be mounted in the usual way. VMWare has two networking options: host-only networking and bridged networking. Using host-only networking, the virtual machine can communicate only with the host. Using bridged networking, the virtual machine appears as if on a virtual network connected to your real network, and it can see and be seen by other systems that have the appropriate access privileges. VMWare 2.0 has a feature called Suspend and Instant Restore. This allows you to save a snapshot of the virtual machine with all the windows and files that are currently open; you can then instantly restore from this image. This is a big time-saver as it avoids having to boot up the guest operating system from scratch each time you need it. Compared to VNC, VMWare is less expensive as it avoids the need for a separate Windows PC, though a licence for VMWare itself is not cheap. Because of its hardware requirements your system might need upgrading, and you’ll still need a Windows software licence. However, if you want to run Linux but still need to run Windows, VMWare may be the best technical solution to your needs.

Using Wine
The trouble with both of the solutions described so far is that they still require you to have a Windows software licence, depriving you of the cost benefit of moving to free software. A better method still would be to use Wine, an open source free software project that aims to allow programs written for Microsoft Windows to run under Linux without any Microsoft software being present at all. However, Wine is still under development, and in its current state there’s no guarantee that the Windows programs you need to use will run under it. But it is getting better all the time, and it’s certainly worth a try. The name Wine is said to be a recursive acronym derived from “Wine Is Not an Emulator.” If so, it’s a little disingenuous, because what Wine does is provide an environment that looks to a Windows program like Windows even though it isn’t, thereby allowing the program to run under Linux. It does this by providing functional equivalents to all the Windows Application Program Interfaces (APIs) the program may use. That seems to me a lot like emulation. Importantly, though, Wine doesn’t have the performance penalty usually associated with emulators that need to simulate a different hardware environment in software. All Wine has to do is implement each Windows API function under the environment of X Windows running on Linux: converting Windows graphics operations to X calls, making the Linux file system look like a DOS drive and so on. Wine is an interface between the Windows program on one side and Linux and X on the other. Because of the greater efficiency of Linux, some applications might actually run faster than they would under Windows on the same computer!

Windows is not a monolithic whole so much as a collection of dynamically linked libraries (DLLs) working together in approximate harmony. Since Microsoft often adds to these DLLs (as well as periodically modifying them to make them incompatible with one another) the Wine developers have a moving target to hit. The likelihood that a particular program will run under Wine depends on whether it uses any APIs that haven’t yet been fully implemented. Most older Windows programs written in C, and many programs developed using Borland C++ or Delphi, run well under Wine since they use few APIs other than those provided by the Windows core components. These basic APIs have long been supported by Wine and are pretty thoroughly tested. Programs developed using Visual C++ or Visual Basic, which are dependent on the Microsoft Foundation Classes and COM/ActiveX components, generally won’t run, or behave unacceptably when they do. This means, for example, that Wine won’t run Microsoft Office. The only way to find out whether any given program will work is to try it. As an interim measure, some programs that don’t work well under Wine can be made to work better by letting them use the original Windows DLLs instead of the Wine replacements.
This can be done by setting up a Windows directory on the Linux file system, copying over the relevant Windows DLLs and making some changes to the Wine configuration file. Of course, if you use even one Microsoft DLL in order to get your program to work, you are legally obliged to have a Microsoft Windows software licence for the system. A good example of what is possible using Wine is Corel’s WordPerfect Office 2000 for Linux [Fig. 3]. Corel did not port the applications in this office suite to Linux. Instead, it invested in the Wine project to accelerate development to the point at which the Windows executables in the suite would run. This work has undoubtedly helped Wine to run many more applications than WordPerfect Office, but there is still much work to be done. Nevertheless, the fact that a major office suite now runs under Linux using Wine is a feather in the cap of the Wine developers. Using Wine is a matter of downloading and installing the latest version of the software and then


editing the configuration file /etc/wine.conf to map Windows drive letters on to directories in your Linux file system. You must also set the location of the directories that will substitute for the Windows, system and temporary files directories and set a path. You can point Wine at real Windows directories mounted on your PC, but if you do, you won’t know whether your application is only running courtesy of files on the Windows PC. To run a Windows program type “wine myprog.EXE params” into a console window and see what happens. You may see messages appear in the console window as the program runs. They can provide useful diagnostic information if the program doesn’t run properly or crashes. Don’t think you are home and dry if the program’s main window comes up. It’s important to test every feature of the program thoroughly to be sure that it all works as it would under Windows itself.
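As a sketch of what that drive mapping looked like in the old-style wine.conf format (the paths and labels here are made-up examples, not from the article):

```
;; /etc/wine.conf excerpt: map drive C on to a directory tree
[Drive C]
Path=/home/joe/wine-c
Type=hd
Label=MS-DOS
Filesystem=win95

;; tell Wine where the Windows, system and temp directories live
[wine]
Windows=c:\windows
System=c:\windows\system
Temp=c:\temp
Path=c:\windows;c:\windows\system
```

Any Linux directory can stand in for a drive letter; Wine presents the tree under Path to the Windows program as that drive.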

INFO
VMWare:
VNC Home Page: m/vnc/
The Wine HQ: ■

Conclusion The best way to run Windows programs under Linux, if you can, is to run them using Wine. It uses the least resources, performance is better and you should not need a Windows software license. But Wine may not yet support your program, particularly if it was developed using Microsoft tools. Wine is an open source project, however, so if you have the skills consider trying to solve the problems that prevent your application from running and so contribute to the project. If you can’t get your Windows programs to run using Wine, running them under Windows using VMWare or on a separate PC running VNC will almost certainly solve the problem. Whichever solution you adopt, the fact is that the need to run certain Windows programs need not be an obstacle to migrating from Windows to Linux. ■

Fig 3: Corel WordPerfect Office runs under Linux using Wine




Redundant Array of Inexpensive Disks


When a single hard disk isn’t fast enough, or its storage capacity is insufficient, one solution is to connect several drives together. As an added benefit, this can be done in a way that increases reliability by allowing individual drives to fail without losing data.

Three scientists at the University of California, Berkeley first hit on the idea, more than 13 years ago, of making a resilient and high-performance storage medium out of separate hard disks. They defined five variants of the design and called it a “Redundant Array of Inexpensive Disks”, RAID for short. The acronym is nowadays often also said to stand for “Redundant Array of Independent Disks”. In RAID levels 1 to 5 one drive can fail without the system having to stop working. Later, two more configurations were added: RAID 0, with no fault tolerance, and RAID 6, with additional fault tolerance.

Hard or soft?
Big corporations are the main users of RAID technology. This isn’t surprising: the hardware isn’t cheap, since apart from the bus controllers (PCI/SCSI) it must include a complete processor unit and a few megabytes of buffer memory (see Fig. 1). A RAID controller acts just like an ordinary hard disk controller, although special drivers are often needed by the operating system. For information about specific controllers see the test report on hardware RAID controllers with Linux support in this issue on page 18.

As the performance of processors and the complexity of operating systems has increased, it has also become possible to implement error correction using redundant disks in the server itself. This variant, known as “Software RAID” (or “SoftRAID” for short), is enjoying ever-increasing popularity, especially with home users looking for a useful and cheap way to use any old hard disks that may be lying around. (Software RAID is dealt with in more detail in another article in this issue on page 62.)

At the other extreme, an external SCSI-to-RAID bridge can be used without the need for any special device drivers. From the point of view of the SCSI adapter in the server this behaves like an ordinary SCSI drive. Figure 2 shows a RAID array with integrated SCSI converter.

A RAID array owes its fault tolerance to the fact that it contains at least one extra hard disk which, by a variety of methods, allows the data on a failed drive to be recovered. If a drive fails it should nevertheless be replaced as soon as possible, since if a second drive also fails all the data will probably be lost.
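The recovery methods alluded to here mostly boil down to parity: in RAID levels 4 and 5, the parity block is the bitwise XOR of the corresponding data blocks, so any single missing block can be recomputed from the survivors. The arithmetic can be sketched in a few lines of shell (the byte values are arbitrary illustrations):

```shell
# Three "data disks" each contribute one byte; the parity disk stores their XOR
d1=65; d2=66; d3=67
parity=$(( d1 ^ d2 ^ d3 ))

# Disk 2 fails: XOR the surviving data with the parity to rebuild its byte
rebuilt=$(( d1 ^ d3 ^ parity ))
echo "$rebuilt"   # prints 66, the lost value of d2
```

The same identity (a ^ b ^ b = a) is why losing a second disk before the rebuild finishes is fatal: with two unknowns, one parity equation is no longer enough.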

Fail safe

[above] Fig. 1: a multi-channel RAID controller

According to the laws of probability a redundant disk array, when used correctly, should only be out of action for a brief period about once every twenty thousand years. However, leaving aside for a moment the symptoms of ageing of the other components, it's possible for a defective hard disk to cripple the whole (SCSI or IDE) bus (for example, turning it into a "babbling idiot") so that other drives are also temporarily unable to function. If this happens it will cause the entire system to stop working. It's true that SCSI hard disks usually die quietly: they just fall silent. But to play it completely safe, it's best to devote a separate channel to each hard disk. This will also avoid any bottlenecks in slower bus systems, but the improvement obviously comes at greater cost.

[above] Fig. 2: SCSI-to-RAID bridge based on BSD: configuration is done via a serial interface

Fig. 3: Special cartridges are used to allow drives to be hot-swapped

In order to be able to exchange faulty media during operation (a process known as "hot swapping") hard disks are mounted in special cartridges, which slot into a cage. These cartridges ensure that destructive electrical potentials are discharged on insertion and that the power supply to the drive starts cleanly on insertion and is cut off before removal. The RAID controller software must also be able to correct any transfer errors that might occur due to signal interference during the swap procedure, for example by repeating the read or write cycles affected.

When a defective drive is replaced, reconstruction of the data or error correction codes is performed. Because this can involve examining every bit of data in the RAID system the process can take several hours. During this time, use of the server may be subject to a few restrictions on performance, although the reconstruction should only run when no data read or write operations are pending. If a disk fails on a Saturday, which is the administrator's day off but a day when the system's users are very busy, the weekend can be saved for everyone by using a "hot spare" hard disk. With this, if a drive fails the data reconstruction on to the spare drive starts automatically. Replacing the defective medium is then not quite so urgent. The price to pay for this is that the capacity of the spare disk remains unused during normal operation. For this reason, this solution is only deployed in mission-critical applications.

In all there are more than a dozen different RAID levels, each involving descendants or combinations of the basic forms. An administrator should spend some time thinking about precisely which level is best suited to the needs of the applications that will use it. The overview in Table 1 should be taken with a pinch of salt: depending on the application, it could look completely different.

Table 1: RAID levels for servers at a glance

Level                                         0     1       2-4   5           6     10
Minimum hard disks                            2     2       3     3           4     4
Data hard disks + error code carriers         n+0   1+1     n+1   n+1         n+2   n+n
Read performance, normal operation (factor)   n     1 to 2  n     n to 1.5*n  n     n to 2*n
Read performance, one disk failed             0     1
Write performance                             n     1       n     n           n     n
Fail-safe                                     -     ++      +     +           +++   ++
Performance/price ratio                       ++    0       -     +           --    0

10 · 2000 LINUX MAGAZINE 59

[left] Fig. 4: Not really a RAID: RAID 0 increases transfer speed at the cost of reliability
[right] Fig. 5: Redundancy and high transfer performance are achieved by combining RAID 0 with an error correction process.

Info
D. A. Patterson, G. Gibson, and R. H. Katz, "A Case for Redundant Arrays of Inexpensive Disks (RAID)", Report No. UCB/CSD 87/391, University of California, Berkeley, CA 1987.
Nick Sabine, "An Introduction to RAID": http://www-student.furman.edu/users/n/nsabine/cs25/
Storage Technology Corporation: http://www. ■

'Striptease' with RAID 0

At the lowest RAID level data is stored without any redundancy. There is therefore no resilience or fault tolerance. Data is written in blocks or "chunks": the first block to the first drive in the array, the second block to the second drive and so on. For this reason, RAID 0 is often referred to as "data striping". The benefit of RAID 0 is not automatic error recovery but improved performance. It is possible to achieve almost n times the performance of a single hard disk, where n is the number of drives in the array. This is achieved because n read or write operations can take place simultaneously instead of sequentially. However, the probability of failure also increases n-fold. Since a RAID 0 subsystem has no redundancy, if there is a fault the data is normally lost. Files smaller than the block size (depending on the file system used) do have a certain chance of survival, but restoring them manually is tiresome and time-consuming. RAID 0 is thus certainly not a Redundant Array of Inexpensive Disks and is suitable only for applications in which large amounts of data must be recorded very quickly only to be discarded after a short processing period, such as in uncompressed non-linear video editing.
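The block-to-drive mapping described above can be sketched in a few lines (a minimal illustration, not taken from any real RAID implementation; the drive count of three is an arbitrary assumption):

```python
# Map a logical block number to (drive, block-on-drive) for a RAID 0
# stripe set: block 0 goes to drive 0, block 1 to drive 1, and so on,
# wrapping around to the first drive again.
def raid0_locate(block: int, drives: int) -> tuple:
    return (block % drives, block // drives)

# With three drives, six consecutive blocks land on drives 0,1,2,0,1,2,
# so up to three of them can be transferred simultaneously.
layout = [raid0_locate(b, 3) for b in range(6)]
```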

Mirror on the wall

RAID Level 1 is the simplest form of RAID, and is also known as "disk mirroring". It creates redundancy very simply by writing all data twice: once to each of two disks. If a hard disk goes down, the data is still there, intact, on the second drive. Since each block of data is synchronously duplicated on the two disks there is no performance increase (or decrease) compared to using a single hard disk. Reading small files also isn't faster, but big files can be read from the two disks in parallel (if


the bandwidths of the buses allow such a thing), i.e. chunks 1, 3 and 5 can be read from disk 1 along with chunks 2, 4 and 6 from the other disk. However, the blocks then have to be re-interleaved. RAID 1 can be useful in applications like web servers, file servers or news servers, where some fault tolerance is needed and data tends to be read more often than it is written. The disadvantage, however, is that you are giving away half your dearly bought storage capacity.

RAID 2/3/4: One more doesn't hurt

If a striping array (RAID 0 with n drives) is provided with an additional drive that is used to store error correction and checking (ECC) codes, higher transfer rates and a lower risk of unrecoverable errors are combined. If one disk from the stripe array goes down, the lost data can be completely restored from the contents of the remaining drives plus the error correction information. The transfer rate during write operations (and the speed of restoring) is a function of the processing power of the ECC calculation unit. RAID levels 2 and 3 both use an algorithm developed in 1950 by R. W. Hamming to calculate the ECC codes; they differ only in the chunk size that is used. RAID 2 uses a chunk size of just one bit: its benefits are more theoretical than anything else and you won't find any RAID 2 arrays in real life. There are commercial implementations of RAID 3 (with small chunk sizes) but they are seldom used. Higher RAID levels are preferred. RAID Level 4 uses considerably larger chunks than its predecessors (usually 4 to 128 KB) and uses a simple exclusive-OR operation to generate the error correction codes and to restore data. Figure 5 shows an example with a chunk size of four bits.
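The exclusive-OR scheme can be illustrated with a few lines of Python (a sketch using made-up four-bit chunks in the spirit of Figure 5; a real controller does this per chunk in hardware or driver code):

```python
from functools import reduce

# Parity is the XOR of all data chunks. Because XOR is its own inverse,
# XORing the parity with the surviving chunks reproduces the missing one.
def parity(chunks):
    return reduce(lambda a, b: a ^ b, chunks)

data = [0b1010, 0b0110, 0b1100]   # chunks on three data drives
p = parity(data)                  # stored on the ECC drive

# Drive 1 fails: recover its chunk from the parity and the survivors.
recovered = parity([p, data[0], data[2]])
```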

The compromise

If data and error codes are distributed equally over the n+1 hard drives as shown in Fig. 6, then n+1 data blocks can be read at once. For example, to get the first six data blocks, the RAID solution reads blocks 1 and 6 from the first drive, 2 and 3 from the



second and 4 and 5 from the third drive (two block operations per drive). With RAID 2/3/4, the blocks 1, 3 and 5 would be read from the first drive and 2, 4 and 6 from the second (three block operations per drive being necessary). The redundancy information is not used for read operations in normal situations. The amount of space used for error correction purposes is the same as for RAID 4 so, given the benefits, it is hardly surprising that RAID 5 is the preferred level used in practical applications.
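The rotating parity placement that distinguishes RAID 5 from RAID 4 can be sketched as follows (an illustration only; the formula shown is one common "left" rotation convention, not a statement about any particular controller):

```python
# RAID 4 always stores parity on the last drive. RAID 5 rotates the
# parity drive from stripe to stripe, here starting at the last drive
# and moving left, so reads and writes spread over all drives.
def raid5_parity_drive(stripe: int, drives: int) -> int:
    return (drives - 1 - stripe) % drives

# On four drives, parity moves one drive to the left per stripe:
rotation = [raid5_parity_drive(s, 4) for s in range(4)]
```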

No worries!

In especially critical applications provision must be made for the simultaneous loss of two disks. RAID 5 isn't up to this, so to meet this requirement we have RAID 6. RAID 6 calculates two different error correction values from n data chunks and, as in RAID 5, distributes these evenly over all hard disks. The Reed-Solomon error correction code is frequently used. Calculating this requires considerable computing power: consequently RAID 6 systems are not exactly cheap.

Other configurations

A duplicated disk stripe with at least four media as shown in Fig. 8 is also often referred to as RAID 10 (0+1). The hardware RAID controllers needed to implement this are relatively cheap, which helps to offset the cost of providing twice the storage capacity that would otherwise be needed. This solution is usually implemented using ordinary disk controllers with the operating system taking over the RAID function, so in fact it is really a cleverly disguised software RAID solution. Other RAID derivatives are RAID 30 or 50. In RAID 50, for example, three RAID 0 arrays are used as data storage for a RAID 5 configuration. Other RAID levels are also defined, though they are rarely used in practice. RAID 7 works in a similar way to level four, but requires a microcontroller which processes all I/O activities asynchronously, sorts them appropriately

and buffers data. All current RAID controllers (and software solutions) have this ability built into them anyway, so this RAID level is obsolete. However it is still sometimes used in marketing to make a product appear to have something special. The term “RAID 100” refers to parallel accesses to a RAID 1 system. This is also only possible with the aid of a dedicated microcontroller and is now rarely used. Software RAID 0 evenly distributes data chunks over all the available hard disks. The same effect can be achieved using the Logical Volume Manager by specifying the “strip” parameter. The Linux LVM, incidentally, is planned to include support for equivalents of RAID 1 and RAID 5.

Fig. 6: RAID 5 is now state of the art in industry

Conclusion

A RAID for all seasons does not exist! Each RAID level has its own advantages and disadvantages. There is usually a price to be paid for high performance and fail-safe features, and so the final decision will often be subject to budgetary constraints. RAID Level 5 is an outstanding compromise and for this reason it is widely used. Depending on the application, however, adequate protection for the data on a server can be economically obtained using the "poor man's RAID", RAID 1. ■

[left] Fig. 7: Dataflow under RAID 5 in normal operation and during reconstruction.
[right] Fig. 8: RAID 0+1: parallel accesses as with RAID 1 using low-cost controllers




Configuring software RAID


High performance hardware RAID controllers are expensive. But there is a cheaper alternative. Software RAID is also available for Linux. However, before you can use it the hurdle of installation and configuration has to be overcome. Software RAID is not an option favoured by every server administrator. It steals valuable processor time from the server. However, nowadays even inexpensive computers such as you can buy at your local superstore have computing power that would make the super-computer of a decade ago look puny. At the same time, small workgroup or intranet servers often only have to serve a couple of web pages or files via the bottleneck of Fast Ethernet, and perhaps distribute the occasional email. These tasks on their own are not enough to make the processor break out into a sweat. In this situation it is possible to bypass the usual hardware solution and save a tidy bit of money by using a software alternative instead. SoftRAID isn't the right solution if you are considering a heavyweight multiprocessor server with lots of capacity and hot-swap capable components. In that situation the cost of a true RAID controller will add an insignificant amount to the cost. The simpler commissioning and maintenance of the

hardware disk jugglers also has advantages in mission-critical domains. Where the demand for peak performance is vital a hardware-based system is the only possible choice. But if your demands are more modest and you are prepared to compromise on commissioning and maintenance you can manage perfectly well with this free software solution.

Obtaining the software

Since the official release of the first SoftRAID implementation in Linux kernel 2.0 a lot has happened. Originally only RAID Level 0 was supported. Also, anyone who wanted to install a bootable root file system on it had to patch the kernel and grapple with the Initial RAM disk. However, from kernel version 2.2 hard disk configurations using RAID 0, 1 and 5 can be recognised automatically by the (patched) kernel. Using an (also modified) LILO one can also boot up without any problems from RAID 1.



No need for backups?

Among home computer users SoftRAID is enjoying increasing popularity. Home users don't tend to be very diligent about making backups and are reluctant to fork out for an expensive but rarely used tape drive. Writeable CD-ROMs aren't the answer, since a full backup won't usually fit on the storage medium. For this reason it often seems a good idea to back up by making a copy of the system on a second hard disk. Hard disks are relatively inexpensive and markedly faster than either CD-RW or tape. Using a hard disk for backup isn't very practical if you need to install and then remove the drive from the system each time. But if the second drive is fitted permanently inside the computer it is no trouble at all. In that case, as long as the two drives are identical, they can easily be run as a software RAID 1 array to make the most of the potential increase in performance (see the article "Raid Basics", also in this issue).

If you consider it important to keep your backup separate from the computer you can still do so. To perform a backup all you need to do is connect the disk and wait for synchronisation to occur in the background (see the box "Background Rebuild"). After an hour or so the mirror disk will contain a copy of the system and can be removed from the computer and put back in a safe place. If your main drive fails this will save you the extremely tedious business of restoring from a backup. Instead, you simply replace the failed drive with the backup drive and switch on.

This may sound too good to be true. And there is a small catch: if a voltage spike or some other disaster were to zap both drives during the synchronisation process it is likely that all your valuable data would be lost. However, the chances of that happening are, as you can imagine, very slight.
Nevertheless, for anyone who installs SoftRAID to increase failsafety and improve transfer performance in a commercial environment a traditional backup strategy remains an absolute must.

Readers who are put off by the description of installation and configuration that follows would be well advised to consider the latest Red Hat distribution. From version 6.1 on, the graphical installer supports the option of bootable RAID 1 and thereby saves a great deal of challenging work.

Configuration

After installation comes the configuration of the RAID array. You will be spoilt for choice here as to which RAID level is best suited to your needs. (To help you choose, see the article "Raid Basics" in this issue.) Configuration is equally easy whichever variant you choose. In the file /etc/raidtab the RAID drives must be defined and then initialised just once with mkraid. Later, the kernel starts the RAID configuration automatically, so any leftovers in /etc/rc.d/boot.local should be removed. Listing 1 shows a simple RAID 1 configuration. The options are largely self-explanatory: this is a level 1 RAID device /dev/md0 consisting of two partitions (/dev/sda4 and /dev/sdb4) using a chunk size of 8 KByte. The statement "1" in persistent-superblock is needed so that the RAID configuration is automatically recognisable by the kernel right from boot-up. All RAID partitions must also be given the partition ID "fd" ("Linux raid autodetect") using fdisk. md stands for "Multiple Device". With this type of device, besides a RAID array, hard disks can be arranged in a linear fashion with respect to each other so as to make what looks like one big hard disk.

Listing 1: RAID 1

raiddev /dev/md0
        raid-level              1
        nr-raid-disks           2
        persistent-superblock   1
        chunk-size              8
        device                  /dev/sda4
        raid-disk               0
        device                  /dev/sdb4
        raid-disk               1

Info
RAID-Patches
Software-RAID HOWTO: howto/Software-RAID-HOWTO.html ■

Listing 2 shows a RAID 5 configuration consisting of four hard disks. Defective disks are removed

Background Rebuild

Defective or replaced hard disks are not allowed into the SoftRAID array until they have been manually integrated using raidhotadd. After that, background reconstruction can commence. The operating system and the applications running on it are largely unaffected by this: recovery of data will take place only when no other I/O transfers are pending.

[root@bee /root]# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5] [translucent]
read_ahead 1024 sectors
md0 : active raid1 hda2[0]
      2297216 blocks [2/1] [U_]
unused devices: <none>
[root@bee /root]# raidhotadd /dev/md0 /dev/hda3
[root@bee /root]# cat /proc/mdstat    # disk reconstruction
Personalities : [linear] [raid0] [raid1] [raid5] [translucent]
read_ahead 1024 sectors
md0 : active raid1 hda3[2] hda2[0]
      2297216 blocks [2/1] [U_] recovery=6% finish=23.3min
unused devices: <none>




RAID kernel installation

The following installation instructions have been tested with Red Hat 6.1 and kernel 2.2.14, but the process should work just as well with other distributions and more recent kernel versions. A normal version 2.2 Linux kernel does in fact already know about RAID levels 0, 1 and 5, but it has trouble automatically recognising the partitions being handled by them. Because of this, some assistance in the form of a kernel patch is required. Also, it won't hurt to install the latest RAID tools. Both of these can be found on the Red Hat web site. In addition, the original kernel sources will be needed.

cd /tmp
wget .../linux-2.2.14.tar.gz
wget .../raid-2.2.14-B1
wget .../raidtools-dangerous-0.90-20000116.tar.gz

In the first step of installation, the kernel has to be unpacked, patched, configured and installed. When performing the last two steps you may find the manual that came with your Linux distribution to be helpful. The RAID-specific options in the kernel configuration are used, and all RAID levels are compiled in.

cd /usr/src && mv linux linux.old
tar -xzf /tmp/linux-2.2.14.tar.gz
cd linux
patch -p1 < /tmp/raid-2.2.14-B1
make menuconfig
make clean && make dep && make bzImage
make modules && make modules_install
cp arch/i386/boot/bzImage /boot/zImage-raid
vi /etc/lilo.conf
lilo && reboot

After rebooting, the command cat /proc/mdstat will show whether the previous procedure was successful. Now the latest RAID tools should be installed. Then the RAID configuration can begin.

cd /usr/src
tar -xzf /tmp/raidtools-dangerous-0.90-20000116.tar.gz
cd raidtools-0.90 && ./configure && make && make install

Table 1: RAID-Tools 0.90
mkraid          one-off installation and boot-up of a SoftRAID configuration
raidhotadd      insert a replacement or spare disk
raidhotremove   remove a defective disk from the group
raidstart       start a multiple device (only needed in exceptional cases)
raidstop        stop a multiple device

from the RAID system using raidhotremove and, after the swap, reconnected using raidhotadd. If removable cradles are used, defective SCSI hard disks can even be swapped while operations continue, or "hot spare" disks added later. The kernel interface is used for this:

echo "scsi add-single-device <controller> <bus> <target> <lun>" > /proc/scsi/scsi
echo "scsi remove-single-device <controller> <bus> <target> <lun>" > /proc/scsi/scsi

In this way, SCSI disks can be entered into or removed from the device table. (This can also be used to simulate crashes.) Unfortunately hot-swapping does not work with IDE drives yet. Also, with IDE-based RAID it is only possible to connect a maximum of eight devices. When an IDE channel is occupied by two hard disks the transfer rates sometimes reduce considerably. However, the performance is still adequate for many applications.

Listing 2: RAID 5

raiddev /dev/md0
        raid-level              5
        nr-raid-disks           4
        nr-spare-disks          0
        parity-algorithm        left-symmetric
        persistent-superblock   1
        chunk-size              32
        device                  /dev/sda4
        raid-disk               0
        device                  /dev/sdb4
        raid-disk               1
        device                  /dev/sdc4
        raid-disk               2
        device                  /dev/sdd4
        raid-disk               3

RAID 10 can be obtained simply by creating a RAID 1 array consisting of two RAID 0 configurations. For device the corresponding /dev/md* is entered instead of an actual partition.
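In /etc/raidtab such a nested configuration might look like the following sketch (an illustration only: it assumes /dev/md0 and /dev/md1 have already been defined earlier in the same file as the two RAID 0 stripe sets, and the chunk size is an arbitrary choice):

```
raiddev /dev/md2
        raid-level              1
        nr-raid-disks           2
        persistent-superblock   1
        chunk-size              8
        device                  /dev/md0
        raid-disk               0
        device                  /dev/md1
        raid-disk               1
```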

Booting up ReiserFS-RAID

At present, SoftRAID only works well with LILO using RAID 1. For all other variants it is still necessary to struggle with initrd and other tedious matters. In the meantime RAID works with a root ReiserFS, which not long ago would not have been possible:

...
[test@lab1 test]$ mount
/dev/md0 on / type reiserfs (rw)
...

When formatting an ext2 partition it is also possible to tell the file system driver, by means of the "stride" parameter, what chunk sizes the device under it prefers to process. The choice of chunk size for the file system and multiple device has a significant effect on the performance of the SoftRAID system.

Kernel 2.4

Unfortunately the RAID implementation in the upcoming 2.4 kernel series is not yet ready to go. But since the new kernel is still awaited (what's more, everyone is cordially invited to help out!), we can hope that the RAID support will be completed by the time it comes out.

More productive Linux

Software RAID is already bomb-proof and has been in use for some time in mission-critical applications. For example, the web server of Linux New Media AG (publisher of "Linux-Magazin" and "Linux-User" in Germany) has been running a RAID 1 configuration without any problems since it was set up almost a year ago. ■



Configuration and operation of DNS servers


The nameserver BIND is practically standard under Unix and Linux. Unfortunately, it is very sparsely documented: the man pages, for example, are at best useful as a reference. And yet well-maintained name servers are essential for users of all Internet services in any organisation.

Name allocation and resolution in the Internet and other IP-based networks has a long history. Since the 32-bit addresses by which network nodes are actually addressed are hard to remember, computers quickly began to be given names. At first people made do by setting up a file, HOSTS.TXT, which allocated a name to each IP address in the network. This file, which is still in use today (and in Linux is called /etc/hosts), contains an IP address, an allocated host name and optional alternative aliases by which this computer can also be accessed. In the early days of the ARPANET, with a few dozen computers, that was adequate. Even today in some simple intranets this is still a workable solution, but not in the modern hierarchical structure of the Internet (the successor to ARPANET). So pretty soon the search was on for a solution which, firstly, makes the most of the advantages of this hierarchy and, secondly, makes it unnecessary to maintain separate, but nevertheless consistent, host files on each computer. This search resulted in several Requests For Comments (RFCs), including RFC 1034 (Domain Names – Concepts and Facilities) and RFC 1035 (Domain Names – Implementation and Specification).

The Domain Name System


To make it possible to manage the millions of computers of the Internet a hierarchical name structure was introduced. The root of this name is a period ”.”, followed by one of the global top level domains laid down by the IANA (Internet Assigned Numbers Authority), for example com, edu, org, uk or is. For each of these name domains, in turn, various organisations assign subordinate domains. Thus, for example, Nominet is responsible for all names in the uk domain. If you register a name, you can yourself create a hierarchy with as many additional subdomains as you like, i.e. subordinate names. The ”.”, which specifies the root, is left out in everyday use, which means that for example uniquely identifies a computer called penguin, which is part of the subdomain production, which in turn are subordinate to jaguar and the top level domain com. When TCP/IP has been installed on a computer (which is the case for all computers running Unix or Linux computers) then at least one name server must be specified. This name server will resolve the host names into IP addresses. Often, particularly in the case of dial-up connections using the PPP protocol, the name server is assigned dynamically. Either way, a name server must be known to the computer because only a name server can convert host names into IP addresses, and only IP addresses can be used for communication across the network. If an application wants to resolve a host name to determine the associated address, the procedure that is followed is governed by the file /etc/host.conf, in which the search sequence is defined. Normally, the file /etc/hosts will be searched first. After that, if no



matching name has been found there, the name server is contacted. The name server then either processes the enquiry itself, if its database holds the data for the name domain in question, or it passes it on to the next server in the hierarchy. Let’s look at an example. If the server responsible for receives an enquiry for, it will pass on the enquiry to the server for the entire top level domain com. On the other hand it could resolve by itself. The com server knows the address of the name server and delivers this to the enquirer, which can then repeat the enquiry to it.

The BIND8 name server package

The Internet Software Consortium (ISC) designed and implemented the domain name server software which has been the standard used to date. This system is called BIND (Berkeley Internet Name Domain). Although Berkeley itself uses BSD as its operating system, BIND is now used on practically all important platforms and included in almost all Unix systems and Linux distributions. With the change from BIND4 to BIND8, the current version, a major alteration (resulting in a certain amount of simplification) of the configuration occurred.

In BIND slang a domain is referred to as a zone. The server responsible for the zone has the database containing the master data. Any available secondary servers, which intervene in the event of a failure or overload of the primary master server, have a copy of this data (the slave zone). When there is any change in the configuration of the zone the slaves are automatically provided with the new domain data by the master.

The server consists of a daemon process called named, which is usually started or stopped in System V style by a boot script (usually /sbin/init.d/named). If the configuration is changed, the daemon process must be persuaded using kill -HUP to do a new read-in of the files. Any error messages, and the results of communication with the other name servers that must be informed of the changes, are found in the system log. So that these messages are not overlooked, it is recommended that this file is always displayed on a console using the command tail -f /var/log/messages.

Configuration of named

BIND8 has a central configuration file, /etc/named.conf, which, apart from general parameters, determines the zones that are controlled and their associated zone configuration files (Listing 2).

Slave zones

Operating a secondary name server (in other words, controlling a slave zone) is not difficult.

Once the brief zone definition has been made in /etc/named.conf, defining the master server and naming a file in which BIND is to store the zone data, the job is done. All necessary database updates are fetched automatically by named, as long as the associated primary server is correctly configured.

Fig. 1: The position of in the name hierarchy of the Internet

Zone files

For those domains for which your name server is configured as the primary server, however, more care is needed. In particular the distributed, hierarchical architecture of the Internet domain concept does not permit trial-and-error methods of configuration. You must bear in mind that you have no access whatsoever to most servers which hold zone data defined by you, whether it is a secondary server (for example your ISP's name server), or some other server which after a query has temporarily stored your data in its cache. You can to a large extent define the lifetime of such invalid details through the time-to-live of the records defined by you (we will show you how later). Usually, you will define the root zone and at least two master zones. For each of the master zones there are two database files; one for resolution of names into addresses and one for the reverse procedure of resolving addresses into associated names (reverse lookup). Apart from this there is a file with the addresses of the root name servers, which control the data on the top level domains and which your name server may need to contact as a last resort. This file should be checked at regular intervals to make sure it is up to date; it can be obtained from

Listing 1: HOSTS.TXT example file
mysql ftp mail www


Copy this file into the directory with your zone files (usually – and as stated in the sample configuration file – this is /var/named) and give it the name which you have also specified in named.conf (here, root.hint). As the starting point for your own zone files, if

your /etc/hosts file is large, you can use the tool h2n. This tool converts /etc/hosts files into BIND zone databases. You can execute this program at regular intervals if your /etc/hosts file always contains the latest data. Usually, however, zone databases are managed by hand.

Listing 2: Example of a /etc/named.conf file.

/* Sample configuration for BIND 8.1 or newer
 * install as /etc/named.conf
 *
 * Author: Stephan Lichtenauer
 * Note: All IP addresses/host names have been found
 */

#
# General server parameters
#
options {
        # Directory in which the zone databases are stored
        directory "/var/named";
        # By default, in case of errors in the master zone files
        # the server will be stopped
        check-names master warn;
        pid-file "/var/named/slave/";
        datasize default;
        stacksize default;
        coresize default;
        files unlimited;
        recursion yes;
        multiple-cnames no;
        # By default there is a listen on port 53 on all available
        # interfaces; the following commands could
        # specify this more precisely:
        #listen-on {; };
        #listen-on port 1234 { !; 1.2/16; };
        query-source port 53;
};

#
# Logging options for various problems:
#
logging {
        category lame-servers { null; };
        category cname { null; };
};

#
# Pre-defined "Access Control Lists" (ACLs):
# "any"        lets any hosts in
# "none"       prohibits all hosts
# "localhost"  allows connections from this computer
# "localnets"  allows connections from LANs (
#
# Define own ACL:
acl secondaries {;; };

#
# With the "server" instruction, other servers can be assigned
# certain properties.
#
# A server marked as "bogus" is never queried
server { bogus yes; };
# If the other server has also installed at least BIND 8.1,
# zones can be transferred more compactly.
server { transfer-format many-answers; };

#
# Defining the root zone
#
zone "." IN {
        type hint;
        file "root.hint";
};

#
# Defining the "localhost" zone
#
zone "localhost" IN {
        type master;
        file "";
        check-names fail;       // errors here would be fatal
        allow-update { none; }; // of local interest only
};

#
# Defining reverse lookup for the local host (addresses into names)
#
zone "" IN {
        type master;
        file "";
        check-names fail;
        allow-update { none; };
};

#
# Defining reverse lookup for an address zone
#
zone "" IN {
        type master;
        file "";
        check-names fail;
        allow-update { none; };
        allow-query { any; };
        allow-transfer { secondaries; };
        notify yes;
};

#
# A master zone
#
zone "" IN {
        type master;
        file "";
        # Restrict zone transfers, to make work harder for spies
        allow-transfer { secondaries; };
        allow-update { none; };
        allow-query { any; };
        notify yes;
};

#
# A slave zone
#
zone "" IN {
        type slave;
        file "slave/";
        masters {; };
};


As an example, let's set up the zone file for the domain. According to our details in /etc/named.conf this must be stored under /var/named/ (Listing 3). The "SOA" record represents the start of the database file; the name in its first column defines the described domain. Take note at this point of the dot on the end, which stands for the root name domain. You must always write this dot after all fully qualified names, otherwise named assumes the name has yet to be completed and appends the current domain. The name that follows (again with a dot at the end) stands for the current computer, on which named is running. root.poseidon gives the email address of the DNS administrator, with the first dot standing for the otherwise usual "@". Since this time the name does not end with a dot, BIND completes the entry by appending the current domain, giving the administrator's mailing address. So that the other name servers storing your data (either as secondaries or in their cache) can check that they are up to date, you must specify a serial number for the data record, which you increment with each amendment. The concrete format is up to you; often the current date is used (in this case 7 Jan 2000 is represented as 20000107). The refresh value states in seconds how often the secondary name servers should ask for updates (in this instance, ten hours). If the primary server should fail to answer this request within retry (in this case: 1800) seconds a new attempt will start. If within the period defined by expire no response is received from the primary, the secondary server stops answering requests for this domain, on the basis that no answer is still better than a wrong one. TTL (time to live) is sent with all answers and shows how long the data record will remain valid and can remain in the cache. Choose this value with care, as with large values changes (and corrections of typing errors) take a very long time to spread through the network. The following data records are each named according to the third column of the zone file (Listing 3).
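One common convention extends the date-based serial shown above with a two-digit revision counter (YYYYMMDDnn). A small helper like the following sketch (purely illustrative, not part of BIND) keeps the serial monotonically increasing when editing by hand:

```python
def next_serial(current: int, today: str) -> int:
    """Return the next zone serial. `today` is 'YYYYMMDD'; two extra
    digits leave room for up to 100 edits per day."""
    base = int(today) * 100           # e.g. 20000107 -> 2000010700
    return base if current < base else current + 1

# First edit of the day jumps to today's base; later edits count up.
first = next_serial(1999123199, "20000107")
second = next_serial(first, "20000107")
```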
The two lines following the SOA record list the name servers (NS) for the domain. The first is the computer on which the master zone is located; then follow all the secondaries (just one in this case). The file then continues with the MX records. These give the addresses of the Mail eXchangers, in other words, the mail servers. The number before the address is the preference value, a sort of inverse priority of this server. An SMTP server which wants to deliver an email first tries to connect to the server with the lowest preference value. Only if this fails will it work down the list to the next-nearest values. A records define the mapping of host names onto IP addresses. Thus poseidon, for example, is completed with the current domain. If the request matches this


Listing 3: The zone file /var/named/

    IN SOA  root.poseidon (
            20000107  ; serial
            36000     ; refresh
            1800      ; retry
            3600000   ; expire
            86400 )   ; time to live
    IN NS
    IN NS

    MX 1
    MX 2

localhost
poseidon
phoenix
venus

ftp
www
ns
news
irc


name, the associated address is returned. CNAME data records make aliases available. "news" – which, since it has no dot at the end, is completed with the current domain – is translated into the canonical name, and the A record associated with that host name is then looked up and evaluated. The zone for the localhost (Listing 4), which has to be included in every configuration, follows the same syntax as the domain's own zone file, except that its scope is considerably more manageable. A few small abbreviations are used, however: with $ORIGIN, localhost. is defined as a macro for the current domain, to which the @ symbols then refer.
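The completion and alias rules just described can be sketched with hypothetical names (example.com and the 192.0.2.x address are placeholders):

```
$ORIGIN example.com.
ns    IN A      192.0.2.53   ; A record: maps ns.example.com. to an address
news  IN CNAME  ns           ; alias: news.example.com. -> ns.example.com.
```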

Reverse Look-ups

Some programs, such as telnet, try to find out the host names associated with IP addresses. These reverse look-ups are resolved by BIND using zone files of their own (Listing 5). In our file /etc/named.conf we have defined a corresponding reverse zone containing our addresses. For historical reasons, the IP addresses in reverse-lookup zone names are written backwards (so those are not printing errors) and must end in in-addr.arpa (in the zone file, of course, with a trailing dot). The SOA record has the usual format (where one can also see the possible, self-explanatory and very practical abbreviations for units of time); only here it is the reverse lookup that is being defined, which is why the zone is named after the reversed address range. With NS, again, the primary and secondary name servers

10 · 2000 LINUX MAGAZINE 69



Listing 4: The zone file /var/named/

# /var/named/ contains the allocation
# of the loopback names and addresses
$ORIGIN localhost.
@   IN SOA  (
            42   ; serial
            3H   ; refresh
            15M  ; retry
            1W   ; expiry
            1D ) ; minimum
    1D IN NS
    1D IN A


Listing 5: The zone file /var/named/

# /var/named/ contains the allocation
# of host names to IP-addresses
    IN SOA  root.poseidon (
            20000107 ; serial
            3H       ; refresh
            15M      ; retry
            1W       ; expiry
            1D )     ; TTL


Listing 6: The zone file /var/named/

# /var/named/ contains the allocation
# of the local host to the address
    IN SOA  root.poseidon (
            43   ; serial
            3H   ; refresh
            15M  ; retry
            1W   ; expiry
            1D ) ; minimum
    IN NS
    IN PTR  localhost.

are defined, then come the PTR data records. These are the counterpart to the A records of forward resolution and map the IP addresses back onto host names. Great care must be taken here to ensure consistency between the A records and the associated PTR records. Together with the reverse-mapping file for the zone, this makes the configuration complete. Since the 1 in the last line is not fully qualified and does not end with a dot, it is automatically completed with the zone name.
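The shape of such a reverse zone can be sketched for a hypothetical 192.0.2.0/24 network (all names and addresses here are invented):

```
$ORIGIN 2.0.192.in-addr.arpa.
@   IN SOA  ns.example.com. root.ns.example.com. (
            20000107 ; serial
            3H       ; refresh
            15M      ; retry
            1W       ; expire
            1D )     ; minimum TTL
    IN NS   ns.example.com.
1   IN PTR  poseidon.example.com.  ; the 1 completes to 1.2.0.192.in-addr.arpa.
```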

Troubleshooting

As already mentioned, the fact that the data is distributed across several configuration files, and in the next step across several computers, does not make it easy to find and correct errors. The most frequent error – apart from not restarting the named daemon – is forgetting to increment the serial numbers of the zones after making modifications, so that the connected computers do not notice that

something has changed. You must also bear in mind that due to caching it may take some time before your amendments spread through the network (so at this point think of a reasonable TTL value). In the event of problems with other zones, the best thing to do is use whois or finger to find the contact information of the administrator responsible and speak to them. You will find many errors as soon as you look at the system log (/var/log/messages) after re-reading the configuration. Syntax errors are also common; named quits in such a situation (this should not be the case if you have specified check-names master warn in /etc/named.conf). Check whether your fully qualified names in the zone files end in a dot: a fully qualified name written without one is completed with the current domain again, which is probably not what you want. If applications such as telnet, which perform reverse look-ups, run very slowly, reverse look-up is probably not correctly configured. Test this with the tool nslookup, found in both Unix and Linux, which acts as an all-purpose tool in the toolbox of the BIND administrator. In the following example the allocation of names to addresses does function, but the reverse does not (the queried server is the name server under test):

~ # nslookup <name>
Server:   <name server>
Address:  <name server address>

Name:     <name>
Address:  <address>

~ # nslookup <address>
Server:   <name server>
Address:  <name server address>

** Can't find <address>: Non-existent host/domain

nslookup will be able to help you in most cases. There is also dnswalk, which searches configurations for common errors such as inconsistent A and PTR data records. Don't forget to notify the competent authority (Nominet, InterNIC etc.) of changes to the IP address of your name server. Lame or missing delegations are also very common: in the first case a name server higher up in the hierarchy, when queried, delivers the address of the server which is supposedly responsible, but which is in fact completely ignorant of this good fortune.
In the latter, reverse case, the higher-level server simply does not return the address of the responsible server at all. In order to avoid this it is necessary to have good co-operation with your ISP. And don't forget to check from time to time that your root file is up to date (in the example in this article this is root.hint). You can, of course, automate this with cron (but then make sure that, whatever happens, you don't mail the log output from named). ■
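As a quick sanity check of forward and reverse resolution, you can also query the system resolver the way applications do, using getent; the loopback address is used here as a safe example:

```shell
# getent consults /etc/nsswitch.conf, so this exercises /etc/hosts and
# DNS exactly as telnet and friends would.
getent hosts localhost     # forward lookup: name to address
getent hosts     # reverse-style lookup of the loopback address
```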



Getting to grips with Linux Permissions

DO IT WITH PERMISSION With any operating system it is important to ensure that users remain in control of their files and directories and are prevented from tampering with those belonging to other users, or the system. This is what the Linux permissions system is all about, as Jono Bacon explains.

Fig 1: An example directory list

Permissions are at the heart of how Linux works. Some operating systems (such as MS-DOS and some variants of Microsoft Windows) treat all files in the same way. This means that any user can change any file. Usually, there is only one kind of user. Linux does things in a completely different way. Under Linux many different users may be using the system at once (that’s why you need to log in when your system has booted.) Each user has their own slice of the system that they can use to store their files (the path /home/username). When you have more than one user using the system you need to ensure that other users cannot modify your files if you don’t want them to, and that users cannot look into files that you wish to be private. Examples of this are the system configuration files in directory /etc which cannot be viewed by users other than root. (This is because those files may contain details that a malicious user could use to attack the system.) You may be wondering how this is useful if you are the only person using your system. You may even be thinking that this is just a time-consuming irritation. Many new users think this at first. But you will come to realise that permissions offer many advantages. An example of this is on my own system. I run the latest developer version of KDE, but sometimes I need to use KDE1.x and GNOME. So

that I don't have to constantly move files around to revert to these systems, I simply have a different login for each, and a general directory where all users can share files. This saves a lot of time over changing the settings the manual way. Linux permissions are basically split into two areas. These are:
• File ownership
• File access permissions
Every file has an owner. This is usually the user who created the file, although this can be changed. Users can also be classed into groups, so similar users can be grouped together. The other element is the access permissions for the file. These are split into three areas:
• Who can read (view) the file
• Who can write to the file
• Who can run the file (this only applies to files that can be run)
Each file has three sets of permissions: permissions for the owner of the file, permissions for the group the file is assigned to, and permissions for all other users on the system. Let's look at an example. Open a console and do a directory listing by typing:

ls -al

at the command line. The result on my system can be seen in Fig 1. This is simply a directory listing, but let's look at one line as an example:

-rw-r--r--  1 jono  jono  1701 Jul 12 15:23 nickmail.txt

A lot of information is given. Reading the information from left to right, this is what it means:

-             File type indicator (- means normal file)
rw-r--r--     File permissions
jono          Owner
jono          Group
1701          File size
Jul 12 15:23  File creation time and date
nickmail.txt  Filename
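The same fields can be read without parsing ls output, using stat (a sketch assuming GNU coreutils; the file name and contents here are made up):

```shell
umask 022                        # so the file is created rw-r--r--
cd "$(mktemp -d)"                # work in a throw-away directory
printf 'hello\n' > nickmail.txt
# %A = permissions, %U/%G = owner/group, %s = size, %n = name
stat -c '%A %U %G %s %n' nickmail.txt
# prints something like: -rw-r--r-- jono jono 6 nickmail.txt
```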



[left] Fig 2: The KDE file permissions tab [right] Fig 3: The GNOME file permissions tab

The parts we are primarily interested in are the first four elements: file type indicator, file permissions, owner and group. The file type indicator tells us what type of file it is. Everything in Linux is treated as a file, even things such as devices (just do an ls -al in /dev to see what I mean). With everything being treated as a file, it is handy to have something to say whether it is a normal file, a directory, a device or whatever. Below are some examples of different uses:

drwxr-xr-x  4 jono  jono  1024 Jul 27 00:36 .kde

This line shows that .kde is a directory, as the file type indicator is set to 'd'.

brw-------  1 jono  0 May 5 1998 fd0

This line shows that fd0 (the floppy drive) is a block device, by the 'b' in the file type indicator. Now look at the permissions part next to the file type indicator for nickmail.txt. This reads as: rw-r--r--. The permissions part is split into three sections, each having three characters. Each section reflects a type of user on the system, and whether they can read (r), write (w) or execute (x) the file. On this file each section breaks down as:

rw-  The owner's section. The owner can (r)ead and (w)rite to the file. The owner (see below) cannot execute the file, but as the owner controls the file's permissions this can be changed.
r--  The group section. The group (see below) can just (r)ead the file but cannot write to it.
r--  The 'all other users on the system' section. All other users who are not in the same group and do not own the file can read it, but cannot write to or execute it.

To find out who the owner of the file is and which group the file belongs to, we need to look at the next two pieces of information in the file's listing. For nickmail.txt we can

see that the owner is ‘jono’ and the group is ‘jono’. Now we know what the different bits of a file listing mean and how we stop certain users using or accessing files. How do we change the permissions?

Changing a file's permissions in KDE

To change permissions using KDE is very simple.
1. Start the KDE file manager (kfm for KDE1.x, Konqueror for KDE2.x).
2. Right-click the file whose permissions you would like to change.
3. Select Properties from the menu. Click on the Permissions tab (Fig 2).
4. To change the file's permissions, click the relevant check boxes to select the Owner, Group and Other permissions that you need.
Bear in mind that to change permissions on a file you need to have permission: you must own the file (or be root). You will notice that there are three additional boxes you can select: Set UID, Set GID and Sticky. These are explained in the box "Special Permissions".

Changing a file’s permissions in GNOME Changing permissions in GNOME is virtually the same as in KDE. 1. Start the GNOME file manager (gmc). 2. Right-click the file whose permissions you wish to change. Select Properties from the menu. Click the Permissions tab. 3. A box like Fig. 3 will appear. As you can see it looks very similar to the one in KDE, and functions in exactly the same way.

Changing a file’s permission in the command line In Linux, virtually every operation that you can do in a GUI such as KDE or GNOME can be done using the command line. Changing permissions is no dif10 · 2000 LINUX MAGAZINE 73



ferent. To change the permissions on a file we use the chmod command, which has the format: chmod <permission(s)> <filename(s)>

Special Permissions Linux has some special permission settings. You won’t often need to use them, but this is what they are:

Set UID This setting causes the process executed from the file to run as if the file's owner were running it. This can be useful in cases where you need root access to do something (such as using a device). Be careful when using this setting, as it could compromise the security of your system if used incorrectly.

Set GID This is similar in many ways to Set UID except that the process will execute with the same group ID as the owner.

Sticky This unusual bit will save the image of the program into the system’s swap memory for increased performance. Check the chmod man page (run man chmod) for more details on using these special bits if you ever need to. ■
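You can see how the special bits show up in a listing by setting them on a file you own (no root needed for that, though the kernel only honours set-UID on executables, and not on filesystems mounted nosuid):

```shell
umask 022
cd "$(mktemp -d)"    # throw-away directory for the demonstration
touch demo
chmod u+s demo
stat -c '%A' demo    # -rwSr--r--  (capital S: set-UID without execute)
chmod u+x demo
stat -c '%A' demo    # -rwsr--r--  (lowercase s: set-UID plus execute)
```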

The chmod command is a very versatile command and can change the permissions in a number of ways. Probably the easiest form to remember sets the permissions using the same letters as are used to display them (r, w and x). To do this you must first specify the section that you want to change: owner (u), group (g) or other (o). You must then specify + or - to indicate whether you are giving (+) permissions or removing (-) them. Suppose that you would like to change the file nickmail.txt so that all other users on the system can read and write to it. You would use:

chmod o+rw nickmail.txt

This command basically says "give every other user on the system (o) additional permissions (+) to read (r) and write (w) the file nickmail.txt". Now let's assume we had a change of heart and wanted to let any other user read the file, but not write to it. To take away write access we would use:

chmod o-w nickmail.txt

Here are some more examples of changing permissions:

chmod u+x file.txt   Lets the owner execute the file.
chmod g+wx file.txt  Lets the group write and execute the file.
chmod o-r mydir      Stops any other user seeing what is in the directory mydir (e.g. when doing an 'ls -al').

As we said, permissions are based around who owns the file and what group the file belongs to, so we naturally need commands to change the owner of a file and its group. Those commands are chown and chgrp. The chown command is very simple. Let's assume we want to make Bob the new owner of the file nickmail.txt:

chown bob nickmail.txt

It's as simple as that. When you list the file now with 'ls -al' you'll see that the owner section of the information has changed to bob. (Note that changing a file's owner can only be done by root.) Luckily, the chgrp command is just as easy to use as the chown command. Suppose we want to change the group of nickmail.txt to bob as well. We would type:

chgrp bob nickmail.txt

Again, if you look at the file by doing an 'ls -al' you will see that the group section of the information has changed to bob.

Are permissions useful?

After all this, you may be wondering what all of this has got to do with anything. Well… quite a bit, actually. The first thing permissions are useful for is solving problems. Many problems that you can encounter on Linux are simply down to the fact that a particular user does not have permission to do something. A common example of this is when mounting disks. Traditionally, only root can mount a disk (such as a CD-ROM or floppy), but there are many cases when a normal user needs to do so as well. Another use of permissions is to make shell scripts executable. This is done by setting the 'x' permission bit. To demonstrate this, create a plain text file (call it diskfree for the sake of example) using your favourite text editor, containing the following text:

echo
echo "Hello... I am now going to list your hard disk space:"
echo
df
echo
echo "There we go... all done. :-)"
echo

Once you have created it, set the execute bit either by changing it as described above, or by typing:

chmod a+x diskfree

You can now run the file by typing:

./diskfree

As you can see, some text is printed and your disk space is shown. When you created the file it was simply text, although the text contained commands understood by your command shell. By setting the executable bit the file can be run so that the commands are executed. This is called a shell script. Although the example was pretty trivial, shell scripts can be used to do some amazing things.
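The symbolic chmod forms described above can be exercised end to end in a throw-away directory (the file name is made up):

```shell
umask 022
cd "$(mktemp -d)"
printf 'test mail\n' > nickmail.txt
stat -c '%A' nickmail.txt   # -rw-r--r--
chmod o+rw nickmail.txt     # give all other users read and write
stat -c '%A' nickmail.txt   # -rw-r--rw-
chmod o-w nickmail.txt      # change of heart: take write away again
stat -c '%A' nickmail.txt   # -rw-r--r--
```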

Conclusion

Permissions are not just an important part of Linux, but an essential part. A good understanding of how permissions work and how to deal with permission problems is important if your system is to work well. Like many things in Linux, we can always delve deeper into the subject. For further information on working with permissions look at the following resources:
• the chmod, chown and chgrp man pages
• the Linux Documentation Project manuals and documents
• IRC chatrooms (#linuxuk, #linux)
If I can offer one final piece of advice it is: "Assume nothing". Don't assume that your system is going to work the way you think it will, and don't assume your security is tight enough for a networked environment. Permissions are there to protect you both as a user and as a system administrator, and if you are the only user of one Linux computer you get to wear both hats! ■



Where to get free support and help with your Linux problems


Graphical interfaces like GNOME and KDE may make Linux look similar to other operating systems, but it's an illusion: Linux has its own way of doing things. If you're an experienced user of another operating system this can become a source of frustration. Some new users eventually give up because they can't find out how to do something they could do easily using the operating system they are used to. So for a successful first experience with Linux, one of the most important things you should do is find out where to go for help when you need it. It may seem obvious, but if you bought a packaged distribution the first place to look for help is the manual that came with it. A good manual is one way in which distribution vendors can add value and make their product better than their competitors', so the manuals are getting better and better and you'll find a lot of useful information within their pages. Every distribution also includes online documentation, usually in the form of HTML pages. The content varies, but usually the online help includes distribution-specific FAQs. If you have a common problem this could be the quickest route to finding a solution. If you can't find the answer there, what you'll probably be tempted to do is call your distribution vendor. Unfortunately, the support that's provided with packaged distributions is generally limited to installation. Once Linux is up and running the vendor won't be obliged to help you. But that's no reason to despair: you have the means to access a wealth of help resources. And although your distribution vendor may not answer your question by phone, email or fax, you can still visit their website. There you'll find a lot of extra support material, including the latest FAQs, patches and software updates to download. The fact that you may not qualify for free technical support isn't a disaster, because Linux is probably the best documented operating system on the planet.
It is certainly the operating system with the most accessible documentation. The open-source nature of Linux means that all the documentation is freely available too. If there is any cause for complaint, it is that there is so much documentation it can be hard to find what

you're looking for. If you have no idea what might be causing your problem you can waste a lot of time reading the wrong documentation. Every Linux user should know what documentation is available, and a good place to start finding out is the Linux Home Page at Linux Online. Amongst other things, it is a portal to just about everything there is to know about Linux. Click the Support link and you'll see all the different kinds of information resource that are available. If you haven't tried Linux yet you'll even find information about what it is, whether it will run on your computer and how to choose a distribution. Top of the list of resources you'll find here are links to the Linux Documentation Project (LDP). This is a project with the aim of creating the "official" documentation of Linux. The documentation includes frequently asked questions (FAQs), HOWTOs covering specific topics, and man pages, the help for individual

Linux is a sophisticated and powerful operating system and there’s a lot to learn before you can make it do everything you want. If you’re a beginner that learning curve can look impossibly steep. If you hit a problem it can seem insuperable. Fortunately when you use Linux you’re not merely a customer of a software supplier: you’re part of a community. There’s a lot of information, resources and groups of people out there ready to help you out. And like Linux itself, it’s all free!

Linux Online Support Centre, with links to all known Linux information resources

10 · 2000 LINUX MAGAZINE 75



[left] The Linux Documentation Project – the "official" documentation of Linux [right] The Usenet search engine will locate the answer to most common problems

INFO
The Linux Home Page at Linux Online –
The Linux Documentation Project –
Usenet search –
Linux Support –
LinPeople: Linux Internet Support Cooperative – http://www.linpeople.org/
LinuxHelp on the Undernet – http://linuxhelp.
Linux Winmodem Support –
Winmodems Linux information page – ~gromitkc/winmodem.html
Virtual Dr –
Linux Support Services –
Linux Free Support –
■

commands, which you can read by typing "man command" in a console window. If you installed Linux from a CD-ROM there's a fair chance that much of this documentation is already present on your system. The LDP also includes guides such as the Linux Installation and Getting Started Guide by Matt Welsh, the Linux System Administrator's Guide by Lars Wirzenius and the Linux Network Administrator's Guide by Olaf Kirch. The Guides are complete books that provide in-depth coverage of a topic. Like all the LDP material they are available for download in a variety of formats: text files, HTML pages or PDF documents. It's also possible to buy printed copies of most of this documentation in books like Linux, the Complete Reference published by Walnut Creek CD-ROM. One advantage of reading the documentation online, of course, is that you can be sure you are seeing the latest, most up-to-date version. This type of documentation is all very well, but it may not have the exact answer to your specific problem. When you are just trying to get something to work it's tempting just to ask somebody else for the solution. Newsgroups are ideal for this, but for the sake of everyone involved they should not be your first resort. Many of the people who regularly frequent newsgroups get tired of answering the same old questions, and if you ask something that they know is written down somewhere you may be told to go and read the ****** manual (RTFM). You'll have a much better chance of receiving a helpful or sympathetic response if it's obvious from your question that you've done your homework first. If you've not visited a newsgroup before, it's a good idea to check previous messages to see if your problem has been brought up and answered before. A good way to do this is to use a Usenet search tool. Just type in some keywords that you'd expect to appear in discussions relating to your problem, and the search engine will pull out the most recent messages that include them.
There's a wealth of valuable information contained in Usenet, and browsing the message archives will more often than not turn up a solution to your problem. If it doesn't, the search site even allows you to post a message to a newsgroup online, which is great if you haven't set up a newsreader. Even if a search doesn't produce the answer to your question, it will certainly help you to identify the most appropriate newsgroup to post your question in. There are a great many newsgroups devoted to discussion about Linux, including a complete hierarchy of groups with names starting with "linux". Some of the groups that will be useful to new users with a problem are listed in the panel. For the full list see the Linux Home Page or search the list of groups displayed by your newsreader.

IRC channels A more immediate way to communicate with other Linux users is through Internet Relay Chat (IRC). Many IRC channels are used by members of the Linux developer community to discuss their work but there are a few that are intended for newcomers and which try to create a friendly environment where you can get help if you need it. These channels now have their own web sites where you can find out more about them, and which are being developed into repositories of useful information for people who are getting going under Linux. They are worth a look. For a start try the Linux Support Project ( This project started as an IRC support channel for beginners, advanced users and system administrators, but it is now developing into a major online resource. A search engine provides access to over 12,000 documents and new sections of the site are opening all the time. Also worth a visit is the Linux Internet Support Cooperative ( This group provides 24-hour support via IRC on channel #LinPeople, for new and experienced users alike. Another IRC channel worth trying is #linuxhelp on the Undernet, whose website is at


Yet another way to contact other Linux users, and even meet up with them in person, is to join a user group. There's a user group section elsewhere in this magazine which contains information about how to contact local groups in the UK. Many Linux user groups hold regular meetings, but all have a web page, and those that don't have meetings enable members to communicate by means of mailing lists. Besides those run by local user groups there are many other mailing lists devoted to various aspects of Linux, although few are intended to be used by beginners looking for technical help. If you like the idea of using a mailing list to seek advice and information, check your distribution vendor's website, as there are a number of mailing lists that cater for users of a specific distribution. There's an unofficial support mailing list for Red Hat Linux users run by egroups. Mailing lists run by egroups are convenient if you're concerned about receiving dozens of mail messages a day, because you can have the discussion sent to your mailbox in digest form or even browse it online at egroups' website. If you don't want to plough through reams of documentation and aren't keen on participating in discussions via newsgroups, mailing lists or IRC channels, you might like to try some of the free support sites that are available on the web. If you're using the popular Linux-Mandrake distribution, try visiting the site which promises to answer questions about Linux-Mandrake and about GNU/Linux in general. Some support sites cater for very specific problem areas. If you're trying to get a Windows software modem such as a Winmodem to work under Linux there are two websites you should definitely visit: the Linux Winmodem Support site, and the snappily-named "Winmodems are not modems" Linux information page. These sites are testimony to the fact that where there is a need, Open Source developers find a way.
There are free support websites that are less specialised in nature and which are worth trying with more general problems. One that has been


Some helpful Linux newsgroups
comp.os.linux.alpha – discussion about Linux on Digital Alpha machines.
comp.os.linux.answers – disseminates the latest Linux FAQs, HOWTOs and READMEs.
comp.os.linux.apps – discussion about Linux applications.
comp.os.linux.hardware – discussion of Linux hardware compatibility issues.
comp.os.linux.networking – discussion about networking and communications issues.
comp.os.linux.powerpc – discussion relating to Linux on PowerPC.
comp.os.linux.setup – discussion about Linux installation and setup issues.
comp.os.linux.x – discussion relating to the X Window System.
alt.os.linux.caldera – discussion about Caldera's OpenLinux distribution.
alt.os.linux.slackware – discussion about the Slackware distribution.
linux.debian.user – discussion about the Debian distribution.
linux.redhat.* – a set of newsgroups related to Red Hat's distribution.
linux.samba – discussion related to using Samba. – discussion for users of WINE.
uk.comp.os.linux – discussion group for Linux users in the UK. ■

around for quite a long time and has a good record for coming up with appropriate solutions is the Virtual Dr. Some free support websites are advertising-funded or exist to draw attention to a paid-for, business-oriented support service. An example is Linux Support Services, a site operated by a company that provides commercial support for US companies using Linux. The free support is provided by volunteers and comes with no guarantees. For UK residents one to try might be Linux Free Support. A service provided by Linuxsure, which provides professional evaluation, migration, integration and support services to paying customers, it supports all distributions and applications and promises a guaranteed response from certified experts. The service seems to live up to these claims, as our test query was answered in a few minutes. Go on, give it a try.
With all these information and support resources at your disposal it would be surprising indeed if you couldn’t find an answer to your Linux problem. It might take a little while, and involve a bit of work, but that’s all it will cost you. If you need a reply within a guaranteed time and are prepared to pay for it then companies like Linuxsure or your distribution vendor will have a support plan to suit your needs. But that’s not the subject of this article. ■

[left] linuxhelp on the undernet – Linux help using IRC [middle] The Virtual Dr, offering support for all popular operating systems [right] Free Linux support for all UK users

10 · 2000 LINUX MAGAZINE 77



An explanation of MIME content types


Linux graphical environments use the Internet standard MIME content types to determine what the content of a file is. Julian Moss explains how to use them.

MIME stands for Multipurpose Internet Mail Extensions. It's an Internet standard that was originally created to enable information other than plain text to be sent across the Internet using electronic mail. MIME defines standards for a number of things: the way the various parts such as the message text and attachments are combined into a single file (the mail message), the way the content type of each part of a message is specified, and the way items are encoded for electronic transmission so that they can be handled by software designed to process messages containing only ASCII text. The role played by MIME in the sending and receiving of electronic mail messages and attachments is normally invisible to the user, who needn't be concerned about it. However, MIME standards have been adopted for use by more than just electronic mail. Web servers use them to tell web browsers the type of material they are about to receive. And MIME content types are used by graphical environments that run on Linux, such as KDE and GNOME, to identify different types of file and associate them with the applications that should be used to open them. For this reason, it is a good idea for Linux users to know a little bit about them.

MIME content types consist of two parts, a main type and a sub-type, separated by a forward slash. Main types that are commonly encountered are "application", "text", "image", "audio" and "video". The "text" main type contains various types of plain text file. Examples are: "text/plain" (unformatted text); "text/html" (text containing HTML coding); "text/rtf" (text in rich text format). The "application" type contains data files. Examples are: "application/msword" (Microsoft Word document) and "application/x-zip" (compressed archive in Zip format). The "image" type is used for still images. It includes a number of sub-types like "image/gif", "image/jpeg", "image/png" and so on, whose content is probably obvious, as will be the type of material covered by the "audio" and "video" main types. Table 1 shows a list of commonly encountered MIME content types. MIME content types are assigned and listed by the Internet Assigned Numbers Authority (IANA). A
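Incidentally, on most Linux systems you can ask the file utility to guess a file's MIME content type from its contents. A quick sketch (the sample files are invented for illustration, and file(1) with the --mime-type option is assumed to be installed):

```shell
# Create two throwaway files and ask file(1) for their MIME content types.
echo '<html><body>hello</body></html>' > sample.html
file --mime-type sample.html    # reports a type such as text/html
echo 'just plain words' > sample.txt
file --mime-type sample.txt     # reports a type such as text/plain
```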

How to add a new Mime type in KDE

INFO IANA Home Page: RFC Editor Homepage: RFC tree (short): /~rourke/links/rfc/ MIME Content Types:

1. From a KFM window select Edit, Mime types (to make changes that affect only the current user) or Edit, Global Mime types (while logged in as root, to make changes that affect the entire system). You will see a set of folders that represent each of the main Mime types. Click on one of these folders to open it and you will see a set of icons representing all the Mime sub-types defined within it. To change the program used by default to open a file of a particular type, edit the icon's properties.


2. An easy way to create a new Mime type in KDE is to drag a similar one to the desktop to copy it, right-click it and click Properties. On the General tab, change the name. On the Binding tab change the Pattern entry so that it contains the file extensions used by files of this type. In the Mime Type field type the full content type descriptor. Add a comment that describes this type of file. If you wish, you can also select an application that will be used by default to open it. Then move the new Mime type icon back to the KFM window.


How to add a new Mime type in GNOME

1. Open the GNOME Control Centre and select Mime Types from the tree on the left. Check in the list on the right that the type you want doesn't already exist, then click Add. In the Add Mime Type dialog box type the full content type descriptor in the Mime Type field. Type the file extensions used by files of this type in the Extension field, separated using commas if more than one extension is used. Click OK.

2. Now the new Mime type is added to the list, click Edit. You can now set the Open, View and Edit actions to be performed on files of this type by selecting programs to carry out these actions. By clicking on the icon button you can also change the icon used for this type of file.

full list of the officially recognised types can be found on the IANA website. The list includes standard content sub-types and vendor-specific sub-types (the ones with names beginning "vnd."). In real life you will also come across sub-types that start with "x-". These are sub-types that have no official status. If you need to invent a content type in order to associate some type of file with a particular program (perhaps one you have written) you should give it a sub-type that starts with "x-". If you use KDE you will need to have a MIME content type for any file that you want to have associated with an application for opening it. If you use the GNOME file manager you don't have to use MIME content types – it can use just the file extension to work out which program to use – but they are supported as an option. ■

Table 1: Common MIME content types
application/acad – AutoCAD drawing files (*.dwg)
application/dxf – Drawing Exchange Format drawing files (*.dxf)
application/msword – Microsoft Word file (*.doc)
application/octet-stream – Unknown binary data
application/pdf – Adobe Acrobat file (*.pdf)
application/postscript – PostScript (*.ai, *.ps, *.eps)
application/rtf – Microsoft rich text format (*.rtf)
application/ – Microsoft Excel file (*.xls)
application/ – Microsoft PowerPoint file (*.ppt)
application/x-debian-package – Debian Package (*.deb)
application/x-javascript – JavaScript source file (*.js)
application/x-gzip – GNU zip archive (*.gz)
application/x-msaccess – Microsoft Access file (*.mdb)
application/x-msexcel – Microsoft Excel file (*.xls)
application/x-mspowerpoint – Microsoft PowerPoint file (*.ppt)
application/x-rpm – Red Hat Package (*.rpm)
application/x-zip – ZIP archive (*.zip)
application/zip – ZIP archive (*.zip)
audio/basic – Basic audio (*.au, *.snd)
audio/x-aiff – AIFF audio (*.aif, *.aiff)
audio/x-midi – MIDI file (*.mid)
audio/x-mod – MOD audio (*.mod)
audio/x-mp3 – MPEG audio (*.mp3)
audio/x-wav – WAV audio (*.wav)
image/bmp – Microsoft Windows bitmap image (*.bmp)
image/cgm – Computer Graphics Metafile (*.cgm)
image/gif – GIF image (*.gif)
image/jpeg – JPEG image (*.jpg, *.jpe, *.jpeg)
image/png – Portable Network Graphics image (*.png)
image/tiff – TIFF image (*.tif, *.tiff)
image/x-portable-pixmap – Portable Pixmap image (*.ppm)
image/x-xbitmap – X Bitmap image (*.xbm)
image/x-xpixmap – X Pixmap image (*.xpm)
text/css – Cascading style sheet (*.css)
text/html – HTML file (*.htm, *.html)
text/plain – Plain text (*.txt)
text/richtext – Internet standard rich text
text/rtf – Microsoft rich text format (*.rtf)
text/sgml – SGML file (RFC 1874)
text/xml – XML file (*.xml)
video/mpeg – MPEG video (*.mpg, *.mpe, *.mpeg)
video/quicktime – Apple QuickTime video (*.mov, *.qt)
video/x-msvideo – Microsoft Windows video (*.avi)
video/x-sgi-movie – SGI movie player format

For a complete list of MIME types see:



A powerful file manager for KDE


Windows users trying out Linux for the first time will immediately notice the absence of familiar tools they have come to take for granted. One of the most obvious ones they will miss is the Windows Explorer. If they choose GNOME for their desktop they will of course have Midnight Commander. But KDE’s file manager KFM (at least in KDE 1.x) is by comparison a bit basic. A solution is available, though, in the form of kruiser.

Previously known as KExplorer, kruiser is a file manager that emulates Windows Explorer. It isn't generally available in binary form, however (unless you use the Linux Mandrake distribution, which contains a binary and installs it as standard), so you'll need to compile it from the source code. For convenience, the source code is provided on the cover CD.


LinuxMagazine/kruiser/kruiser-0.4.tar.gz

You can find the package containing the source code of kruiser on the CD in the directory Software/kruiser/. For the most up-to-date version, visit the home page of the kruiser project which is: The current version (0.4) can be downloaded by FTP from

After downloading the package or copying it from the CD, switch to a suitable installation directory (e.g. /usr/local/src/) and unpack it there. Administrator rights are needed to do this so you should first switch to root using the su command:

[blue@dual ~]$ su
Password: *****
[root@dual ~blue]# cd /usr/local/src
[root@dual src]# tar xzf /tmp/kruiser-0.4.tar.gz

The last command unpacks the source code archive – assuming that the kruiser package is located in the /tmp directory. If not, insert the correct path (e.g. /mnt/cdrom/Software/kruiser/). In the next three steps, the source files are configured and compiled, and the finished program files are installed in the appropriate location:


[root@dual src]# cd kruiser-0.4
[root@dual kruiser-0.4]# ./configure
...
[root@dual kruiser-0.4]# make
...
[root@dual kruiser-0.4]# make install

Starting the program

After restarting the KDE panel (right-click it, select Restart), you can now start kruiser from the K menu. You will find kruiser under the entry Utilities/KDE Explorer (Fig. 1 – right).

Double-click again at last!

If you're a Windows user then the Windows double-click will be second nature. And if you've already dabbled in KDE you'll have found that double-clicking opens two editors, starts Netscape twice or opens two instances of a picture you have found in the kfm file manager. A single click will no doubt seem unnatural, but with kruiser this is no longer a problem. Navigation through the directories is performed the same way as in Windows Explorer. To the left of the screen there is a familiar tree view (Figs. 2 and 3 – over page), where sub-directories can be opened or closed with a mouse-click on the plus or minus sign. You can also double-click a directory name in the right-hand half of the window to switch to the required sub-directory. You can move up one level in both Explorer and kruiser by pressing the Backspace key or by manually selecting the directory above in the tree view. Of course, double-clicking is not only used to switch directories: its main use is opening a document with its associated program. kruiser has a standard application registered for most types of document. If it doesn't recognise a file type, a dialog looking surprisingly similar to that of Windows Explorer opens, in which you can select a program to use to open the document (Figs. 4 and 5 – over page).


Try your luck with an unknown file (in our tests, kruiser did not recognise .pcx files, for example). If you cannot find a matching program in the list but know the command name, you can enter it in the empty field above. (You should be able to permanently associate a program with a particular file type using kruiser's Options dialog box; to open it use Edit, Preferences and select Extensions. However, this feature didn't work in the version we tried.) You can also use a slow double-click with kruiser. If you click twice on a file or directory name with a sufficient gap between the clicks, the name can be edited – just like Windows. You can go to any position in the name with the cursor keys and modify it. If you've been using Linux for a while and are used to the single click of KDE (it's easier, when you get used to it) you can tell kruiser to fall into line. To

Fig. 1: Starting kruiser via KDE’s K menu

Configuration: The installation of a program supplied in source code format always follows the same pattern. After unpacking the archive you will have a new directory full of files and, often, further sub-directories. You should first run the configure script from this directory by typing ./configure (the ./ before the command is necessary because the current directory is not in the path). The configure script analyses the system environment, searches for existing or missing helper programs and libraries, checks their versions and generates the Makefile which is necessary for the next step. The make command basically means "create here": the Makefile contains instructions describing how to create finished executable programs from the source files. Typing make install then copies the new program files to appropriate locations in the system, for example: help files to /usr/man, a configuration file to /etc and the program itself to /usr/bin. Out of interest, the path is stored in the environment variable $PATH and tells the shell which directories to look in for a program if it is called without specifying its full path. For instance, if $PATH=/bin:/usr/bin:/usr/local/bin and you enter the command myprog, then the shell will search (in this sequence) for /bin/myprog, /usr/bin/myprog and /usr/local/bin/myprog.

Tree view: A tree view is used to illustrate directory structure. Directory hierarchies are shown by means of indents and lines between different hierarchy levels. The great thing about a tree view is that you have the option of simply hiding parts of the hierarchy that aren't currently of interest. ■
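The $PATH lookup described in the box can be inspected from any shell. A quick sketch (the directory list varies from system to system):

```shell
# Show the directories the shell searches, in order, one per line.
echo "$PATH" | tr ':' '\n'
# Ask the shell which of those directories a program was found in.
command -v ls
```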



Symbolic link: A symbolic link is a directory entry which contains just a reference to another file. Think of the link as a small text file in which the name of the other file is stored. If such a link is accessed, Linux looks to see what name is hidden behind the link and passes all queries on to it. It's a little like a Windows shortcut.

rwxrwxrwx: This is the notation normally used in Unix for access rights: r, w and x stand for read, write and eXecute rights. The three groups of three stand for the rights of the file's owner, the group and other users, in that order. If there is a dash instead of one of these letters, the corresponding right has not been assigned. ■
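Both notions from the box can be seen with ls -l. A sketch using throwaway files (the names are invented for illustration):

```shell
# Create a file and a symbolic link to it, then inspect the rights notation.
echo "target" > original.txt
ln -s original.txt shortcut.txt
chmod 754 original.txt   # rwxr-xr-- : owner all rights, group read+execute, others read
ls -l original.txt shortcut.txt
# The link line starts with an 'l' and shows where the link points.
```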

Fig. 2: Tree views in Windows Explorer are just the same ... Fig. 3: ... as their counterparts in kruiser

do this, call up kruiser’s Options dialog with Edit, Preferences, switch to the page called Misc and check the box Open on single click (KFM style) (Fig. 6 – below).

Drag & Drop weaknesses

Kruiser supports drag and drop but with certain limitations. Files can be copied and moved within a kruiser window – you can drag a file from the directory currently displayed into one of the directories in the tree view on the left-hand side, for example. And if you have a number of kruiser windows open you can also drag files from one window to another. When you release the object, a menu appears in which you can select between Move, Copy and Link. Link creates a symbolic link. Other drag and drop actions, such as between kruiser and the desktop or the standard file manager kfm, are not possible at the moment. If you try to drag an object out of kruiser, a "no entry" icon is displayed. The same thing happens if you try to drag a file into kruiser.

Kruiser tools

Fig. 4: Original (Windows) version ... Fig. 5: ... and kruiser counterpart: Open with which program?

Of course, kruiser isn’t merely an Explorer clone. It has a few useful features above and beyond the program it was modelled on. One of these is the thumbnail picture preview which can be activated using the menu View, View, Image preview(Fig. 6). When a picture file is selected, a thumbnail image of it is shown at the bottom of the window. This feature is also useful for ordinary files, for which information such as the file size, owner and group is shown along with access rights in the normal rwxrwxrwx notation. ■

Fig. 6: Explorer cannot do this: Picture preview in kruiser

Fig. 6: Changing some of kruiser’s configuration options




Step-by-step instructions for specifying a program to open a file type

When the graphical interface KDE is first installed it doesn't know which programs should be used to open certain types of file. This means that you have to type the name of the program you want to use whenever you want to open certain files. If you have a program installed on your system that can open this type of file, you can make KDE use this program whenever you click on the file's icon. Here's how to do it.

1. If KDE doesn’t know what program to use to open a file when you click on its icon it will display this ”Open With:” dialog box. You’ll have to type the name of the program or click the Browser button and select a program from the list. It would be much better to have KDE open the file in this program automatically.

2. To make the change you’ll need root privileges. Log in as root, then launch KFM by clicking on the ”home” icon in the panel. Click on Edit, then Global Applications.

3. Locate the program you want to use to open this type of file. The folders under ”Global Applications” represent submenus on KDE’s K menu. The program icons usually have the name of the program rather than the name that appears on the menu. TIP: The descriptive name of the icon beneath the cursor appears on the status bar of the KFM window.

4. In this example we are going to associate a program with the ”Portable Network Graphics” (PNG) file type. The program we will use is the KDE Image Viewer, kview. So we right-click the kview icon and select Properties from the pop-up menu.

5. Select the Application tab of the dialog box. At the bottom, you’ll see two lists of items. These items are called MIME types. They are standard names for describing different types of file that are used on the Internet, and also by KDE. On the left are the MIME types of the files that KDE already knows the program kview can open. On the right are all the other MIME types known to the system.

6. Select the MIME type you want from the righthand list. For a PNG file it is ”image/png”. Click the arrow button to move the MIME type to the lefthand list. Then click OK to close the dialog box. Log out of root and log in as a normal user. When you click on a PNG file you should now find that kview is automatically launched to open it.




Compression tools


Although graphical interfaces such as KDE or GNOME are a big help, those who want to fully exploit Linux can't avoid the command line. Besides, there are many situations where it is beneficial to have a firm grip of the jungle-like syntax of the $ prompt.

In the world of Linux, the extensions .gz, .tar.gz and .bz2 are perhaps the commonest of all – and not just when downloading files or accessing HOWTOs. Below we explain these cryptic file extensions and tell you how to pack and unpack files and directories.

Gripping

Put simply, the gzip file(s) command shrinks files. The resulting compressed file is called file.gz and retains the same access and ownership rights, along with access and modification time attributes. If the file name is too long for the file system, gzip truncates it – lengthy parts of the file name are shortened. If you want to restore a compressed file you can use gunzip or gzip -d (short for gzip --decompress), which is actually the same program. If, on the other hand, you just want to view the packed file, you can type zcat file.gz (if necessary adding | less to the end, or you could use zless file.gz directly), which is the same as gzip -c -d. The option -c, incidentally, has the effect of decompressing the file to stdout.
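In practice the pack/view/unpack cycle looks like this. A sketch with a throwaway file (the name is invented for illustration):

```shell
# Pack a file: gzip replaces sample.txt with sample.txt.gz,
# keeping permissions and timestamps.
echo "hello gzip" > sample.txt
gzip sample.txt
ls sample.txt.gz
# View the packed contents on stdout (this is what zcat does).
gzip -c -d sample.txt.gz
# Restore the original; gunzip is the same program as gzip -d.
gunzip sample.txt.gz
cat sample.txt
```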

The degree of compression depends on the size of the input file and the quantity of repeated character strings. That is to say, files best suited to compression are those in which similar data patterns are repeated often. For example, a 1.4 MB bitmap file might be reduced to just 709 KB after gzip has been applied. If you use the parameter gzip -9, the resulting file size is just 708 KB. By appending a digit in the range 1-9, you can decide whether you prefer compression to be faster (gzip -1) but not as compact, or slower but with a better degree of compression (gzip -9).

gzip also has some further options that are of interest. If, for example, a file bearing the same name already exists in the current directory, gzip politely asks:

user@host ~ > gunzip file.bmp.gz
gunzip: file.bmp already exists; do you wish to overwrite (y or n)?

If you want to avoid the query, use the gzip -f option (for --force). This parameter steams through packing and unpacking even if files of the same name already exist. Of interest here is the behaviour of symbolic links (symlinks). Normally, gzip will decline a request to compress a symlink, saying gzip: link.bmp is not a directory or a regular file - ignored. If you use the -f option, the file to which the link points is compressed, but the result is given the name of the link, i.e. link.bmp.gz. The gzip command has many more features – a quick glance at the man page will provide a good overview. If you want your own special gzip equipped with a few options as standard, you can enter these in the environment variable GZIP. For the bash shell you can define your own preferred parameters as follows, for example:


user@host ~ > export GZIP="-9"
user@host ~ > echo $GZIP
-9
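The speed/size trade-off of the compression levels can be seen directly. A sketch (exact sizes vary between gzip versions; the file names are invented):

```shell
# Create a highly compressible file: 100,000 identical bytes.
head -c 100000 /dev/zero | tr '\0' 'a' > big.txt
gzip -1 -c big.txt > fast.gz   # fastest, usually least compact
gzip -9 -c big.txt > best.gz   # slowest, usually most compact
ls -l big.txt fast.gz best.gz  # compare the sizes yourself
```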

Shrinking things further?

Greater compression is possible courtesy of bzip2. This compresses better than gzip, although it is noticeably slower. On top of this, it has a recover mode, which means it attempts to repair possible damage to compressed files, or to unpack just the undamaged parts. Before we start to discuss this, check to see if you have bzip2 on your system. If not, you can find not only the source but also extensive information about the program at . Most of the parameters work in exactly the same way as gzip but some differ. The extension for compressed files here is .bz2. Access rights and timestamps are likewise maintained, and here, too, you're not allowed to overwrite files by default. If, on unpacking, an attempt is made to overwrite an existing file, no query is issued as with gzip. Instead you get the message: bunzip2: Output file file.bmp already exists, skipping. If you want to circumvent this, then, as with gzip, use the option -f (for --force). The actual differences with bzip2 are subtle. For example, the file to be compressed need not be automatically deleted: you can keep a copy by typing bzip2 -k (for --keep). The feature bzip2recover has already been mentioned, but what takes place when it runs is quite interesting. During compression bzip2 packs files in separate blocks. Should a file be damaged for any reason, the data contained in the blocks that remain intact can be rescued if necessary (for more precise details you should refer to the man page).
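The -k and -f options can be tried together. A sketch with an invented file name:

```shell
# -k (--keep) compresses data.txt to data.txt.bz2 without deleting the original.
echo "keep my original" > data.txt
bzip2 -f -k data.txt
ls data.txt data.txt.bz2
# Unpacking would normally skip because data.txt still exists; -f forces it.
bunzip2 -f data.txt.bz2
```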

tar for the archive, mate!

One of the things you can do with tar is combine several files into an archive - handy if you want to transfer many associated items from one computer to another. This single archive file can then be compressed more easily as one entity. To create such an archive, enter the following:

user@host ~ > tar cvf archive.tar directory
directory/
directory/file.html
directory/test/
directory/test/file2
directory/text

If we break it down into the separate parameters we see the following: the option c stands for --create – that is, create a new archive. If one decides to get by without the v for --verbose, no file names are listed during the archiving process. The three letters tar incidentally stand for "Tape ARchiver": originally the program was intended for backing up to tape drives. That is why f archivename is used here, to indicate that tar should not write to such a device but to a file on the hard disk. Following this, of course, a name has to be specified – archive.tar. All the directories, files and subdirectories are written to the archive file. If you want to append further files to the existing archive you can invoke tar rvf archive.tar furtherfile – where r stands for --append. To be certain that this file doesn't already exist, you can of course view the archive beforehand: tar tvf archive.tar, where t can also be replaced by the long form --list.

If access rights and ownership are to be maintained you should use tar pcvf archive.tar /home (p stands for --preserve-permissions) – if the directories are unpacked again the files will be restored in their original state. To unpack an archive you can use tar xvf archive.tar, where x stands for --extract. If individual files are to be extracted from the archive you can append their names to this call. Finally, bear in mind that tar does not compress automatically. Naturally, an archive of this type can still be packed using gzip or bzip2, but you can avoid this second step and deal with everything in one go: tar czvf archive.tar.gz directory also zips the archive at the same time (in reality the external program gzip is invoked). In the same way, z is used to unpack a compressed package of this type – the command is tar xzvf archive.tar.gz. If you prefer to use bzip2 instead of gzip you should check beforehand whether your own particular distribution provides for this (read the man page!). Invoking tar cIvf test.tar.bz2 directory (with a capital 'i') worked on the test computer under Debian 2.1, although a second computer declined the instruction: tar: invalid option – Try `tar --help' for more information. (hge) ■

HOWTOS: In contrast to man pages, which are created mainly for reference, HOWTOs provide instructions on how to overcome specific problem areas and are thus much more geared towards the beginner. In current distributions they are to be found under /usr/doc/HOWTO. You can find, for example, the file Firewall-HOWTO.gz, which you can unpack and then read, or else view directly with zless or zmore.

stdout: There are three standard channels for input and output: stdin (standard input), stdout (standard output) and stderr (standard error output). A user, for example, has the keyboard as standard input and the screen as standard output. If you decompress a file using zcat (gzip -d -c), then, provided it has not been redirected, it will be output to the screen.

Symbolic link: A reference to another file which is handled transparently by the system. If the file that a symlink points to is deleted, the link is left with an empty pointer. Symlinks are created using the ln -s command.

Environment variable: The shell provides the user with storage space for saving certain information which can then be accessed by programs. These environment variables each comprise the name of the variable and the value assigned to it. ■
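The tar operations described above can be tried end-to-end. A sketch (GNU tar is assumed for the built-in z compression; the file names are invented):

```shell
# Build a small directory tree, then archive and compress it in one go.
mkdir -p directory/test
echo "hello" > directory/file.html
echo "world" > directory/test/file2
tar czvf archive.tar.gz directory   # c = create, z = gzip, v = verbose, f = to this file
tar tzvf archive.tar.gz             # t = list the contents without unpacking
rm -r directory
tar xzvf archive.tar.gz             # x = extract (and gunzip) again
cat directory/test/file2
```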



Step-by-step tips for adding an application to the KDE menu


Step 1
To start the menu editor, right-click the K button on the panel and click Properties from the pop-up menu. The menu editor will appear, displaying two menus. On the left is the local menu. Changes made to this will affect only the user you are currently logged in as. On the right is the default menu, which all users can see. You must be root to make changes to this menu.

When you install an application on your Linux system, it may not automatically appear in KDE’s K menu. There are several reasons for this. Not least is the fact that the compiler of the package does not know what graphical environment (if any) the installer of the software wishes to use. But it isn’t a big problem. KDE provides a menu editor to enable you to customise the menu and add new applications to it in exactly the place you want. Note: in some distributions applications are added to the menu if you install them from a package using the distribution’s own package manager. But to retain control over how your menus are configured it is better to add applications yourself.





Step 2
To add an item to an existing menu, select the menu item in the menu editor, right-click it and click Add. An empty menu entry will appear. The Type: field should show Application. Type a name for the menu item in the Name: field, where it says EMPTY, and type a more descriptive comment in the Comment: field – this will be used in a tooltip.

Step 3
Click the large icon button and select a suitable icon from the gallery that is displayed. Click the mini icon button and select a different small icon, too, if you wish. In the Execute: field type the path to the executable program. If you don't know where it has been installed, open the package or archive you installed the application from and look at the list of files.

Step 4
If the application can be used to open files of a particular type, click the Application tab. On the right you will see a list of MIME types for different sorts of file. Select the MIME types for the files that this application can open and use the arrow button to move them to the right. This will tell KDE which types of file this application can be used to open. Click OK when you are finished, to create the menu entry.



GNOME 1.2 Desktop



…users and prematurely written off by some distributions, the GNOME desktop environment fights back with a new version. Here, we take a look at the most important components of Helix GNOME 1.2 to help you decide if an upgrade or new installation is worthwhile.

About four years ago a group of Linux enthusiasts on the Internet got together with the aim of developing a graphical user interface for Linux that would stand comparison with Windows and Mac OS. It's true that at that time there were already several window managers – for example, fvwm2 and Afterstep – for the X interface; with CDE a desktop environment was even available. However, these solutions all had disadvantages – the window managers offered neither session management nor support for drag and drop; CDE was commercial and obsolete – so there was a real need for a new graphical interface to be developed. There were soon differences of opinion about how (and more precisely, using which GUI toolkit) this interface should be developed. First up for discussion was the Qt library from the Norwegian firm Trolltech. The most diehard open source supporters however complained that Qt (at that time) was subject to a commercial, non-free licence. They pointed out that a free library was available in the form of the Gimp Toolkit (gtk/gtk+) which allowed graphical

elements to be produced in a similarly easy manner. The differences were not resolved, so in the end some of the developers decided to support the Qt-based KDE project whilst the others started to create the gtk-based GNOME project.

Co-operation

Over time, not least because of the subsequent open source licensing of Qt, the former rivalry between the two projects gave way to a spirit of cooperation. The developers of both teams are now anxious to integrate application programs which have been written for the other desktop environment into their own interface. However, KDE has so far enjoyed greater success than GNOME, especially in Europe. The fact that some Linux distributions now rely entirely on KDE (examples include Caldera eDesktop 2.4, Corel Linux OS 1.0 and easyLinux 2.0) has particularly hindered the advancement of GNOME, as has the practice of making KDE the default choice of the installation routine so that

Helixcode

The American firm Helixcode, Inc., cofounded by GNOME evangelist and Midnight Commander author Miguel de Icaza, has made it its goal to help GNOME break through as an Internet desktop. In addition to supporting the GNOME project by making the GNOME extension "Helix GNOME" available at no cost and by developing further open source solutions in the office and groupware area, the company makes its money from support and network services. ■




[left] GNOME uses the new window manager Sawmill [right] From the GNOME Control Centre you can select a new window manager.

GNOME is only available after further configuration steps. Red Hat is at the moment the only major distribution that uses GNOME as its standard desktop. But reports of GNOME’s death are exaggerated. With the jump from version 1.0 to version 1.2 GNOME, now supported by firms like Helixcode Inc., could possibly even succeed in making a comeback. We shall show you what features are included in the new version of GNOME and how you can integrate the Helixcode GNOME Binary Release 1.2 into your existing Linux system.

Window manager

Window Manager

When switching into graphics mode (the X Window System) it is initially only possible to display a grey pattern and to put characters or images on the screen. A window manager brings some colour into the grey world of Unix and Linux and makes it possible to open and close, maximise and minimise and move application windows. It also creates the window frames. ■

GNOME in the narrowest sense consists mainly of the GNOME Panel, the GNOME Control Centre, drag and drop functionality and a session management system. Whilst KDE with its kwm already has its own window manager, GNOME leaves it up to the user as to which window manager is used. Realistically, however, it must be accepted that so far

Installation of Helixcode GNOME 1.2
If you have already installed a recent version of GNOME and it can be called up from the KDM login screen (see the selection list Session), the installation should be straightforward. For this you can use the distribution-specific rpm packages. Helix GNOME is available for download in binary packages for several distributions; advanced users will find the latest GNOME sources online. On the CD, unfortunately, there was insufficient space for the latest GNOME version. However, Linux Mandrake 7.1 contains GNOME rpm packages, though not the most up-to-date versions.


only a handful of window managers are capable of working well with GNOME. In the current version 1.2 HelixCode only supplies Sawmill as a window manager. If you prefer another GNOME-capable window manager, you can activate this after any necessary re-installation in the GNOME Control Centre under Desktop, Window Manager.

Flexible Panel The GNOME Panel is the focal point of everyday work on the GNOME desktop and can be freely adapted to your own requirements. You can also set up several panels at the same time. Apart from the main menu (accessed using the foot icon) you can add and remove your own user menus, application launchers and applets. GNOME applets correspond to KDE’s docking widgets: they are usually utilities that can be integrated in an active form into the panel in order to have important information or functionality immediately to hand. In the main menu under Panel you will find the submenu Add to Panel which contains all kinds of applets and utilities. In addition you can add an ordinary application to the panel using drag and drop from the Programs menu. This is a handy feature for those applications that you launch particularly often.

Control Centre Just like KDE, GNOME also has a control centre. It can be reached from the menu (Programs, Configuration, Control Centre) or by clicking the toolbox icon on the panel. The Control Centre can be used to customise the behaviour of the desktop. Currently the following options exist: • Desktop (screensaver, window manager, background, panel, theme) • User interface (applications, dialog boxes, multidocument interface) • Window manager settings (dependent on the window manager selected) • Multimedia (mainly noises, sounds, …) • Peripherals (mouse, keyboard, possibly Palm Pilot…)



• Session (start sequence, tips on starting) • Handling documents (MIME types, opening URLs, editor…)

Drag and drop GNOME supports drag and drop. This functionality allows you to move objects by dragging them (by keeping the left mouse button pressed) from one point to another. Although the capability to support drag and drop is built into GNOME, many GNOME applications don’t yet take advantage of it. If you are used to working with Microsoft Windows you may notice this lack of drag and drop functionality. Cut and paste is another feature of modern desktop environments which GNOME of course supports. Although classical X11 applications such as XEmacs have long supported the marking, copying, cutting and pasting of portions of text within a document, as soon as you want to cut and paste objects other than text you need the support to be built into the desktop environment itself.

Session Management Another key area of functionality for a modern desktop is session management. This means the ability for each user of the system to have their own personalised desktop. The first time a user logs in using GNOME, it creates a special folder in which all the desktop settings for that user will be stored. If you make changes to the desktop or leave applications open when you close the GNOME session, GNOME records the final state of the desktop and tries to restore that state, as far as possible, the next time you log in.

Midnight Commander To conclude our brief tour of GNOME 1.2 we will mention the graphical file manager GNOME Midnight Commander. This is GNOME’s answer to the KDE file manager KFM. If you have used MS-DOS you may have used the utility Norton Commander, which was the inspiration for the console-based Midnight Commander for Linux. GNOME Midnight Commander is a graphical version of this powerful file manager which can be accessed from the menu under Programs, File Manager or from the document icon on the panel. You can use this utility to carry out all kinds of file management tasks using the mouse, and even to connect to other computers using the FTP protocol.

Upgrade or not? GNOME has grown up a good bit since the previous version. However, it cannot be denied that problems and program crashes still occur. In most cases these can be traced back to the window manager being used. If you already have an older version of

GNOME installed on your system you should acquire and install the latest version of GNOME right away (see box “Installing Helixcode GNOME 1.2”). Even if you don’t change over immediately you will still benefit from an update of the basic libraries which will be used by the gtk-based applications running under KDE. But if you don’t yet know GNOME, perhaps because your distribution only contains KDE, there is a lot to be said for trying it out. After all, what is the value in having a choice if you don’t use it? One of the many good things about Linux is that it doesn’t force you to use a particular desktop. You can’t say that about a certain other well-known operating system, can you? ■

[top] Everything to hand — the GNOME Panel [middle] Everything is configurable from the Control Centre [below] GNOME’s Midnight Commander in action




A tour of some new Gnome applications


As well as being a powerful desktop environment, Gnome boasts an increasing number of high quality gtk+-based applications, providing a good reason to use it as your graphical desktop. Andreas Huchler takes a tour through some of the most important applications included in the current Helixcode Gnome 1.2.

gtk+: So that every programmer doesn't have to reinvent the wheel, a variety of programming libraries have grown up over time containing code that can be used again and again. gtk+ is an extension of the gtk library used to create the image processing program Gimp (hence the name gtk: Gimp Toolkit). Libraries that produce graphical elements are often labelled toolkits (e.g. gtk, FLTK, Tcl/Tk). ■

You can of course run Gnome or gtk+ based programs under KDE after installing the basic Gnome packages. A popular example, which we'll come back to later, is the image processing package Gimp. There are two reasons to use gtk+ based applications exclusively under Gnome: the look and feel is more in harmony with the Gnome desktop, and several desktop functions are better supported. Of course, in the end it's a matter of taste. But many users believe that the widgets, buttons and icons of gtk+ based applications have their own charm and, aesthetically at least, can compete with the Qt-based graphics components used by KDE and its native applications. A standardised look and feel can be achieved using Gnome; it only suffers when, for lack of a gtk alternative, you have to fall back on an X11 application that relies on a foreign graphics library. However, an important aspect of functionality provides another reason to use Gnome: features of modern desktops such as drag and drop or cut and paste usually only work between applications created using the same libraries. For example, a normal text file in Gnome can easily be dragged and dropped from the Gnome file manager


(Midnight Commander gmc) to the word processing program AbiWord, also gtk+ based, so that it can be opened there. However, attempts to drop the same file onto the Qt-based text editor KWrite will fail.

Another office package
The Open Source project AbiSuite is to Gnome what the mammoth free project KOffice is to KDE. Managed by SourceGear Corporation, the intention is to create a gtk+ based cross-platform Open Source office suite in co-operation with the free software developers involved in the project. AbiWord, the word processing module, is now complete enough to be used to produce simple documents. It was therefore included in the current Gnome release from Helixcode. In addition to the usual text editing and formatting functions AbiWord even has a spellchecker. Its native file format is .abw (or .zabw when zipped); .rtf and Word 97 files can be opened too. The import filter has not yet been optimised and, as is often the case with other word processing packages, the usual conversion errors can sometimes occur when Word files are imported. If you are mainly concerned with importing smaller documents with fairly simple formatting into AbiWord, the import filter should meet most of your needs.



The documents produced by AbiWord can currently be saved in its own file format or as .txt, .rtf, .html or LaTeX files. The current version of AbiWord (0.7.9) is useful for viewing a wide range of text files and creating simple text documents. Although it offers the basic functions of a modern word processing system it cannot yet compete with professional rivals such as StarOffice or WordPerfect.

Gnumeric kills Excel
In the past, anyone who needed a decent replacement for Excel under Linux had, almost inevitably, to rely on the spreadsheets integrated into the office packages of commercial software manufacturers (such as StarCalc in StarOffice). But for some time now, Gnumeric has provided a GPL spreadsheet package that aims to beat Excel. It already offers almost everything you could want from an up-to-date spreadsheet package, such as simple mathematical functions and statistical analysis processes (ANOVA, regression analysis etc.). There are also practical filter and sorting algorithms. One of the few weak points at the moment is the lack of a charting wizard. In addition to the native Gnumeric XML file format there is a wide variety of import and export filters for common and exotic file formats (diff, comma separated, HTML, Excel 95/97 and so on). An easily extendable plug-in interface ensures that future formats, including those from commercial manufacturers, can be integrated easily. The development group under Gnome co-founder Miguel de Icaza is both dedicated and competent, making many people keen to see what they come up with next.

Dia: diagram editor
Dia is an ideal tool for software engineers. If you are familiar with Visio for Windows you may believe there is no Linux alternative. However, you should take a closer look at the diagramming program Dia. The current version (0.85) provides several interesting features which allow you to

produce diagrams for various contexts quite easily. Special objects for the following types of application have been implemented to date: • Circuit • Ladder • ER (Entity-Relationship) • Electric • Flowchart • GRAFCET • Pneumatic/Hydraulic • UML (Unified Modelling Language) • Chronogram • Civil • Network • Sybase The program has so far been aimed mainly at engineers and IT professionals. However, because it also provides several elementary diagram symbols (circle, polygon, curve, straight line, …), it can also be useful in other fields.

Work in Progress: AbiWord Personal still lacks many important features

[left] Gnumeric has all the frills you’d expect of a good spreadsheet [right] UML diagrams made easy




All of this makes GIMP a more than adequate alternative to commercial image processing programs running under Windows.

Graphics assistant for Gnome In addition to the new version of the classic GIMP, Gnome 1.2 provides two other graphics tools that were designed for use under Gnome. Eye of Gnome is a simple image viewing and cataloguing program. However, its functions have so far (Version 0.3.0) been limited to displaying images with various zoom settings. As the name suggests, Icon Edit allows users to create and edit icons for Gnome. Unfortunately, in version 1.0.6 there are still some problems with the window size settings which occur regardless of the window manager used.

[above] GIMP has improved quite considerably in terms of ease of use and user-friendliness

gPhoto: digital camera software

[right] GPhoto already supports more than a hundred digital cameras

Ripper: A ripper reads digital music information from audio CDs and uses it to create wav files. MP3 encoder: An MP3 encoder converts sound files into the highly compressed MP3 format. MP3 files are usually only about 10% of the uncompressed size but with virtually no loss in quality. MP3 encoders vary quite considerably in terms of encoding quality and speed. ■

Making MP3s at home with Grip

Application development made easy
Have you written a console-based program and would like to add graphics in Gnome's look and feel without having to learn the details of gtk? No problem! Just use GLADE, a RAD (Rapid Application Development) tool for gtk-based applications. GLADE allows you to put together an attractive GUI with just a few clicks of the mouse. The finished product can then be stored as an XML file. From this, GLADE can produce source code in C, C++, Ada 95, Perl or Eiffel.

The classics of image processing The GIMP (GNU Image Manipulation Program) is one of the Open Source community’s projects that provides a model for others. It is a universal image processing program that, like Photoshop, is particularly suitable for retouching photos or creating and managing image files. GIMP’s 1.2 pre-release 1.1.22 has much to be proud of – it is also proof that its developers are attaching more importance to usability than they have in the past. As soon as the installation program has started, users will be pleasantly surprised that developers have implemented attractive user dialogs. GIMP is also substantially easier to use. Although many new features have been added since the earlier versions, you can still usually find the function you want straight away. Detailed and context-sensitive help is provided, helping users to understand the package.


Are you the proud owner of a new digital camera or are you going to buy one soon? Well, beware – you’ve more chance of finding Lord Lucan in the box than a Linux software disk. If you would like a camera-PC connection under Linux, however, take a look at the latest version of gPhoto – preferably before you buy the camera! The program supports more than a hundred different digital cameras and provides an interface for managing the downloaded pictures on your PC.

Gnome sound studio The Gnome-media package contains a variety of sound tools for Gnome. gtcd is a simple CD player with CDDB support. grecord allows you to record and play back your own sounds. gmix incorporates a user-friendly sound mixer with which you can control the features on your sound card.

Grip ripper and MP3 encoder
grip is a graphical front-end for various CD rippers and MP3 encoders. The program also contains the fairly adequate CD player gcd, which you can also choose to install separately without the ripping and encoding functions. Grip is actually a front-end for the cdparanoia and cdda2wav console rippers. However, there is the option to integrate other rippers. You have a choice between the six MP3 encoders included or others. If you have Internet access, Grip can automatically look up the ID3 tag information for your newly created MP3 files from a CDDB server. With the help of this information, the program can determine the song and album titles of the new MP3s and thus set up an appropriate directory structure for your MP3 store.
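The directory layout such a front-end builds from CDDB data can be sketched in a few lines of shell. This is not Grip's actual code and the tag values below are made up; it simply illustrates the artist/album hierarchy described above:

```shell
# Hypothetical tag data, as a ripper might receive it from a CDDB server
artist="Some Artist"
album="Some Album"
track="01 - Some Song"

# Lay the files out as <base>/<artist>/<album>/<track>.mp3
base="$HOME/mp3"
mkdir -p "$base/$artist/$album"
touch "$base/$artist/$album/$track.mp3"
ls "$base/$artist/$album"
```

The quoting matters: CDDB titles routinely contain spaces, so every expansion is wrapped in double quotes.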

XMMS MP3 player xmms is a fairly good Winamp clone under Linux. Like its Windows counterpart, it offers an equalizer, a playlist editor and support for skins and plugins as


well as the ability to play MP3s. A variety of skins is available for download, allowing you to change the appearance of your MP3 player (you can also use the original Winamp skins as the format is identical). Plugins downloaded from the XMMS site let you greatly extend the functions of the player. For example, you can control the behaviour of your MP3 player using an infrared remote control or IBM's ViaVoice. Finally, there is a range of popular plugins that allow MP3s to be visualised as graphics.

Multi-purpose Internet clients Are you wondering where friends have suddenly got all their great sounds from? The answer is probably via a Napster client, which lets you retrieve new MP3s from the world-wide Napster network. Although this practice is on shaky legal ground, there is nevertheless a user-friendly Napster client for Gnome. It’s called gnapster and offers a huge variety of functions to satisfy most needs. In addition to the standard MP3 search over the official Napster server, users can establish a connection to the OPENNAP servers where audio, video, image and other files can be found. In contrast to its Windows counterpart, gnapster shows the files available not in a long list but in a clear tree structure. gnapster also allocates any incomplete download files to their own directory so that you can see at a glance whether the download was successful. A resume function is included of course! However, be careful when using gnapster. There are now programs such as Media Enforcer that register illegal Napster activities. There have also been reports of gaps in Napster clients’ security which make it easy for attackers to spy on your system.


with the ICQ server using a unique ID can be sought and found by other users, who can then ask for a live chat. If you are online and have started your ICQ client you can see whether friends – whose ICQ client must be active too of course – are also on the Internet. You can then chat or swap files and interesting URLs over GnomeICU. In principle, the AIM service from AOL works in the same way as ICQ but offers several other features. As AOL itself doesn’t yet provide an AIM client for Linux that can be downloaded, several developers have had to help themselves and write their own client. The result of their efforts is gaim, which now enables you to contact other AIM members under Linux. These are two great programs if you know people who are often online. But be aware of the increased security risk of running them.

[left] A must for all MP3 fans and Winamp admirers [right] File sharing on the Internet with the gnapster client

Web sites containing gtk+ programs: ■

IRC with X-Chat
If an occasional chat with friends over ICQ or AIM isn't enough for you, and you would like to make new friends on the Internet, try X-Chat. X-Chat is an IRC

Find friends on the Internet with GnomeICU or Gaim.

ICQ or AIM
The Internet services ICQ (pronounced: I Seek You) from Mirabilis and AIM (AOL Instant Messenger) from AOL are just as insecure as the Napster service – but are also just as useful. The name ICQ actually describes the program well. Users who register



client (IRC stands for Internet Relay Chat), which allows you to enter the wide world of IRC junkies by joining an IRC server. The program actually offers everything that programs like mirc provide for Windows users. Of course, you can add various plugins and perl scripts to make your IRC session even more fun.

Want to chat on the Net? X-Chat makes it easy.

Pan newsreader

Pan takes care of your subscriptions to newsgroups

[above] gtop and logview provide you with first-hand information on the status of your system. [below] All sorts of useful things for the Gnome panel: mail monitor, clock, battery indicator, modem lights ...

Another way to chat on the Internet is through Usenet. Usenet provides newsgroups which are the notice boards of the Internet. There’s a newsgroup for almost every subject imaginable and pan lets you get at them under Gnome. After logging on to a news server you can request a list of newsgroups from it. Once you have chosen a group you can then subscribe to it and read the individual postings. After this, pan takes care of almost all the administration relating to group subscription and article selection. For example, you can instruct pan to monitor particular groups for new postings and display anything it finds.

gtop and logview system monitors
Every now and then you might need detailed information on the status of your system. In days gone by, you needed to be familiar with console commands to obtain this kind of information under Linux. Graphical desktops mean this is no longer absolutely necessary. For example, you could use the gtop and logview system monitors, which are typical examples of system tools under Gnome. gtop is the graphical gtk front-end for the top console tool. It presents up-to-date information about running processes, memory capacity and the space currently available on your

computer’s file systems. The data can, of course, be updated almost in real time if required. Unlike gtop, you are required to have administrator rights when you use the system log monitor logview. You can use it not only to view your system’s various log files but also to monitor them for irregularities. If you would like to ensure that unusual activities at certain ports do not go undiscovered you can instruct logview to monitor particular log files for certain patterns, with the help of regular expressions, and, where necessary, to start particular actions (such as sending a warning message to the user of your system).
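The kind of pattern-based scan that logview automates can be imitated on the command line. The log file, the pattern and the warning below are our own invention, not logview's configuration; the sketch merely shows the principle of matching log entries against a regular expression and reacting to hits:

```shell
# Write a small demonstration log file
log=/tmp/demo-messages
printf 'daemon started\nfailed login for root from 10.0.0.5\n' > "$log"

# Scan it for a suspicious pattern and raise a warning if anything matches
if grep -q 'failed login' "$log"; then
    hits=$(grep -c 'failed login' "$log")
    echo "warning: $hits suspicious entries in $log"
fi
```

Running this prints a single warning line; logview goes further by watching the file continuously and triggering actions such as messages to the system's user.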

Applets for the panel
The programs we have seen so far are launched either via the Gnome main menu (the paw print) or directly from an X terminal. However, Gnome also offers what are known as applets. These are smaller utility programs that are integrated into the panel when activated so that they are always available. Typical applets include clocks, status indicators and monitor lights. Here's a brief introduction to three particularly useful examples from the collection of applets. The clock and mail monitor applet and the mail monitor applet both watch your mailbox for new mail. Although clock and mail monitor combines two functions, the current version can only register incoming mail on the local computer. If you would like to monitor a mailbox on a remote POP3 or IMAP mail server, you should use the mail monitor applet instead. The modem light applet connects you to the Internet at the touch of a button and closes your session again just as quickly. During your time online, you can obtain information on connection time and throughput. In theory, ISDN users can also use this applet to monitor their ISDN card. If you are the owner of a laptop, you may be interested in the battery status applet. Provided that your battery and laptop support Advanced Power Management (APM), this applet will always show you your laptop's current battery status and warn you if it falls below a minimum level.

A rich assortment
We've looked at just a small selection of Gnome applications. Of course, there are hundreds of others, all of which can be used as alternatives to their KDE counterparts. These programs are all contained in the current Helixcode Gnome version 1.2. On the web sites listed in the panel you will find many more gtk+ based programs. Even if there isn't always a gtk-based software solution – as is the case with modem and ISDN tools like kppp or kISDN – the selection introduced here shows that Gnome now boasts a sufficient number of “native” application programs to establish itself as a KDE alternative to be taken seriously. Projects such as Helixcode may, in future, help make Gnome more attractive. ■



Jo's alternative desktop

SOMETHING A LITTLE SPECIAL

You alone determine the appearance of your Linux desktop. Here, Jo Moskalewski takes a look at an alternative to the well-known window managers and desktop environments which you might like to try.

Away from the mainstream of KDE and Gnome can be found EPIwm, a project started by a French group of programmers. The aim was to create a small and fast window manager with a simple configuration and an extensive range of functions. The project's success – particularly with regard to the list of features – is very impressive. Here is an extract from top, for example, which lists information about active processes (programs):

PID USER PRI NI SIZE RSS SHARE STAT LIB %CPU %MEM TIME COMMAND
730 jo 1 0 1324 1324 1028 S 0 0.0 1.0 0:05 epiwm

The RSS column gives the total memory used by a program – in our example this is 1.3MB. You can look out for new versions and further information about EPIwm on the project's home page (see the Info box). You will also find there a link to a graphical configuration tool which saves you the bother of editing configuration files. However, this tool can produce bad configurations and so is only recommended for advanced users. Even without this kind of front-end, the configuration is easy.
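You can make the same kind of measurement yourself. This sketch (assuming a system with a standard ps) reports the resident set size of a process; we query the current shell here rather than epiwm, but any PID will do:

```shell
# Ask ps for the resident set size (RSS, in kB) of a single process.
# $$ is the current shell; substitute the PID of epiwm (730 above)
# to reproduce the figure quoted from top.
pid=$$
rss_kb=$(ps -o rss= -p "$pid" | tr -d ' ')
echo "process $pid uses ${rss_kb} kB of resident memory"
```

The `rss=` form suppresses the column header, so the output is just the bare number, convenient for use in scripts.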

Ready for EPIwm?
In order to install EPIwm, your hard disk should contain at least the xdevel (the header files for X) and make packages. However, if you would

rather install a graphical program from a tar.gz archive, you should first install the relevant development packages for imlib, libpng, libtiff, libxpm, libgif and libjpeg. Only these will enable all the features contained within EPIwm. If the package EPIwm-0.5-5.tar.gz is in your home directory it can be unpacked quite easily using the tar tool: a new directory is then created containing the program code, after which you switch to it. An example would be:

jo@planet ~> tar xvzf EPIwm-0.5-5.tar.gz
[ ... ]
jo@planet ~> cd EPIwm-0.5-5
jo@planet ~/EPIwm-0.5-5>

Typing make should compile the window manager from the program code, resulting in an executable binary file. If make stops due to an error you'll either have to install the missing program packages or adjust the Makefile, which tells make what to do. You will find instructions for doing this in the INSTALL file.

Buggy? Buggy!

Header files: When developers write programs they include the header files of foreign program libraries: these header files contain the most important information about the functions already provided by the libraries. This helps the compiler when it runs. The libraries referred to here describe the X-Header files that are used for programming X-Window programs. ■

But the program isn’t actually installed yet! Only administrators (i.e. root) may undertake this task. Unfortunately, there’s a small bug in this procedure – CVS files cause the installation to fail. Fortunately they are not actually needed, so the problem can be solved quite easily by deleting the files in the following way: 10 · 2000 LINUX MAGAZINE 95



invent their own. We recommend the following method: deactivate graphical login (which should always be done before a new window manager is tested) and ensure that the user does not own any files that control the start of X (in particular, ~/.xinitrc or ~/.xsession). The following commands should then produce the desired effect in all distributions:

jo@planet ~> export WINDOWMANAGER=epiwm
jo@planet ~> startx

Or:

jo@planet ~> startx epiwm

Initial steps

Figure 1: EPIwm’s Standard look with its own tools (including Tkgoodstuff, gkrellm and the file manager F)

jo@planet ~/EPIwm-0.5-5> rm -rf config/CVS
jo@planet ~/EPIwm-0.5-5> rm -rf bin/CVS
jo@planet ~/EPIwm-0.5-5> su
Password:
root@planet:/home/jo/EPIwm-0.5-5> make install
[ ... ]
root@planet:/home/jo/EPIwm-0.5-5> exit
jo@planet ~/EPIwm-0.5-5> cd
jo@planet ~>

The directory containing the source code is no longer required and can simply be deleted. But before starting EPIwm for the first time each user must create a set of configuration files in their home directory. This can be done by running epiwm.inst. A broken or faulty configuration can be reset to its original values in the same way.

Light at the end of the tunnel!
There are a number of ways to start a window manager and many distributions, rather unnecessarily,

Figure 2: Oclock with border

EPIwm should now be king of your desktop. Press the left mouse button and a start menu should appear. Your desktop should now look like the one in figure 1. Here is how to use the program: • left mouse button: start menu • centre mouse button: task list (active programs) • right mouse button: window options • left window icon: minimise • 2nd window icon: maximise • 3rd window icon: maximise window height • right window icon: close window The mouse can be moved beyond the right-hand edge of the screen so that it then appears in the second virtual desktop.

Made to measure
The list of features is long and the default settings are mostly sensible and acceptable. Therefore just a few points about the configuration need to be made. All settings can be found in the user's own configuration files under ~/.epiwm, divided into individual files by subject:

~/.epiwm/icons
IconWidth 48
Often the user doesn't see an icon (if an icon has been allocated at all) – just text. Although this is not a disaster (after all, in our experience, users use the text rather than the graphics to guide them anyway), those who are unhappy with it can simply try typing 200 here.
IconFont fixed
The xfontsel, gfontsel or kfontmanager tools list which fonts are available.

~/.epiwm/key
The keys can be adjusted to suit the user's own requirements and additional shortcuts can be defined for programs.

~/.epiwm/menu
MenuColor H dimgrey grey
Unfortunately, the default colours make it difficult to read the menu: it uses a horizontal (H) graduated fill from dimgrey to grey. Instead you can use a vertical graduated fill, specifying the colour using the hex RGB colour coding similar to that used when designing web pages:
MenuColor V #B2337A #BFBFBF
A colour value rather than "H" or "V" obviously indicates that it is all one colour.

~/.epiwm/start
This is the Autostart folder of EPIwm. Anything entered here automatically starts with EPIwm. The default setting for the background is taken from xsetroot. The interesting part of the configuration is that it is possible to allocate individual programs particular window characteristics from the start. The following keywords can be used:
• NoTitle: title bar is not displayed
• NoBorder: no border
• Sticky: visible on all virtual desktops
• StayOnTop: not covered by other program windows
• WindowListSkip: not listed in the task list
Here's an example: the clock oclock is to appear on the desktop. So we cheerfully type oclock into an XTerm and a window like that in figure 2 appears. This is not really what we want, of course, so we place the following entry in the ~/.epiwm/style file:
"oclock" NoTitle NoBorder Sticky WindowListSkip
Furthermore, if the ~/.epiwm/start file contains the entry Init oclock &, our newly configured clock always appears on the desktop (see figure 3).

Figure 3: EPIwm in its element

~/.epiwm/window
This defines the appearance and behaviour of the window itself. It's quite difficult to stop experimenting with this option. One highlight is the option to swap the window buttons for your own graphics – the graphics format used must correspond to one of the formats whose development packages were available when the program was compiled (this means, for example, only PNG graphics if only libpng-devel was present during compilation).

~/.epiwm/workspace
WorkspaceChangePercent 100
As you can also place a window between two desktops, many find it helpful to scroll only half a screen:
WorkspaceChangePercent 50
If you now go beyond the right-hand edge of the screen, you will go only half a screen further.
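The hex colour values in MenuColor follow the HTML convention of two hexadecimal digits per red, green and blue channel. If you know a colour only by its decimal components, a one-line printf produces the form EPIwm expects:

```shell
# Convert decimal RGB components (0-255) into an HTML-style hex colour
r=178; g=51; b=122
printf -v colour '#%02X%02X%02X' "$r" "$g" "$b"
echo "$colour"   # → #B2337A
```

The result matches the #B2337A value used in the MenuColor example above (178, 51 and 122 decimal). printf -v, which stores the result in a variable instead of printing it, is a bash feature.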

Shortcut: Shortcuts are key combinations that allow the user to reach frequently used menu commands in a program more quickly. For example, many programs use [Alt+Q] or [Ctrl+Q] to quit – not to be confused with [Alt+F4], used by many window managers to close a window. ■

WorkspaceResistance 150

This determines how long the mouse must rest against the right-hand edge of the screen before the view moves to the next desktop.
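Pulling these snippets together, a minimal set of EPIwm configuration files might look like this. The entries are exactly the examples from the article; the # lines are annotations for the reader, as the article does not say whether EPIwm itself accepts comments:

```
# ~/.epiwm/style – per-window attributes
"oclock" NoTitle NoBorder Sticky WindowListSkip

# ~/.epiwm/start – programs launched with EPIwm
Init oclock &

# ~/.epiwm/workspace – scroll half a screen, with an edge delay
WorkspaceChangePercent 50
WorkspaceResistance 150
```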

Info EPIwm home page:

Conclusion

EPIwm is released under the GPL and is therefore available free of charge. In fact, EPIwm exacts a small price for its non-commercial development. For example, programs with the -geometry option, which allows users to position them from the command line, create problems. On our test system, for example, it was not possible to use ATerm (a modest replacement for XTerm). However, this aside, EPIwm is highly recommended to anyone seeking an alternative desktop. ■

10 · 2000 LINUX MAGAZINE 97



Package installation made easy

UNWRAPPING THE PACKAGE

Unlike their Windows counterparts, Linux programs rarely come complete with a setup program that checks your disk for free space, creates an installation folder and sets up icons for you to run the program. Under Linux this must all be done manually. But don't panic: Hans Georg Esser shows you it's not as hard as it sounds.

There are many different types of software archive in the Linux environment. Before we look at how to install them, let's find out a bit more about them.

Binary: Binary files, often just called »binaries«, are programs that have been compiled into a form that can only be understood by the computer. They are kept in directories such as /bin, /usr/bin and so on.

Source: Source files are the original text files containing the program statements (in a language such as C or C++) that the programmers wrote. You can read them, though unless you're a programmer too you might not understand them! ■

rpm packages If you’re already running Linux you’ll no doubt have already found many files with names ending in .rpm. this stands for Red Hat Package Manager and identifies archives which have been compiled according to a standard introduced by Red Hat. Nowadays this format is used by almost all distributions. But there’s an important difference between binary RPMs and source RPMs. Binary RPM packages contain the executable files as well as configuration and miscellaneous other files that go together to form the application. In addition to this, a binary RPM archive holds information on what to do immediately before and after the installation. It also works out if any other packages are required for the installation and if so, if any file conflicts might occur. Source RPMs contain the program source code (the text written by the programmer, usually in C or C++), together with instructions showing how to compile this code into something useful. From a source RPM you can produce a binary RPM package. More on this later. In most cases, RPM packages can be installed either by typing a command into a terminal window or by using a graphical tool under KDE or Gnome. In order to do this, system administrator (root) rights are required. So before starting you must login as user root or use the su command. Owing to differences in individual Linux distributions it’s often the case that a particular RPM package can only be installed on the distribution for which it was created. Therefore, when searching for files on the Internet you’ll frequently find different RPM packages for different distributions. The filename lets you recognise the version of the program and the platform for which it was created. A typical example might be: kpackage 1.3.10 3.i386.rpm The version number here is 1.3.10. The number 3 after this means that the package has been created three times, implying there are packages num-

98 LINUX MAGAZINE 10 · 2000

bered 1.3.10-1 and 1.3.10-2 also around which perhaps were made during an earlier version of the distribution. »i386« indicates that the program will run on all Intel based systems (all computers with 80386, 80486, Pentium I/Pro/II/III or compatible processor, including AMD and Cyrix etc.). If you find several versions of a program you’re interested in but each has a different ending like i386.rpm, i486.rpm or i586.rpm, select the one that best suits your computer. Packages which were compiled for Pentium (586) processors are better optimised than 386 packages since they use additional commands which the 80386 lacks. Source RPM packages have the abbreviation »src« in the name instead of the platform designation - the source package for the above RPM package might therefore be called: kpackage 1.3.10.src.rpm Source RPM packages don’t need a platform indication because under Linux the source code is – in general – not specific to a hardware platform. It is the process of compiling to a binary file, translating the source code into machine language, which makes a program platform-specific.
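Because the name-version-release.arch.rpm convention is so regular, the pieces can be split off with nothing more than shell parameter expansion. A sketch, using the article's example filename (with the hyphens real package files carry):

```shell
# Pick apart an RPM filename of the form name-version-release.arch.rpm
# (pure string handling; no rpm installation needed).
pkg="kpackage-1.3.10-3.i386.rpm"
arch="${pkg%.rpm}"; arch="${arch##*.}"   # i386
rest="${pkg%.$arch.rpm}"                 # kpackage-1.3.10-3
release="${rest##*-}"                    # 3
name_ver="${rest%-$release}"             # kpackage-1.3.10
version="${name_ver##*-}"                # 1.3.10
name="${name_ver%-$version}"             # kpackage
echo "$name $version $release $arch"     # prints: kpackage 1.3.10 3 i386
```

The same splitting logic explains why rpm's -q and -e options want only the name part: the version, release and architecture are looked up in the RPM database.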

deb packages

In addition to RPM there is another popular package format: the Debian format. Debian packages end in .deb and are used by the new Corel Linux distribution as well as Debian's own distribution. Owing to the relatively limited use of Debian – because Debian is one of the less user-friendly distributions – we will dispense with a detailed description. It is worth noting that Debian and RPM packages can be converted into one another with the aid of the program alien, though this has a variable success rate. If possible you should choose a package type that suits your system.

tar.gz archives

Files that end in .tar.gz are broadly equivalent to zip archives under Windows. But while zip archives frequently compress the contents of a whole folder and its subfolders in one go, this is a two-step procedure with Linux. First, a tar archive is created which contains the folder hierarchy. The files are not compressed until the second step, in which the program gzip is used to pack the tar archive. This two-stage process explains the double file extension (i.e. package.tar.gz). Program packages in tar.gz format usually contain the program source. For installation to take place they must be unpacked, configured, compiled (turned into a binary program) and finally copied to the correct place in the Linux folder hierarchy. We'll describe in more detail how this happens later on. Occasionally you will find tar.gz packages which contain compiled program files, but it's pretty unusual.
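The two steps can be reproduced by hand on a throwaway folder. All the paths and filenames here are invented for the demonstration; the final tar call shows that one command reverses both steps at once:

```shell
# Step-by-step construction of a .tar.gz, then unpacking it again
mkdir -p /tmp/tgzdemo/pkg /tmp/tgzdemo/out
echo "hello" > /tmp/tgzdemo/pkg/file.txt
cd /tmp/tgzdemo
tar cf pkg.tar pkg          # step 1: bundle the folder hierarchy
gzip -f pkg.tar             # step 2: compress it, giving pkg.tar.gz
tar xzf pkg.tar.gz -C out   # tar's z option undoes both steps in one go
```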

tar.bz2 archives These are a variant of tar.gz archives. After tar has run the compression program bzip2 is used which achieves a higher compression rate than the older gzip program. However, the archive doesn’t differ significantly from the tar.gz archive apart from the fact that a different command is necessary to unpack it. Now that the overview is complete we can discuss the installation procedures. We’ll start with the simplest variant: the installation of RPM archives.

Installing RPMs

Once you've found a binary RPM package that suits your Linux distribution you can install it via the console using the rpm command or via a graphical front-end like kpackage under KDE or gnorpm under GNOME. Some distributions supply further tools which fulfil the same purpose, such as SuSE's YaST, easyLinux's eProfile or Mandrake's rpmdrake. If your distribution has its own installer it is better to use it, as it may provide extra benefits like automatic menu configuration.

kpackage In order to install packages you must have administrator rights. Open a console window and type the


command su. You will be asked for the password of the administrator root. Once this is completed you can start the package manager by typing kpackage. Some distributions allow you to call up kpackage using the K menu without becoming root beforehand. When the program starts a window opens into which you must type the root password (Fig. 1). When it starts, kpackage first reads the information in the RPM database which tells it about the program packages that are already installed. It then displays this in a tree view (Fig. 2). Each package you've installed should be present in this hierarchy – for example, you would find the editor emacs under RPM/Applications/Editors. In order to install a new package select the menu item File, Open. The usual Open dialog should appear and you will be able to select the package you want to install. kpackage will now display information on the package in the right half of the screen. Under the Properties tab you will find a brief description which amongst other things displays the name and version (Fig. 3). Clicking on the tab File List should give you a listing of all the files which will be created during installation. You can find out from this what folders the files will be put in. On the left you will find five check boxes: Upgrade, Replace Files, Replace Packages, Check Dependencies and Test. These have the following meanings:

• Upgrade: If you want to install a program that already exists on your system in an older version you must check this box to carry out an update. The older version is automatically uninstalled. If Upgrade is not checked and an older version is present, the installation will halt with an error.

• Replace Files: rpm keeps an eye open for files that already exist being overwritten by the installation. If so it halts with an error. This can happen when two packages place the same configuration file in the folder /etc, for example. If you mark this field, a package is installed even if it means overwriting files that already exist.

• Replace Packages: This option is similar to Upgrade: a package is installed when it is already present in

Fig. 1: When starting kpackage as a normal user you are prompted for the root password.

[below left] Fig. 2: In the tree view you will find all the RPM packages already installed. [below] Fig. 3: Package properties using kpackage



[top left] Fig. 4: A red mark at top left means that this window has administrator rights [top right] Fig. 5: gnorpm first displays all installed files

Fig. 6: Clicking on Install starts the whole thing going

Fig. 7: Package installation and querying using a terminal window


another version but the old package is preserved. This might be useful if perhaps you want to install two versions of a particular library file.

• Check Dependencies: As already mentioned, RPM packages “know” which extra packages are required by the one you are installing. An example of this is the KDE base package kdebase, whose programs can only run if qt and kdelibs are also installed. If a required package is missing you will receive an error message. If you know for sure that all necessary files are present, you can uncheck this option. You might do this if, for example, you want to install a package designed for a foreign Linux distribution and you know that the required package has a different name and is not being found despite being present.

• Test: This is simple enough: it checks whether the package can be installed without difficulty. Placing a check in this field means that despite going through the motions, no files will actually be installed.

After making any changes to these settings click on Install to install the selected package. Cancel will return you to the tree view. Instead of selecting a package using File, Open you can also drag it from a kfm window to the kpackage window. If kpackage is not yet open you can start kpackage from kfm by clicking on the rpm archive. This needs you to be user root, however. There is a way to call up kpackage with the necessary rights from the kfm window: select the menu option System, KFM file manager (Super User Mode). After entering the password a kfm window is opened in which you have administrator rights: you can tell because of the red mark at the top left of the kfm window. If you click on an rpm archive in this window, kpackage is started without a password needing to be entered.

gnorpm Gnome has a similar program: gnorpm. This is superior to Red Hat's own tool glint and has the benefit of being available in all Linux distributions.

In principle, the same thing is done here as with kpackage. Start the program as the administrator root (type »su« and enter the administrator password). You’ll also have to open a gmc (GNOME Midnight Commander) file manager window and drag the RPM package from it to the gnorpm window. This should open a new window, Install, in which the package is displayed (Fig 6). By clicking on Queries you can obtain more detailed information (the current gnorpm version 0.9, however, always crashed when we tried it.) A click on Install starts the whole thing running. gnorpm also checks for package conflicts, and, if applicable, pops up a warning dialog box in which you can alter the installation settings.

Installing RPMs by hand

In addition to using a graphical front end, which simply calls the rpm program, it is also possible to use rpm manually from a console. If you are not afraid of a hands-on approach you will find this method more efficient. All you need before starting is the precise filename of the RPM, including its path (if the archive is not in the current folder). You must then become the administrator root using su. Once you have done this, enter the command:

rpm -Uvh path/package-1.2.3-1.i386.rpm

It's as simple as that! During installation a progress meter is displayed showing how much of the work is completed. rpm can work on several packages at the same time, using a command like:

rpm -Uvh download/*.rpm

You can find out which version of a package is installed by typing rpm -q packagename (see Fig. 7). You will find a list of the options that can be used in Table 1. When using the options -q and -e for querying or erasing, only the package name (without the version number) needs to be indicated, i.e. not rpm -e package-1.2.3.rpm but simply rpm -e package.
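The wildcard form can be previewed before anything is touched. A dry-run sketch that only prints the commands (the directory and package names are made up; drop the echo and run as root to install for real):

```shell
# Print the rpm command for every package in a download folder
dir=/tmp/rpmdemo
mkdir -p "$dir"
touch "$dir/apkg-1.0-1.i386.rpm" "$dir/bpkg-2.0-1.i386.rpm"  # stand-ins
for pkg in "$dir"/*.rpm; do
  echo rpm -Uvh "$pkg"
done
```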



Fig. 8: ./configure : the first step to a finished program

Compiling source files

Now we come to a more complicated way to install software – from the programmer's source files. Here the program exists in its basic form as a source code archive. Before anything else happens it must be unpacked to a suitable place: /usr/local/src/ is usually the default. In order to do this you must become the administrator root. You can then unpack the program archive. Archives which end in .tar.gz or .tgz are unpacked using:

tar xzf path/package.tar.gz

and packages which end in .tar.bz2 are unpacked using

tar xIf path/package.tar.bz2

(Note: between »x« and »f« in the command above is a capital »I«; newer versions of tar also accept »j« here.) This command creates a new subdirectory with the name of the source code archive. You must now change to this directory. There then follows what some call the classic installation triple step: »configure/make/make install«:

[root@dual myprog-1.1.0]# ./configure
...
[root@dual myprog-1.1.0]# make
...
[root@dual myprog-1.1.0]# make install

All three commands will cause your screen to be filled with many system messages. What do they all mean? Well, the first step ./configure (which must be typed with a dot and slash before the word »configure«) starts a shell script in the current folder. This script has been created by the programmer and looks around your Linux system. It checks what operating system and what version you are using (frequently the same source text archive can be used on other Unix variants), which compiler is installed

(under Linux it’s usually GNU C), and whether all the necessary program libraries exist in sufficiently upto-date versions. If everything appears satisfactory the script produces a makefile (Fig. 8). You need the makefile for the next two steps. When you run the program make (which must also be on your hard disk, of course), it processes the freshly created makefile which itself contains a recipe-like listing of what must happen – and in what sequence – in order to create a finished program. ./configure and make can take quite a long time to run depending on the size of the program. Finally, by typing make install, all created files will be copied to the correct places on your system. Programs themselves usually end up in /usr/bin or /usr/local/bin, help pages (man pages) in /usr/man or /usr/local/man, configuration files in /etc and so on. Once this is all done, the installation is finished. Try running the newly-installed program. If it works properly you can delete the folder from which you carried out the compilation. If after unpacking a source code package you find there is no configure file, examine the other files in the folder. Usually, you will find a COMPILE or README file in which the procedure for installing the program is described. ■ Table 1: rpm options i Install (no update) U Update v »verbose« (i.e. detailed) — displays package name H displays progress meter q »query«; enquires whether a package is installed e »erase«: deletes a package nodeps ignore dependencies (i.e. install even if necessary packages are missing) force force installation in the case of conflicts




Four Download Managers

SURFING ASSISTANTS The Internet offers an inexhaustible pool of software for Linux enthusiasts. Hundreds of programs are available to be downloaded free of charge. A download manager helps you organise your downloads and make the most of your online time. We look at four download managers for Linux.

[left] Caitoo with full-screen display and minimised “Drop Target” icon [right] KDE online help and Caitoo docked in the KDE control panel

Have you ever stayed online an extra half hour so that you could finish downloading a huge file, only to end up frustrated because the connection failed at 99% complete? Or do you get annoyed by the many windows that your browser produces during downloads that stop you from seeing how much of the file you’ve got so far? If you would like to make your downloads easier, better organised and more efficient, one of the download managers discussed below may be for you. They include two programs – KWebGet and WebDownloader – that are suitable for what is sometimes called “recursive downloading” – the mirroring or duplicating of a whole or partial website on your PC.


Caitoo

Caitoo is being developed as part of the KDE project. The current version, 0.6.6, is already proving very useful and the most important functions that any download manager should provide have been implemented. The program can be integrated into the KDE panel as a "Docked Widget" (in the shape of an extended hand), which lets other KDE applications or Netscape download files using drag and drop. Alternatively, users can download files via URLs that have been copied to the clipboard – simply use the menu option File, Transfer open, which retrieves the URL from the clipboard.


Caitoo recognises three different transfer modes: in order, according to schedule and delayed. If in order is selected, Caitoo first checks whether the maximum number of transfers has been exceeded. The transfer begins as soon as an opportunity becomes available. The schedule option allows the user to determine the date and time of the download. There’s an option to close the connection (and, if required, Caitoo itself) once the download has been completed which is also very useful. The automation index tab in the Settings dialog box contains a range of options for closing the connection after particular events. In later versions, the authors are planning, among other things, to implement features that are currently lacking such as the ability to search FTP sites and a bandwidth limit for downloads. Ignoring a few minor errors in the settings menu, Caitoo has already proved itself useful thanks mainly to its intuitive usability and the smooth integration of the most important functions. But the development team are still working on it – it could be worth taking an occasional look at the project’s homepage.


noting. These allow users to pre-select precisely which file types they wish to download from which domains under which conditions. In addition to HTTP and FTP registration forms and the usual proxy settings, a powerful scheduler has been built-in so that users can set to the second the time when downloading is to start. All in all KWebGet seems to be very well thought out and implemented. It is ideal for mirroring local websites. However, as a conventional download manager it is only partly useful.
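Since KWebGet is a front-end for wget, a mirroring job of the kind it sets up ultimately boils down to a wget command line like the one printed by this dry-run sketch. The flags are standard wget options; the depth and URL are placeholders, and the echo keeps it from actually downloading anything:

```shell
# Build (but don't run) the wget call behind a recursive mirror job
mirror_cmd() {
  echo wget --recursive --level="$1" --convert-links --no-parent "$2"
}
mirror_cmd 2 http://example.com/docs/
# prints: wget --recursive --level=2 --convert-links --no-parent http://example.com/docs/
```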

[left] Direct URL insert in GTransferManager [right] A lack of functions relegates GTransferManager to the sidelines

GTransferManager

GTransferManager builds on GNOME's gtk library and attracts attention with its name alone. Unfortunately – compared to the other download managers here – it's lacking much functionality. As the project currently stands, it lacks both important download options such as Resume, and expected management functions such as scheduling or logging. The only interesting feature is the CORBA interface, which it may be difficult for ordinary users to get much out of. In its current stage of development GTransferManager is only partly useful as a download manager. We'll have to wait and see how the project develops over time.

WebDownloader for X

This project by the Russian developer Koshelev Maxim has been under way for some time. At first glance, it doesn't appear to be too spectacular. However, it's when you click on the unprepossessing Settings, General menu that you realise what the program is about – we were amazed by the enormous range of functions on offer. WebDownloader has probably every function you can imagine in the context of downloading. In terms of transfer limits – maximum number of simultaneous transfers, maximum bandwidth/speed, host limits – this download manager offers considerably more than the other Internet tools here. In addition, WebDownloader is similar to KWebGet in that it caters for recursive

KWebGet – A skimpy online guide but clear and largely self-explanatory program functions

KWebGet opens with a nice wizard that allows you to easily fill in the settings required for the download.

KWebGet In KWebGet, KDE developer Frank von Daak has essentially created a graphical front-end for the powerful Unix command line program wget. Rather than being just a download manager in the strict sense of the word, KWebGet is a program with which users can mirror whole websites on their hard disk (a process called “recursive downloading”) so that they can read them offline at their leisure. In principle however, KWebGet can also be used to download individual files. The author has gone to great lengths to give users a clear overview of the available options. In addition to the ergonomic structure of the program, von Daak has written a wizard which uses dialogs to take users through the most important steps prior to the download. The options, file types and domains boxes in particular are worth



[left] WebDownloader offers an overwhelming range of functions and an attractive appearance [right] URL transfer to the dragand-drop recycle bin (blue circle, top left-hand corner) and host limits on WebDownloader

Table: Overview Name Author Licence Homepage General Manual Requirement

Download behaviour Direct insert URL Dock Widget Drop Target Drag and drop from KD applications Drag and drop from Netscape automatically inserted in temporary folder Transfer options Resume/Reget function Limit maxim. open connection Limit DL speed Partial Download Time planer Automatically separates after download Automatically closes after download Other Features Recursive Download FTP Search Proxy settings Logging Special features

Evaluation Suitability in practice

downloads (you can set parameters for the desired recursion depth) and you can give it a schedule for each individual download request. WebDownloader combines the wealth of functions of Caitoo and KWebGet. It's just a pity that it can't be integrated as easily as Caitoo into the KDE control panel. But fortunately you get a drag-and-drop recycle bin, to which you can transfer URLs using drag and drop. Of course, there is also an option to transfer download URLs to the tool via the clipboard. Although WebDownloader lacks extras such as FTP search it is certainly the best under Linux in terms of functions – at least as far as Open Source goes. Although it is slightly inferior to the KDE project Caitoo in terms of usability, the wide variety of functions makes up for this shortcoming. After all, don't forget that not all users work with KDE.

Caitoo – version 0.6.6, by Matej Koss, GPL. Manual: KDE manual. Requires: KDE 1.1, QT 1.42.

GTransferManager – version 0.4.4, by Bruno Pires Marinho, GPL. Manual: in production. Requires: GNOME, gtk 1.2.x.

KWebGet – version 0.5, by Frank von Daak, GPL. Manual: short overview. Requires: KDE 1.x, QT 1.4.x, wget.

WebDownloader for X – version 1.16.1, by Koshelev Maxim, Open Source. Manual: FAQ. Requires: gtk > 1.2.0, wget.

Conclusion

Under Linux, WebDownloader for X and Caitoo are two free graphical download managers offering almost the same variety of functions as commercial Windows programs such as Getright or GoZilla!. So there's no reason not to indulge in download sprees under Linux in future. Indeed, in view of the fact that Linux is far more secure than Windows, not to mention the immense range of software available on the Web at any time, Linux is ideal as a download platform. ■

x x x x x x

x x -

x x -

x x x x x

x x x x x

x -

x -

x x x x x x

x x

x -

x x x

x x x

Dock Widget in KDE control panel

CORBA support

Frontend to wget Mirrors whole sites Domain limit Details of recursion depth Wizard Selection of file types

Buttons for different speed limits Domain limit Details of recursion depth Mirrors whole sites

Very good – Good


Good – Satisfactory

Very good – good




An astronomical ephemeris for Linux


XEphem is an interactive astronomical ephemeris or planetarium that has been developed over a period of ten years by Elwood Charles Downey. Originally written for Unix systems, it runs under Linux too. And it's one of the best programs of its type you can get on any platform. If you have any interest in astronomy at all, it's well worth trying. The program comes in two versions. There's a free version in source code form or a ready-to-run CD-ROM with printed manual for $69.95 (plus $12 air mail). The CD-ROM holds 240MB of data including catalogues of deep sky objects, asteroids and the Hubble Guide Star Catalogue of stars down to magnitude 15. It would take a day or so to download all this data, making the CD-ROM an attractive buy. However, the free version is fine for casual stargazers or as a way of trying the program out before buying it. Pre-built binaries for Intel, Sun Sparc and PowerPC systems can be found on the Web for those who aren't happy about compiling the program from source code. XEphem starts up with a dialog (Fig. 1) that lets you set the date, time and location. The start-up location is fixed and can't be changed – at least, not in the free version – unless you change it in the source code. It's quite easy to change the settings interactively, though. If you aren't sure about a setting you just point at it and a tool-tip pops up with some more explanatory information. The calendar usefully displays the dates of new moon and full moon to help you pick the best times for observing.

Fig 1: The main dialog allows you to set the date, time and location for observations.

Whether you’re a casual stargazer or a serious amateur astronomer, you’ll get a lot of enjoyment from this charting program.




[left] Fig2: The Sky View shows the night sky for the chosen location at the selected time and date, as if in a planetarium.

[right] Fig3: Digitised Sky Survey images for an area of interest can be downloaded from the Internet and displayed.

[left] Fig4: A view of the Solar System can be animated to show the movement of the planets.

[right] Fig5: The Moon view shows the exact phase, with Apollo landing sites labelled.

The main feature of the program is the Sky View. When you select this from the main menu, XEphem displays a 360 degree planetarium view of the sky as seen from the chosen location at the time selected if you were lying on the ground with your feet pointed south looking straight up (Fig. 2). Using the scroll bars you can change both the angle of view and the direction. Left-clicking on the map causes information such as the right ascension and declination, altitude, azimuth and constellation containing the selected point to be displayed. Rightclicking on a star or other celestial object causes a pop-up menu to appear showing all this information plus a description of the object, its size and magnitude, rising and setting time and a lot more. From this pop-up menu you can centre the display on the selected point or centre and zoom to that location to see more detail. Options on the Sky View menu let you control the appearance of the display. You can choose whether constellation lines or boundaries are shown, the proportion of objects that should be labelled, the magnitude scale and much more. You can flip the image left to right or top to bottom so that it matches the view seen in an astronomical

telescope. You can create a list of all the objects shown in the current view and you can print out a map. XEphem is better than a printed star atlas. The free version of XEphem is limited in the number of objects it displays by what most people would consider a reasonable size of file to download. But the program isn't restricted to using the star catalogues it comes with. Perl scripts are provided to convert various astronomical database files available on the Internet into XEphem's own format and you can set paths to the directories where these databases are held. XEphem can also read Hubble Guide Star Catalogue data from various sources. A C program is included that can convert GSC data from CD-ROM to a more compact form for permanent hard disk storage. XEphem is supposed to be able to read GSC CDs directly (though it didn't like my rather old set for some reason). If you don't have the GSC CDs XEphem will pick up GSC data for the area covered by the map from an Internet source. GSC data downloaded from the Internet is cached so it can be reused later. XEphem also has the ability to display Digitised Sky Survey images. Armchair astronomers will love this feature! To use it you just zoom in on an interesting part of the sky and select Image from the Control menu. This will bring up the Sky FITS dialog. Digitised images in FITS format are available online from both the Space Telescope Science Institute (STScI) and the European Southern Observatory (ESO). From the UK, the ESO is nearer, so you just click the ESO button and XEphem connects to the Internet and downloads an image which is then displayed in the Sky View window. The results can be stunning (Fig. 3). Again, the FITS files you download can be saved to hard disk for viewing whenever you want them. If you are interested in the objects in the Solar System, XEphem has even more to offer. The Solar System view displays an orrery which you can view from any angle and animate to show the relative motion of the planets (Fig. 4). The Moon view shows an image of the moon, shaded to show the exact phase. Points of interest such as the Apollo landing sites are marked, and you can find out the names of lunar features by clicking on them (Fig. 5). There's a similar option for Mars showing an image of the red planet. The views of Jupiter, Saturn and Uranus don't attempt to show any surface detail, although a red dot on Jupiter (Fig. 6) shows where

[left] Fig6: A view of Jupiter showing the Red Spot, with two moons in transit.

[right] Fig7: Cloud cover, sea temperature and synoptic data are plotted on a map of the world.

[left] The M101 Spiral Galaxy, from the Digitised Sky Survey

[right] The Mars view displays a digitised image of the red planet.

the Red Spot should be at that particular time. The major planets’ moons are shown in their correct positions and the display can be animated which is a handy tool for finding the times of transits and occultations. The Earth view probably won’t be of much interest to astronomers. It shows a map of the Earth in either a cylindrical or spherical projection. Various locations can be shown, and identified by rightclicking on them, and the part of the Earth that is in daylight can be highlighted. Perhaps most interesting is the weather map display (Fig. 7). When you select it, an Internet connection is started and the program downloads up-to-date cloud cover, sea surface temperature and synoptic weather data and displays it on a map of the world. XEphem can be used to control a telescope. If this option is selected you can click on the Sky View with the mouse and the program will send the coordinates to a remote process via a fifo. This should make it relatively easy to interface the program to any telescope control system. Whether you are a dedicated amateur observer, a casual stargazer or an armchair astronomer, you will find XEphem to be a program with much to offer. ■
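The fifo handover used for telescope control can be sketched like this. The fifo path and the coordinate format are invented for the illustration, as the article doesn't specify XEphem's actual message format:

```shell
# One process plays XEphem writing coordinates to a named pipe,
# another plays the telescope driver reading them.
fifo=/tmp/scope_fifo
rm -f "$fifo"
mkfifo "$fifo"
( echo "RA 10:08:22 Dec +11:58" > "$fifo" ) &   # the "XEphem" side
read -r coords < "$fifo"                        # the "driver" side
echo "$coords"                                  # prints: RA 10:08:22 Dec +11:58
```

Because opening a fifo blocks until both ends are attached, the writer and reader rendezvous automatically, which is what makes this such a simple interfacing mechanism.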

Links
Clear Sky Institute: http://www.clearskyinstitute.com/xephem/
Craig Kulesa’s web page:
Bob Brose (N0QBJ) home page: ose/xephem.html ■

10 · 2000 LINUX MAGAZINE 107



Distributed file sharing


Recently there has been a lot of uproar about Napster, a program that allows users to exchange MP3 files and, with the aid of Wrapster, other types of file as well. At the end of July a US judge ruled that Napster should be shut down, a decision that was overturned on appeal a few days later. No doubt there will have been further twists in the story by the time you read this. But whatever the final outcome, the fact remains that Napster isn’t the only program or service of its kind. There are some interesting successors. Björn Ganslandt tunes in.


Gnutella was originally developed by Nullsoft which has now been part of AOL for some time, and when AOL merged with Time Warner the development of Gnutella was officially stopped. Since Time Warner sells CDs, the reason behind this decision seems fairly clear. However, despite this there are betas of the Windows program to be found online, not to mention many clones of Gnutella. Some of these are for Linux. What distinguishes Gnutella from Napster is that there is no central host via which everything is handled. Instead, programs pass search requests to each other to find out who has the required file. Thus all Gnutella programs are both server and client. This approach means that the system continues to exist even if a program fails. Interesting files are quickly acquired by many different users, providing many points from which they can be downloaded by others. The disadvantage of this system is that you need the address of another Gnutella user in order to take part in the network. Such an address can be obtained from A further problem is that Gnutella is not completely anonymous. It is possible with no great difficulty to log the IP addresses of all users who download a file from the local Gnutella node. Gnutella was originally designed by Nullsoft for use by relatively small groups; therefore it is rather slow in the opinion of many users.


A good Gnutella implementation for Linux (and Win32 too) is Gnut. Gnut is relatively mature but to date still has no stable graphical interface. The “help” command displays a list of available commands. Gnut is configured using the file ~/.gnutrc. In the file ~/.gnut_hosts the addresses of all known hosts are stored. Gnut can display all search inquiries received using the command “monitor”. When searching with Gnut and all other Gnutella programs you must avoid using wildcards (*, ?) as they are not recognised by all implementations. It’s best to search only for keywords. Gnut can be downloaded from A nicer program to use is Gtk-gnutella. It emulates the look and feel of the original Gnutella and runs with no problems. The interface is fairly intuitive: IP addresses are entered in the format “Address:Port” in the “Add” field. Both binaries and source code for Gtk-gnutella can be obtained from As the name suggests, you need GTK+ for Gtk-gnutella to run. Another Gnutella implementation having a graphical interface is Gnubile. With Gnubile you must enter all IP addresses using the menu File, Preferences in the format “Address Port”. Apart from this the program is also self-explanatory. A requirement for using Gnubile is GNOME. The program can be obtained from



If you would only like to sniff around in Gnutella there are some Gnutella search engines on the web. For a start, try the addresses page/nut.php3 and It’s also worth visiting if you’re looking for more clients or for information on Gnutella.

Jungle Monkey Jungle Monkey is relatively new: consequently, hardly any files can be found with it. Despite this, the system is interesting. Jungle Monkey provides several channels in which files can be offered. Each user can open these channels, exchange files and chat with other users. Jungle Monkey differs from other programs like Gnutella in that it uses multicasting (end-to-end multicasting to be precise). First, Jungle Monkey opens a “root channel” which carries information about all the other channels. As you cannot get information about the other channels without this root channel, Jungle Monkey has the same weakness as a centralized system like Napster. It is possible to open channels without having the root channel, but only if you have a note of the addresses. In the channels themselves you can then download files. However, it can be a while before all files are displayed. Information about which files are available in the channel is only sent about once every ten seconds. If no other root channel is selected Jungle Monkey opens the standard channel on start-up. Jungle Monkey exists in both a GTK and in a GNOME version, the GNOME version being recommended. In addition, there is a command line version called jmlite and a command line search server named jmsearchserver. Information about Jungle Monkey as well as source code and binaries can be found at (If this page cannot be found, try

Freenet Freenet is an attempt to create a network in which information can be anonymously procured and made available. Just like Gnutella, Freenet is a distributed and decentralized system. The system consists of a network of nodes. From these nodes you can download files by sending a key to a node. Keys have a format such as “text/philosophy/sun_tzu/art_of_war”. In this system it does not matter where a file is physically stored. If the first node does not have the file, it simply asks the next one. The whole thing functions in a similar way to Gnutella, except that Freenet has been designed for larger networks and the protocol is therefore rather more complicated. The current version of Freenet still doesn’t provide any anonymity or encryption but the next version should be relatively secure. At the moment there are still some problems with the architecture of Freenet. Thus, for example, it is still not possible

[above] Figure 1: Gtk-Gnutella [left] Figure 2: Jungle Monkey

to update or efficiently search through documents. However, these problems should be eliminated in future versions. You can obtain more information on the protocol and the philosophy behind Freenet, as well as the program itself, from the Freenet home page. Freenet is written in Java, and is therefore readily portable to other platforms. However, there are also clients in Perl and C. The Perl client called “Liberator” is part of the Freenet software; FreeClient, the C Client, can be found at (this page is not constantly online.) You can also get limited access to Freenet via the Web, although of course the access is then not anonymous. For the Java version of Freenet you will need a Java implementation installed on your system.

Multicasting: Using multicasting you can send a packet of data from one source to many targets at the same time. This saves bandwidth compared to the more commonly used unicasting. ■

The future There are some good implementations of Gnutella with which you can find a vast range of different files. However, for the future Freenet opens up some interesting possibilities. Take a look at it! ■



Installing Mandrake 7.1

LINUX MANDRAKE 7.1 Mandrake 7.1 is the latest release of what is becoming one of the most popular Linux distributions. Reviewed elsewhere in this issue, the GPL version is on your cover CD so you have no excuse for not trying it for yourself.

Reiser file system: The Reiser file system was developed by the file system specialist Hans Reiser and enables rapid recovery after a system crash by means of accurate logging of open files. The data is also packed more efficiently onto the hard disk so that small file fragments no longer waste as much space. Previously, it was possible to store everything except the boot directory in the Reiser file system because previous boot managers could not read the kernel from a Reiser partition. Mandrake provides a boot manager that can handle this. BIOS (Basic Input Output System): The BIOS enables the computer to communicate with its peripherals. After the BIOS has started, it loads the operating system from the storage medium. Under normal circumstances, this is the hard disk, but it is also possible to boot from floppy disk and, in the case of most modern computers, from CD-ROM. From the BIOS setup screen you can specify whether the diskette, the hard disk or CD-ROM drive is to be searched first for a bootable system. ■

Packages on the CD (Technical Support see page 113) If you are already using another distribution and are happy with it you may not want to install Mandrake 7.1. In that case you may still find many useful packages on the CD. The packages that make up the Mandrake distribution are in binary RPM format and should run with most other distributions based on recent versions of Red Hat and other derivatives.
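If you do take individual packages from the CD, installing one by hand is a single rpm command. This is a hedged sketch for an RPM-based system; the mount point and package file name are assumptions made for illustration, not the CD’s actual layout.

```shell
# Install one package straight from the mounted cover CD.
mount /mnt/cdrom
rpm -ivh /mnt/cdrom/Mandrake/RPMS/xephem-*.rpm   # -i install, -v verbose, -h print hash marks
rpm -q xephem                                    # confirm the package is now registered
umount /mnt/cdrom
```

If rpm reports missing dependencies, install the packages it names from the CD first.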


This version of Mandrake is fully functional. It supports virtually all current hardware and includes the kernel 2.2.15 which has been optimised for Pentium systems. As an extra treat it offers the option of using the Reiser file system for all partitions. In addition, Mandrake 7.1 supports Universal Serial Bus (USB) devices and mice with a centre wheel can now be used in most applications. XFree86 version 4.0 is included which should enable many 3D graphics cards to run. However, you may find a few problems (such as with the NVidia Riva TNT 2.) If this is the case you will probably need to look for updates on the Internet.

Preparing for installation If your computer is currently running Windows you will probably have to start by making space on the hard disk for Linux. The program fips.exe, provided to shrink DOS partitions, used to be a standard part of many distributions. Mandrake instead provides its own alternative, DiskDrake. It’s always a good thing to back up important files before using any utility like this, so do so before even reaching for the Linux disc. In order to run Linux properly you should make at least 500MB available for the installation – but more would of course be better still. It doesn’t matter whether you want to start the Linux installation from the CD or using a diskette, as


long as the boot sequence of your computer is set up correctly. To check this and if necessary to change it you must take a quick look at the BIOS setup program. To do this, immediately after starting the computer press the [Delete] or [F2] key; this should take you to the BIOS settings. Virtually every BIOS menu looks different so it is impossible at this stage to define a general procedure. However, only ever alter something if you know what it does! The potential for damage is high if you play with BIOS settings you don’t understand. As an alternative you can start the installation of Linux from another operating system. Later on we will describe the method used to start installation from Windows or DOS.

Installation direct from CD If your computer is set up properly you just have to put the CD in the drive. On starting up you will see the Mandrake welcome screen. You can obtain further information about the installation by pressing the [F1] key or just carry straight on with installation by pressing the [Enter] key.

Starting with a boot diskette If your computer is unable to boot from a CD-ROM you must create a boot diskette under Windows. This is done as follows. Switch to the \dosutils directory on the CD and start rawwritewin.exe. Put a blank disk into the drive. Click on the Image file button in the lower section of the Rawwrite dialog and navigate to the \images directory on the CD. There you will find five files with the extension .img. All image files can be written byte-by-byte to a data medium such as a floppy disk using the command Write. The image we’re interested in is cdrom.img which will let us install from the cover CD. If you require PCMCIA support for the installation (if you’re using a laptop), you should choose pcmcia.img instead. You should only select the file hd.img if you have copied the data from the CD-ROM to a local hard disk before starting. The file rescue.img is of no use during the installation. Next click on the correct drive letter in the Floppy Drive field (in most cases this is A:.) Then start the procedure by clicking Write. Once the program has finished, start the Linux installation by leaving both the CD and the diskette in their respective drives and restarting your computer.


Fig. 1: Installation under Windows 98

If you’re using DOS, the installation procedure should start straight away. In the case of Windows a starting screen is displayed. Select Complete installation if you’ve already provided a separate partition for Linux. Otherwise you can install Linux in two files under Windows by selecting the Linux for Windows Installation option. We will look at this option in greater detail later.

Everything in one go After the first welcome screen you will be asked to specify the language. After this, you can choose between three installation classes: automatic, user-defined or expert mode. The automatic installation automatically shrinks your Windows partition (if one is present), creates new Linux partitions and installs a selection of the most useful packages. Until the program gets to network installation and printer configuration your intervention is not needed so you can go off and have a cup of tea. Questions about the network and printer, when you come to them, are explained in detail so you shouldn’t have any trouble here. You will be

DiskDrake offers you the possibility of automatically creating partitions for Linux. However, if you have only a limited amount of space for your whole Linux system the default setting is not ideal (you get a very small root (/) partition along with equally sized /usr and /home partitions.) ■

Fig. 2: Choosing the installation class

Starting installation from Windows or DOS Under Windows 95/98 look on the CD for the directory called dosutils and then start the file autorun.exe. If you are using DOS as your operating system look for the file autoboot.bat on the CD in the \dosutils\autoboot\ directory.



Root user: The root user is the user you log in as to perform system administration. The root user is all-powerful and can read and write to any file on the system. Because of this, it is dangerous to run as root unless you need to. When using your Linux system normally you should log in as an ordinary user. Boot manager: A boot manager is a small program which loads the actual operating system. It is usually located in either the Master Boot Record (MBR) on the hard disk – in other words the main boot sector (which is called by the BIOS during startup) – or in the boot sector of the Linux partition. Swap partition: The Swap partition is where the operating system stores data from main memory that is not urgently required – this happens if the main memory is almost full. It’s similar to the swap file in Windows but it is more efficient by virtue of using a dedicated partition. ■

requested to enter a password for the root user and are then given the option of creating further users. You should create at least one ordinary user for yourself. After that you will have to set up the graphics card, monitor and display resolution, after which you have finished.

Linux and Windows co-existing After rebooting, the Boot manager grub should start and offer you a choice between Linux, Windows, a safe start with a minimum configuration of Linux, or starting from a diskette. An alternative and more traditional boot manager is lilo. This does the job just as well but does not offer you a graphical menu for the various boot options. Instead the boot: prompt appears and you must type in the name of the operating system you want to boot (i.e. “linux” or “windows”.) After boot-up has been completed the operating system switches into graphical mode and displays a log-in dialog with a selection of penguin icons for the user. To log in, click on one of these icons and enter the appropriate password.
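For readers who prefer lilo, a minimal dual-boot configuration looks roughly like this. It is a sketch only: the device names are examples, and the file Mandrake actually writes will differ.

```
boot=/dev/hda           # install the boot manager in the MBR
prompt                  # show the boot: prompt
timeout=50              # wait 5 seconds before booting the default entry
image=/boot/vmlinuz     # the Linux kernel
    label=linux
    root=/dev/hda5
    read-only
other=/dev/hda1         # chain-load the Windows partition
    label=windows
```

After editing /etc/lilo.conf, run /sbin/lilo as root to make the changes take effect.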

Three star menu The user-defined installation and expert modes give you more options for configuring your system during setup. Be careful about choosing these because you have to know exactly what you are doing. The menudriven installation procedure is an improvement over many earlier Linux distributions. On the left-hand side you will see the individual options for installation (don’t be frightened by the quantity – they’re carried out very quickly.) Beside each option is a star. This shows you the current status. A red star means that the option is still waiting to be completed. A yellow star indicates the option is being carried out. Completed options are given a green star. Should you wish to go back to any point in the installation, perhaps because you want to change something, all you have to do is click on the relevant item in the list.

Fig. 3: Partitioning with DiskDrake

Development or server At the start of the user-controlled installation you will be asked what your computer’s purpose in life will be. At this point, you should click Development, even if you have never written a program in your life and have no intention of ever doing so. This will save you having to install packages needed to compile programs that are distributed in source code form later on. The Server option makes available a large number of services which the ordinary user will not need.

File system Again, in the case of a user-defined installation Mandrake asks you to specify the size of the Swap partition. Linux uses the Ext2 file system as standard for the other partitions. As mentioned above, when choosing a file system you also have the option of switching to the Reiser file system. Mandrake includes the DiskDrake tool. You can use it to delete, create and modify partitions. The user interface of the program is simple. Disk space is shown in the form of a bar, broken up into different coloured boxes representing the partitions. If you click on the empty area, a button appears with the name Create. If you click it you can then select not just the size but also the type of file system. One of the options you are offered should be ReiserFS.
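For the curious, this is a hedged sketch of what DiskDrake does under the hood, done by hand from a root shell. The device names are examples only, fdisk is interactive, and the reiserfs utilities must be installed; during installation DiskDrake does all of this for you.

```shell
# Manual equivalent of DiskDrake's Create button (example devices only).
fdisk /dev/hda                        # create e.g. /dev/hda5 (Linux) and /dev/hda6 (swap)
mkreiserfs /dev/hda5                  # format the new partition with ReiserFS
mkswap /dev/hda6 && swapon /dev/hda6  # initialise and enable the swap partition
```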

Spoilt for choice After partitioning the hard disk you can install an initial selection of packages. A menu offers you various choices such as KDE, Gnome or Communication Facilities. By clicking on them you can bring up individual selections. You then select or deselect each individual package directly. The setup program takes package dependencies into consideration but this can result in some strange decisions. For instance, when we selected

Fig. 4: Initial package selection



and then unselected lynx, it also wanted to remove the HTML help pages, even though another browser had been selected. However, on the whole the individual packages are well organised and the tree structure makes navigation easy. You may find it confusing that when specifying the sizes of individual programs, kilobyte and megabyte are mixed up in the lists. Sizes of 11534Mb are, of course, unrealistic, and anyone would realise this but not everyone would recognise that a size of 53Mb for a package was wrong.

KDE and GNOME As standard, Mandrake sets up the KDE desktop as the graphical user interface. Right from the start the desktop shows its best side: icons are created to give quick access to your floppy and CD-ROM drives, and even Windows partitions if you have them. Among the other icons on the desktop, DrakConf is worthy of note. It conceals a configuration tool which can be used to manage the X (display) settings, mouse, printer and network. It also provides a graphical interface for installing further program packages (rpmdrake). The kpackage package manager is also present (if you chose to install it) and does much the same job, although rpmdrake already knows what packages are on the Mandrake CD making it more convenient for browsing. Another useful tool is DrakFont. You use it to control the fonts on your computer. If you have a Windows partition on your hard disk then DrakFont offers you the option of installing the Windows TrueType fonts from it.

Setting up ISDN Mandrake 7.1 conveniently takes care of setting up a modem for dial-up Internet access but it is a bit lacking in support for setting up ISDN. The distribution includes the ISDN4Kernel utilities which permit manual configuration, at least in principle – the required kernel modules are present and capable of functioning. The convenient front-end kISDN is not there, however. In the isdn4net package you will find a script /usr/local/bin/isdn which guides you through the ISDN configuration process in text mode. However, this process is laborious and requires spot-on answers to many questions. If the configuration is successful, the same script enables you to set up and terminate an ISDN connection. To make life easier for the ISDN user we have included the latest free version of kISDN on the CD.

“Are you ready to boot Linux?” Linux Mandrake 7.1 offers you the option of running Linux without actually creating Linux partitions. During installation you’ll see a Linux for Windows installation option. The program creates two files in your Windows partition, to which the Linux

installation will be written. One of these files is used for the root file system and the other for the Swap partition. The actual installation takes place just like an installation in a separate partition, just more slowly. There is no difference as far as the functionality of the resulting Linux system is concerned. Should the Windows partition prove too small to accommodate the Linux installation DiskDrake is called automatically to create more disk space. After the installation you will find two extra directories on your hard disk: one is called lnx4win and the other Mandrake. Saved in the first one is the installation information for both of the files that contain the Linux installation. You will also find in it a script called Uninstall which deletes the Linux for Windows installation without any problems. But who would want to? ■

Fig. 5: Mandrake’s standard KDE desktop

Technical Support Neither the publishers nor the editorial staff of Linux Magazine can provide technical support for any of the software on the cover CD, nor do they accept responsibility for any loss or damage to your computer or data files that might occur as a result of using it. Your entitlement to support is exactly the same as if you downloaded the software from the developer’s web site. Most of the software on this CD is released under the GNU General Public License which sets out in full your rights and warranty entitlement. There are, however, many resources available on the Internet which you can use to get help with any problems. For more information read the article “How to get help with Linux” in this issue. Other help resources specific to Linux Mandrake which are not mentioned in the article include the MandrakeSoft Support Forum on MandrakeSoft’s homepage, the mailing lists and the FAQs.




Turn an old computer into an Internet gateway using Linux

GATEWAY TO THE INTERNET Lots of people, even home users, now have more than one computer. And most people who have more than one computer would like to be able to connect to the Internet from any of them. There are many ways to achieve this, but a good method – if you can afford it – is to use a dedicated gateway/firewall. In fact, this solution isn’t as expensive as it sounds. If you have an old, redundant computer you can set this up easily and cheaply using a Linux-based free software package called FREESCO. Julian Moss investigates.

Dedicating a PC to providing Internet access for a couple of computers may seem an expensive solution. But the computer you can use for this doesn’t have to be very powerful. Any old 386 that’s sitting gathering dust in a corner will do. FREESCO will let you turn a redundant PC into something genuinely useful. About the only thing you’ll have to buy – if you don’t have a spare one handy – is a cheap network card. Windows users might be interested to know that you can use this Linux-based gateway to let Windows computers access the Internet. FREESCO is a lot more versatile and robust than Windows’ “modem sharing.” Nor does it require you to reconfigure your Internet software to use proxy addresses or a SOCKS interface like many Windows-based solutions. FREESCO will keep your computers safe from attacks coming from the Internet since it functions as a firewall too. The router is configured using a text-based

menu interface and managed using either a terminal or a web-based interface so you don’t actually need any knowledge of Linux or Unix to use it. As well as a gateway FREESCO provides a time server for computers on your network to set their clocks with, plus a print server, a DNS server, a DHCP server, a remote access server (allowing remote users to dial in to your network) and a web server. However, the web server is really an extra use of the server that is there to provide FREESCO’s web interface. There’s no FTP server for uploading files so if you want to use FREESCO as a web server you must manually transfer your web pages to the appropriate directory while the router is down. FREESCO supports up to three network cards and two modems. Its main limitation is that it only acts as a static router. You can set it up to use a choice of ISPs but it won’t automatically try alternatives if it has trouble getting a connection. If you


want to try FREESCO the first thing you should do is check that you have all the necessary hardware. The main item you will need is an old computer that isn’t being used for anything else. It should be a 386 or better with at least 6MB of RAM. It doesn’t require a hard disk as FREESCO can boot and run from a floppy. Nor will it need a keyboard or monitor: once set up FREESCO can run without either. If you really can’t spare a keyboard, though, make sure you can disable the keyboard test in the BIOS boot-up checks or you may find that the boot process halts with a “keyboard not found” error. You’ll need a network card to connect your router PC to your other computers. FREESCO supports several inexpensive, popular cards including the NE1000, NE2000, 3C509 and 3C59x series. If you’re buying a network card specially for this project make sure that its IRQs and I/O addresses can be manually configured. FREESCO doesn’t support Plug and Play. “Jumperless” cards are OK, although you’ll have to set them up first using the DOS or Windows based setup software. Use the default settings if possible, to make it easy for FREESCO to detect the card. Finally, you’ll need a modem. If you plan to use an internal modem, make sure it isn’t a Winmodem. This type of modem is often supplied with a new PC, but it needs special Windows drivers in order to work. It won’t, therefore, run under Linux.
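If the card is first tried in a machine that already runs Linux, you can see which IRQs and I/O addresses are taken before setting the card’s jumpers; pick values that do not appear in these listings.

```shell
# Resources already claimed on a running Linux system.
cat /proc/interrupts   # interrupt lines currently in use
cat /proc/ioports      # I/O address ranges currently in use
```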

Buffered ports If the modem is an ISA bus card and not PCI, it’s almost certainly OK. If it’s an external modem you shouldn’t have a problem, but if the computer is very old make sure the serial port has a buffered UART compatible with the INS 16550A. Most 386s and many 486s have un-buffered ports that drop characters when run at the speeds needed to use a V90 modem or ISDN TA. Note that FREESCO only supports external TAs that are connected via the serial port and can be controlled in the same manner as modems using the AT command set. If your old computer lacks a buffered serial port you may be forced to buy an internal modem especially for FREESCO. A few years ago replacement serial cards were easy and cheap to obtain; now, even ISA internal modems are getting a bit hard to find. An inexpensive choice would be the Dynanet 56K Internal ISA from Simply Computers. It works very well with FREESCO. Having got all the bits together, your next step will be to download the FREESCO software. You can get this from the FREESCO home page at http://www. If you can’t get to this site try www. or instead: these URLs point to a mirror page. You should download two files: the software – currently version 0.26 – and the documentation, which at the time of writing was only up to version 2.0. Both are Zip format archives. Unzip them to a convenient tempo-


rary location. The documentation is a set of HTML files so you’ll want to put it somewhere where you can view it in a browser while you’re setting up the software. An annoying feature of the documentation is that it contains a banner ad on each page: this may make your computer try to connect to the Internet whenever you open it.

Boot floppy Now you need to create the FREESCO boot floppy. The software archive that you downloaded contains a disk image which you simply transfer to a spare floppy disk. Open a console window, change to the directory containing the archive contents and execute the command: dd if=freesco.026 of=/dev/fd0
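The same step can be done with a read-back check added afterwards; the verification is our addition, not part of the FREESCO instructions, and it assumes a full 1.44MB image. Run as root with a blank floppy in the first drive, and note that this overwrites the disk’s contents.

```shell
# Write the image, flush buffers, then compare the floppy against the image.
dd if=freesco.026 of=/dev/fd0 bs=512
sync                                              # flush before removing the disk
dd if=/dev/fd0 bs=512 count=2880 | cmp - freesco.026 && echo "floppy verified"
```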

INFO
The FREESCO home page:
diald information: ■

If you’re using Windows or DOS you can achieve the same result using the command: rawrite.exe freesco.026 For convenience, the DOS rawrite program is included in the package. Now the fun starts. Install the network card and modem in your router computer, connect it to your network and attach a monitor and keyboard. Boot the computer from the floppy disk you just prepared, and when the command prompt comes up log in as root, password root. (You’ll probably want to change this later on: the process is described in the documentation.) If the computer’s hard disk was previously formatted for DOS you now have the choice of installing FREESCO to the hard disk. There are several benefits of doing this. Hard disks are less error-prone than floppies so you’ll avoid read errors. FREESCO will load quicker. A hard disk installation will also give you needed extra space if you want to use the built-in web server. To install FREESCO to the hard disk type the command: move2hdd

Fig. 1: Choosing your setup option




Then remove the floppy disk and reboot. To start FREESCO from DOS you must use the router.bat file provided. This can easily be made to run at start-up so the computer boots up FREESCO automatically when it is turned on.

Setting up Now you must perform the initial configuration. You should be logged in as root and looking at a command prompt. Type the command: setup The router setup program is easy to use. You just answer a series of questions. Many options are selected by typing a number, and most can be left to the default settings. However, you can’t go back and change an option if you made a typing error so

Fig. 2: Explanatory help text is displayed by the program

Fig. 3: FREESCO’s web interface

it will pay to check each entry carefully before pressing Enter. Nothing is written to disk until the end, so if you make a mistake you can simply reboot and start again. Setup gives you a choice of three types of router configuration. The first, which is the one we will describe here, is the LAN-to-Internet dial-on-demand gateway. You select this by typing “d” at the menu (Fig. 1). A very similar option to set up is the LAN-to-Internet gateway using a leased line. The third option doesn’t provide a gateway function at all: it simply acts as a bridge connecting two or three small Ethernets, reducing network traffic by restricting local data packets to their own segment of the network. To complete the setup you’ll need to know the I/O address and IRQ number of your network card and all the details – phone number, login name and password, authentication method and domain name server (DNS) address – for connecting to your ISP. You’ll also need to know the IP address range and network mask for the PCs in your network. If you already have a working network and can connect to your ISP from one PC you should already have all this information. While working through the setup steps look at section 4 of the documentation. It gives a list of most of the steps but not all of them as it hasn’t been updated since version 0.20. However, you’ll find that helpful notes are displayed by the setup program itself before each choice (Fig. 2). The default value for most of the choices will prove to be sensible for most users.

Enabling

If the computers on your network have each been allocated their own IP address already (the typical home or small business network case) you can answer No to the option for enabling the DHCP service. You'll also probably answer No to the WINS address question. You won't want to enable the public HTTP server (web server) unless you need a local web server on your network for some reason. If you do, bear in mind the difficulty involved in updating the web pages, as previously mentioned. If you pay by the minute for Internet calls you should choose the value for "Keep up ppp link" with care. (Note: it's in seconds.) If it's too high, you'll waste money keeping the link up while it's idle; if it's too low and you use a modem, you'll waste time redialling whenever the link drops. You can use FREESCO's web interface to bring the link up and down manually, but you may find this method a bit cumbersome. If you choose the value 0 for this setting the link will be controlled by the rules in the file /etc/filter.cfg. It's worth trying the supplied rules to see if they work well for you. You can customise these rules to suit your requirements, but the way the rules are expressed makes this far from easy. The filter rules use the same format as the file diald.conf, so for more information on how to customise them see the diald documentation, which you can find online. If you plan to use more than one ISP, bear in mind that the default ISP – the one that FREESCO will begin using immediately after boot-up – will be the first one in the alphanumerically-sorted list. ISP names are limited to 8 characters in length, so you may find it useful to use names like 1-FSNET and 2-CIX to ensure that they are listed and used in the order you want. Remember to start the ISP phone number with a T (for tone dialling) and don't put spaces in the number. You can enter more than one number for an ISP, with spaces between each one, so a space within a number will cause it to be treated as two separate numbers. Once you have completed the initial configuration you can reboot the router and it will be ready for use. With the network interface active and assigned the correct IP address you can now Telnet to the router (assuming that you enabled the Telnet interface during setup). Once connected you can log in, edit the configuration, add new ISPs and so on. For day-to-day use you can use the web interface to check the router's status, change its settings, switch ISPs and even reboot the router (Fig. 3). This means that you can, if you want, remove the keyboard and monitor from the router PC and move it out of the way somewhere.
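The effect of the numeric name prefixes on the default-ISP choice can be sketched in a few lines of Python (an illustration of the sorting rule only; the ISP names here are made up):

```python
# FREESCO uses the first entry in the alphanumerically-sorted list of
# ISP names as the default, so a numeric prefix pins the priority.
unprefixed = ["FSNET", "CIX", "DEMON"]
print(sorted(unprefixed))    # ['CIX', 'DEMON', 'FSNET'] -- CIX wins by accident

prefixed = ["1-FSNET", "2-CIX", "3-DEMON"]
print(sorted(prefixed))      # ['1-FSNET', '2-CIX', '3-DEMON'] -- order you chose
```

Digits sort before letters, so any `1-`/`2-` scheme (within the 8-character limit) gives you explicit control over which ISP is dialled first.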

Workstation configuration

Before you can start accessing the Internet through your router you must set up the PCs on your network so that they know that your FREESCO PC is the default gateway to the Internet. Your PCs – and any software running on them that performs domain name look-ups – should also be configured to use the router PC as their primary DNS server. With most Linux distributions you can do this using a system configuration utility such as linuxconf. Choose the Networking option, then choose "Name server specification (DNS)" and enter the FREESCO PC's IP address (Fig. 4). Save this, return to the menu and select "Routing and gateways." Again, enter the router's IP address here (Fig. 5) and ensure that routing is enabled.


Another method is to update each PC's kernel routing table using the route command. If your router has the IP address then the command would be: route add default gw You'll probably want this command to be executed every time the PC starts up; one way to do this is to append it to the file /etc/rc.d/rc.local. To tell a Windows PC to start using your FREESCO router to access the Internet, open the Control Panel and double-click Network. Go to the Configuration tab, select the item "TCP/IP -> network card" from the list and click Properties. Go to the DNS Configuration tab, enable DNS and ensure that the router's IP address is the only one listed (Fig. 6). If you're enabling DNS for the first time you must also enter a domain. Then go to the Gateway tab and add the same IP address as the first and only gateway (Fig. 7). You'll probably also need to open the Internet control panel and change the setting from "Dial using this connection" to "Use Network." Once the workstations are set up, all a user needs to do to connect to the Internet is launch their web browser, mail client or whatever. If the router isn't connected at the time there will be a short delay while it dials your ISP and establishes a connection. The connection will remain up until no data has passed over the link for the period you set in "Keep up ppp link", or according to the rules in the filter.cfg file. If you have ISDN, connections are almost instantaneous and it feels almost as if you have a leased line.
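The "append it to rc.local" step is easy to script so it never adds duplicate lines across repeated runs. A minimal sketch; the router address 192.168.0.1 and the Red Hat-style path /etc/rc.d/rc.local are placeholder assumptions, so substitute your own values:

```python
# Sketch: make the default-route command run at every boot by appending
# it to rc.local exactly once. Hypothetical address and path below.
RC_LOCAL = "/etc/rc.d/rc.local"
ROUTE_CMD = "route add default gw 192.168.0.1\n"   # placeholder router IP

def ensure_default_route(path=RC_LOCAL, command=ROUTE_CMD):
    """Append `command` to `path` unless an identical line is already there."""
    try:
        with open(path) as f:
            existing = f.readlines()
    except FileNotFoundError:
        existing = []
    if command in existing:
        return False            # already present, nothing to do
    with open(path, "a") as f:
        f.write(command)
    return True                 # line was added
```

Running `ensure_default_route()` a second time is a no-op, which matters because rc.local is often edited by hand as well.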

[left] Fig. 4: Setting up the DNS using linuxconf [right] Fig. 5: Setting the default gateway using linuxconf

Fig. 6: Configuring DNS on a Windows PC

Conclusion

By using the FREESCO router you can provide Internet access to all the PCs on your network through a single ISP connection, dialling on demand. There is no need to reconfigure any of your Internet software to use special proxy addresses or a SOCKS interface. FREESCO also protects your network behind a firewall to prevent unauthorised access from the Internet. It is a very good, very simple and very inexpensive way to access the Internet that can be used by any small home or business network. ■

Fig. 7: Setting the default gateway under Windows




Open Source 3D Package for Game Creation


People sometimes have very good ideas for a game. So what do you do if you have such an idea? Well, first you design your game. You document what you want to happen in your game. You describe the mechanisms, the target, the goals, … Depending on the type of game this can be a big task in itself. Jorrit Tyberghein seeks to help.

“Blocks” is a 3D tetris clone and is one of many games based on Crystal Space.

Eventually, the time comes when you can start development on the game you've been dreaming up. But what happens? You end up doing a lot of work trying to get graphics, sound, and networking running when you would much prefer to work on the game itself. Programming a graphics engine is a difficult task, which is why it often takes a whole team of programmers to create a good game. If you have to do it all on your own you will need a lot of time. So why not use a package for this? In this short article we examine the use of Crystal Space as a possible solution to this problem. Crystal Space is a set of libraries and modules that are mainly useful for programming 3D games. If you're about to create a 2D-only game then Crystal Space is not for you. (In that case you should perhaps look at other good libraries like SDL or ClanLib.) Crystal Space is Open Source. This means you can download it for free and use it in your own programs. Because Crystal Space uses the LGPL (GNU Library General Public License) you can even use it for making commercial games. You can download Crystal Space from its homepage (see the Info box). Crystal Space is also portable. It currently runs on Linux (all flavours), Windows (Win95, Win98, WinNT, Win2000), Macintosh, OS/2, BeOS, DOS, FreeBSD, SGI, Solaris, NextStep, OpenStep, MacOS/X, … To do 3D rendering it can use OpenGL, Glide, Direct3D, or software rendering. This portability gives game developers a lot of choice in their potential target platforms. One word of warning: Crystal Space is still in development. Various parts of it work reasonably well already (for example, the 3D Graphics Engine) but other parts need a lot of improvement before being usable (such as the 3D Sound Engine). What does Crystal Space offer you? The three main parts of Crystal Space are a 3D Graphics Engine, a 3D Sound Engine and networking support. These three major components are briefly explained in the following sections. Note that all these components are optional. You don't have to use 3D sound if you don't want to. You could choose to produce sound using another package, for example.

The 3D Graphics Engine

The 3D Graphics Engine is by far the most important part of Crystal Space and is actually the main reason that Crystal Space exists at all. This piece of software is responsible for managing a "world" in 3D and displaying it on screen. It contains features such as:

• visibility checking (so that the hardware doesn't have to work too hard);
• particle systems (to simulate things like rain, snow, and fountains);
• collision detection;
• physics simulation and so on.

Basically, it relieves the game programmer of the hard job of having to worry about the 3D management and rendering part of the project.

The 3D Sound Engine

The sound engine is currently in heavy development. When it is finished it will be possible to have real 3D sound (and normal sound as well) that is integrated with the 3D engine. For example, if there is a wall between the sound source and the camera the sound will become fainter. The sound will also appear to come from the right direction.

Networking

Not all games need networking support. However, if you are writing a role playing game or a Multi-User Dungeon (MUD) you can use Crystal Space for this. Crystal Space contains some basic low-level networking support as well as support for client-server networking operations.

Add-ons

Crystal Space contains a lot more than the three main parts we have mentioned. For example, there is the Crystal Clear package, which sits on top of the rest of Crystal Space and is responsible for controlling and interacting with game entities. Also planned is a facility to add scripting in various languages. Currently we are working on support for Python scripting, but other options will become available in due course.

INFO

Crystal Space Homepage:
Online-Manual: ocs/online/manual/
Game projects using Crystal Space: projects.html ■

Summary

So here it is. If you want to write a 3D game you should at least consider Crystal Space as an option. It could potentially save you a lot of work. If you have any questions about using Crystal Space just mail me at the address below. ■

The Author

Jorrit Tyberghein is one of the two maintainers of Crystal Space. You can reach him at Jorrit.Tyberghein@




The monthly GNU-column

BRAVE GNU WORLD

Georg C. F. Greve reports on current developments and progress within the GNU Project and tries to explain its philosophy. In this issue, you can read about GNU Sather, Ruby, a386 and Guppi.

Welcome to Georg's Brave GNU World. I hope I have found an interesting mix of topics this month, starting in the heart of technology with two very interesting programming languages.

Sather

GNU Sather [5] is an object oriented programming language that was originally branched off from Eiffel, but it has been changed in so many ways since that it must be regarded as an independent language. GNU Sather began as a scientific project at the ICSI, Berkeley, where it was distributed under a license that did not quite qualify as a Free Software license. But after development was stopped for financial reasons in 1998, a group of people were able to convince the officials at the ICSI to release the last version under the GPL/LGPL. This made it possible for GNU Sather to become an official GNU Project. Among the remarkable things about GNU Sather is its revolutionary interface concept, where class interfaces are completely separated from their implementations; this makes multiple inheritance very easy. It is also possible to change the underlying code completely without touching the interface, should it become necessary. The current maintainer of GNU Sather, Norbert Nemec, highlights the iterator concept, which allows all the kinds of loop constructs that other languages use to be implemented with a single break statement. His view is also that GNU Sather is not just "another design study" – it is a language that has been designed for speed and developer comfort right from the start. The current status of GNU Sather is probably best described as "almost ready for day-to-day use." The interface to C and Fortran is easy and well-documented, so practically everything should be possible. The biggest weakness right now is the compiler, which doesn't use the possible optimisations and has to be called "buggy." Obviously writing a new compiler is at the top of the task list – but this will take some more time. The library also needs some more work, which is currently being done by the University of Waikato. Despite these rough edges, developers interested in object oriented programming should check out GNU Sather. If nothing else it'll be an interesting experience. Especially the integrated support for parallel computing (multi-threading up to TCP/IP clusters) and the library, which has been built around internationalisation since its inception, should make this an excellent tool once the remaining problems have been solved.
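Sather's iterator idea – one yield-based mechanism from which every loop form falls out, terminated by a single break – has a close analogue in Python generators. The following is that analogy in Python, not Sather code:

```python
# A generator playing the role of a Sather iterator: it encapsulates the
# loop state, and the caller just consumes values until it breaks.
def upto(n):
    """Yield 0..n-1 -- one iterator replacing an explicit counted loop."""
    i = 0
    while i < n:
        yield i
        i += 1

total = 0
for i in upto(5):      # the 'loop' is just iterator consumption
    if i == 3:
        break          # the single break statement ends it
    total += i
print(total)           # 0 + 1 + 2 = 3
```

While/for/counted loops all reduce to choosing a different iterator, which is the economy Nemec is pointing at.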

Ruby

Ruby [6] by Yukihiro Matsumoto is another object-oriented programming language; it started in 1993, when the author wasn't able to find an object oriented scripting language and decided to write one himself. The name Ruby was chosen because the author was looking for another "jewel name" to symbolize the closeness to Perl. His declared goal is to make Ruby the successor to Perl and replace it. To achieve this he took the strengths of languages like Perl, Python, Lisp and Smalltalk and tried to incorporate them into Ruby. Just like Perl, Ruby is very good at text processing, and additionally it gains from its very thorough object orientation. All data in Ruby is an object – there are no exceptions. The number "1", for instance, is an instance of the Fixnum class. It is possible to add methods to a class at runtime – even to a single instance if need be. These possibilities make Ruby very flexible and extensible. Additionally, it supports iterators, exceptions, operator overloading, garbage collection and much more that one likes to see in a language. In order to be able to replace Perl it is also very portable, and runs under GNU/Linux (and other Unices) as well as DOS, MS Windows and Mac. Ruby has CGI classes that allow easy CGI programming, and modules for the Apache web server also exist: eRuby (embedded Ruby) and mod_ruby. It contains a well thought-out network socket class, and thanks to Ruby/Tk and Ruby/Gtk it is possible to implement GUIs easily. There are also special features for the treatment of XML and an interface to the expat XML parser library. Finally, Ruby supports multithreading independently of the operating system – even under MS-DOS. Despite this complexity the syntax has been kept as simple as possible (inspired by Eiffel and Ada).
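Two of the claims above – numbers are ordinary object instances, and methods can be attached at runtime – have direct counterparts in Python, which may help readers who know Python but not Ruby. This is a Python analogy, not Ruby code (Ruby goes further still: it can also extend one single instance):

```python
# Numbers are real object instances, like Ruby's 1 being a Fixnum.
print(isinstance(1, int))          # True

# Methods can be added to a class at runtime ("monkeypatching").
class Greeter:
    pass

Greeter.hello = lambda self: "hi"  # attach a method after class creation
print(Greeter().hello())           # hi
```

In Ruby the equivalent open-class mechanism is part of the language's everyday idiom rather than a trick, which is one reason it feels so flexible.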
Ruby can be distributed under the GNU General Public License or under a special license that gives users stronger "proprietarisation rights"; the latter might also qualify as a Free Software license, although this remains to be thoroughly checked. Although its features are probably mostly of technical interest, I do think that even non-programmers could be interested in developments in this area.

a386

a386 [7] is a library by Lars Brinkhoff that provides a virtual Intel 386 CPU running in protected mode as a "Virtual Machine" (VM). This should be of most use to kernel hackers and scientists, but might also prove interesting for people just wanting to check out another operating system within their existing system. Compared to similar projects like the Brown Simulator or plex86, a386 has the advantage of running privileged operations faster, because they are implemented as function calls or inlined code. Additionally it aims for portability – working on other CPU architectures as well as other operating systems. Currently the task at hand is to enhance the Linux port, but in the medium term the author seeks to create NetBSD and HURD ports as well as making a386 run on these operating systems. The long term goal is to use the experience gained with a386 to create a new machine model which will be an abstraction of the widespread workstation/server CPUs, and to implement this as a C library and a "nano-kernel" running directly on the hardware. Of course everything is distributed under the GNU General Public License, and if you're interested in these things a look at the project's homepage might be a good idea [7]. But now I'd like to talk about things of more direct importance to the end-user.

Guppi Strictly speaking, Guppi [8] is three things in one. First of all it is an application for data analysis and the creation of graphs and charts. Then it is a Bonobo component which allows embedding this functionality in other applications, and finally it’s a set of libraries that allows any GNOME application to use it. The application itself is definitely important for everyone relying on visualisation and analysis of empirical data - especially scientific users. In fact Guppi is the only program of its kind based on full GNOME integration from the start, and so it seems that it is slowly becoming the GNOME standard for visualisation. Thus, it is not surprising that the GNOME spreadsheet Gnumeric and the finance manager GnuCash rely on Guppi.

Info

[1] Send ideas, comments and questions to
[2] Homepage of the GNU Project
[3] Homepage of Georg's Brave GNU World
[4] "We run GNU" initiative
[5] GNU Sather home page
[6] Ruby home page
[7] a386 home page
[8] Guppi home page
[9] Jon Trowbridge

■


Guppi in action …


According to Jon Trowbridge, current maintainer of Guppi, its big advantages can be summed up in four points. First of all, Guppi is scriptable; the internal API is available via Guile and Python, so it is possible to solve rather complex problems without having to program in C. Second, Guppi has a very flexible data import filter with good guessing capabilities as to how a file should be read without intervention by the user. Third, a lot of the functionality is broken down into plugins, which makes it easy to extend. And finally, Guppi has a WYSIWYG interface that should not give anyone trouble.
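The "guessing" import filter is the same idea as Python's standard-library `csv.Sniffer`, which infers a file's delimiter from a sample. This is an analogy to illustrate the concept, not Guppi's actual code or API:

```python
# Infer how a data file should be read instead of asking the user,
# in the spirit of Guppi's import filter. Pure stdlib.
import csv

sample = "x;1.0;2.5\ny;2.0;3.5\n"
dialect = csv.Sniffer().sniff(sample)
print(dialect.delimiter)     # inferred from the sample, not configured

rows = list(csv.reader(sample.splitlines(), dialect))
print(rows[0])
```

A good sniffer means the common case (plain numeric columns with some separator) loads with zero dialog boxes, which is exactly the convenience being claimed for Guppi.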


But the end-user should still be a little careful right now: Guppi is still in very active development and the user interface especially is not yet complete. Some functions are also lacking and the documentation is somewhere between sparse and nonexistent, so only expert users should consider it for daily use. Other members of the Guppi team are Jody Goldberg and Michael Meeks, who work on the GNOME integration, and Andrew Chatham, who takes care of the Python binding. I should also mention Havoc Pennington, who doesn't work actively on Guppi anymore but did the majority of the work in the early phase. Anyone interested in development is very heartily invited to get in touch with Jon [9] – he also informed me that his current location is close to the University of Chicago (USA) and that he'd be interested in meeting more GNOMEs in that area.

...the end

Okay. That should be enough for this month. As usual I encourage you to send your ideas, comments, questions and topic suggestions via email [1]. ■



Programming Wizards for GNOME

DRUIDS, WIZARDS AND GURUS

GNOMEs are normally thought of as mystical beings. Related to the leprechaun, they're small hunchbacked creatures who live underground and guard their hoards of precious stones! In the world of Linux there are also small hunchbacked people around (many of them are programmers!). But GNOME itself refers to something entirely different: Thorsten Fischer meets a GUI for X-Windows.

Figure 1: The setup Druid of the mail client Balsa.

Just as Captain Picard might shout for Number One, his assistant in Star Trek: The Next Generation, so the computer user also expects to have assistants, usually called wizards. Under Windows, they're intended to speed up and simplify tasks such as altering settings and configurations. On the first page of the wizards used under Windows to install programs there's also the big, bad licence agreement, which takes from users all the freedom that they ought to be able to enjoy with the software!


Under GNOME, wizards are called Druids. One well known program that uses Druids is Balsa, whose setup dialog box has probably already been seen by everybody involved in Linux – if not, see Fig. 1. The precursor of the Druid was the GNOME Guru. The development of Guru has, however, been suspended in favour of the Druid, owing to the greater flexibility the latter allows. On the following pages we show how, with the aid of the Druid, assistants can be created for simple tasks without having to write a full-blown program. In order to be brief, we will dispense with an autoconf-compatible source tree, and we tackle the problem not in C but in the wonderful scripting language Python. This means we can also discuss the GNOME API of Python at the same time. The installation of the necessary package GNOME-Python is covered in a separate box.

Installation of GNOME-Python

The current version of GNOME-Python is 1.0.53 – the version jump to GNOME 1.2 has not yet been made, apparently. The sources can be obtained from a GNOME mirror. As you might expect, the installation runs like this after unpacking:

frog@verlaine:~/gnome-python-1.0.53 # ./configure --prefix=/opt/gnome
frog@verlaine:~/gnome-python-1.0.53 # make
frog@verlaine:~/gnome-python-1.0.53 # make install

Of course, Python itself must already be installed. However, on a well-configured Linux system this should already be the case – Python is contained in nearly every distribution. The same applies to an already compiled gnome-python package.


A letter druid

A friend of mine once said that when writing letters using LaTeX the same letter could be used over and over again. He simply created and copied files as required and then inserted different addresses or other things with a text editor. This is a prime target for automation. A small program with a graphical interface which helps in drawing up letters would save time. It would have to have all the components needed to do the job, however, and bring them together quickly. This is the kind of task GNOME and Python were built for.

The Code

In Listing 1 the code for the executable file gp-letter can be seen. The access rights for this file – and only this one – must be set to executable. That is done, for example, by:

chmod 744 ./gp-letter

In the first line the code is handed to the Python interpreter. In the third, the module gui is imported, and this will allow us to create our graphical interface. The entry point into the program is line 9, where the function main is called; it creates a new instance of the GUI class and starts its gtk main loop. So far, so good. This is the nice thing about object-based programming: if an empty definition of the class GUI already exists then the program is capable of running. Not that it would actually do anything useful!

Listing 1: gp-letter
001: #!/usr/bin/env python
002:
003: import gui
004:
005: def main ():
006:     gp = gui.GUI ()
007:     gp.mainloop ()
008:
009: if __name__ == '__main__':
010:     main ()

Next we have to think about a constructor for the GUI class of the program. This can be seen in Listing 2. The desired classes are imported at the beginning. There are two ways of importing a module in Python. One is to import the module with its complete name space, as happens here with GdkImlib. Thus, the whole content of the module is accessible, but with the restriction that every call of a function or class method from the module must be prefixed by GdkImlib. In Listing 2 this is seen in lines 15 and 16, in which the two graphics for the logo and the watermark are loaded. The other possibility is to import using the from statement, which only imports the names of the classes and functions. Here, trouble can occur if a module defines a name that already exists. However, the widget names from the modules gnome.ui and gtk are used very frequently.

Listing 2: Constructor of the GUI class
001: from gtk import *
002: from gnome.ui import *
003: import GdkImlib
004:
005: class GUI:
006:
007:     def __init__ (self):
008:         self.lettertype = "dinletter"
009:         self.filename = "./letter.tex"
010:
011:         self.app = GNOMEApp ("gp-letter", "gp-letter")
012:         self.app.connect ("destroy", self.quit)
013:         self.app.connect ("delete_event", self.quit)
014:
015:         self.logo = GdkImlib.Image ("logo.jpg")
016:         self.wmark = GdkImlib.Image ("wmark.jpg")
017:
018:         self.druid = GNOMEDruid ()
019:         self.druid.connect ("cancel", self.quit)
020:
021:         self.dp_start = self.start_page ()
022:         self.dp_lettertype = self.lettertype_page ()
023:         self.dp_sender = self.sender_page ()
024:         self.dp_content = self.content_page ()
025:         self.dp_finish = self.end_page ()
026:
027:         self.druid.add (self.dp_start)
028:         self.druid.add (self.dp_lettertype)
029:         self.druid.add (self.dp_sender)
030:         self.druid.add (self.dp_content)
031:         self.druid.add (self.dp_finish)
032:
033:         self.app.set_contents (self.druid)
034:
035:         self.app.show ()
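The two import styles used at the top of Listing 2 can be tried out with a standard-library module, so no GNOME packages are needed to see the difference:

```python
# Style 1: import the whole module; every call keeps the module prefix,
# just as GdkImlib.Image() does in Listing 2.
import math
print(math.sqrt(16.0))     # 4.0

# Style 2: "from" imports the name itself -- shorter calls, but the name
# can now collide with an identically-named function from elsewhere.
from math import sqrt
print(sqrt(16.0))          # 4.0
```

For frequently used names such as the gtk and gnome.ui widget classes, the shorter `from … import *` form is the pragmatic choice, which is exactly what Listing 2 does.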



Mind the Tab!

The indentation of code blocks in Python must be carefully noted – the structure of the program is defined by it. There are no curly brackets or other symbols to mark the start and end of blocks. Instead, a block is introduced by a colon on the previous line: in the case of for loops, for instance, or in Listing 2 in line 5 in the class definition, or in line 7 at the beginning of the constructor function.

Lines 11 to 13 show the typical handling of gtk widgets in Python. Gtk is written in C, but owing to its object-oriented approach, wrappers for object orientated (OO) languages can be created quite simply and, above all, as we see, easily used. The design of the Druid begins at line 18. Here we define that clicking on the Cancel button is to be followed by the program aborting. For each individual page of the Druid its own text can be entered, which will then form the basis for the Druid. As a Druid alone is not allowed, it must be contained in a gtk window, which in this case is provided by a GNOMEApp. This happens in line 33. The latter must then also be displayed, which occurs in line 35.

Listing 3: Start and finish page; program termination; main loop
001: def quit (self, event = None, data = None):
002:     mainquit ()
003:
004: def mainloop (self):
005:     mainloop ()
006:
007: def start_page (self):
008:     page = GNOMEDruidPageStart ("gp letter", "Welcome to gp letter.\n\nThis will help you to simply and quickly create the source text\nfor a LaTeX letter.\n\nSimply answer the questions asked by the Druid.", self.logo, self.wmark)
009:     return page
010:
011: def end_page (self):
012:     page = GNOMEDruidPageFinish ("finish gp letter", "Thank you for using gp letter.\nClick the 'Finish' button to create your LaTeX file.", self.logo, self.wmark)
013:     page.connect ("finish", self.create_letter)
014:     return page

The pages

So far, gp-letter has five pages in its basic form. Whatever the case, an initial page for the greeting and a finish page with the close button should be present in every GNOME Druid program. Both page types are implemented using their own widgets, GNOMEDruidPageStart and GNOMEDruidPageFinish. The pages in between are of the type GNOMEDruidPageStandard. All three types are derived from the superclass GNOMEDruidPage, but only the last one of these possesses a box of the type GtkVBox which can be used as a container for user-defined contents. To avoid tedium we won't go through the detailed design of each individual page widget. The methods for creating the start and finish pages are demonstrated in Listing 3, along with the methods for terminating the program and starting the gtk main loop. The loop is called in Listing 1 by gp-letter. The method quit shows the basic structure for a gtk callback function in Python. Note the importance of the two additional parameters which, as in C, describe the event structure by which the callback was called, as well as any transferred data. A start and a finish page are built up according to the same pattern and require a description as well as a logo – to be seen at the top right of a page – and a "watermark" which is placed on the left edge of the page. For this example we have taken the symbols from Balsa as they are quite suitable. On the finish page the Finish button is linked to the function which undertakes the creation of the letter content. The fourth page is the one in which the user inputs the text of the letter; initially it is blank. This page is simply constructed (Listing 4). box is used as a reference to the GtkVBox of the page. The structure – the creation of the widgets and packing them in containers – is similar to the construction of a widget collection in C, except that some widgets need to be accessed again later. They are – like the text box in line 11 – labelled with the prefix self. Because of this they become properties of the class in which they are used and can be re-used later. The final function, with which line 19 is linked, fetches the content of the text field and copies it into a variable which is also a property of the class, so that during letter creation it can be accessed later.

Listing 4: A simple GNOMEDruidPageStandard
001: def content_page (self):
002:     page = GNOMEDruidPageStandard ("The content of the letter", self.logo)
003:
004:     box = page.vbox
005:     box.set_border_width (5)
006:     label = GtkLabel ("Please enter into this text box the content of your letter.")
007:
008:     frame = GtkFrame ("letter text")
009:     framebox = GtkHBox ()
010:
011:     self.contentfeld = GtkText ()
012:     self.contentfeld.set_editable (TRUE)
013:
014:     frame.add (framebox)
015:     framebox.pack_start (self.contentfeld)
016:     box.pack_start (label, FALSE, FALSE, 5)
017:     box.pack_start (frame, TRUE, TRUE, 5)
018:
019:     page.connect ("next", self.get_content)
020:     return page

Figure 2: The individual pages of our letter wizard using the Druid.

The start

Now all you need to do is run the program. That is done quite simply with:

./gp-letter &

The Druid should now run and appear as in Figure 2. Happy writing!

Non-linear Druids

Druids can be created in the manner described, in which the pages follow each other in sequence. But sometimes the structure may need to branch. In a program like gp-letter, for example, it would be nice if, after the dialog which offers the choice of letter class, pages were presented which offer alternative designs within that class. The new pages are still inserted into the Druid using the add method (in Listing 2, lines 27 to 31). To make this possible GNOMEDruid defines the following signals:

• next
• back
• cancel
• finish

The first two of these signals can be used for non-linear control flow in conjunction with the set_page method of GNOMEDruid. The signal is received, and in the respective callback function we simply set the page that should come next. In this way we can skip to and fro in the linear list of pages in the Druid, and the user receives the impression of a flexible program. This procedure would produce a program with considerably more scope. But whatever the case, the program has several weaknesses: only a few letter classes are supported and, perhaps more importantly, the design of the finished letter is not very pretty. And of course the non-linear control flow is missing, because each letter class – and there are a few of them – really needs its own options, which would more than justify its own page. However, these things are enhancements for the future – for after users have got a feel for the program. We used the program and it turned out to be considerably more useful than anticipated. Anyone who is interested in doing so is welcome to make improvements. However, our requirements at the start were for a small and quickly created program. And we've succeeded.
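The branching idea – catch "next" and choose the following page yourself – is independent of the toolkit, so it can be sketched with a stand-in class. This is a simulation of the control flow only, not the real gnome-python API; the page names and letter classes are made up:

```python
# Simulation of non-linear Druid flow: a stand-in Druid whose "next"
# handler may pick the following page, mirroring GNOMEDruid's set_page.
class FakeDruid:
    def __init__(self, pages):
        self.pages = pages
        self.current = pages[0]

    def set_page(self, page):          # counterpart of GNOMEDruid.set_page
        self.current = page

    def next(self):
        # "emit" the next signal: let the handler choose, else go linearly
        chosen = on_next(self, self.current)
        if chosen is None:
            chosen = self.pages[self.pages.index(self.current) + 1]
        self.set_page(chosen)

def on_next(druid, page):
    """'next' callback: skip the DIN options page for other letter classes."""
    if page == "lettertype" and letterclass != "dinletter":
        return "content"               # jump over "dinoptions"
    return None

pages = ["start", "lettertype", "dinoptions", "content", "finish"]
letterclass = "private"
druid = FakeDruid(pages)
druid.next()                           # start -> lettertype
druid.next()                           # lettertype -> content (page skipped)
print(druid.current)
```

In the real program, the callback connected to "next" would call `druid.set_page()` on the appropriate GNOMEDruidPage object in exactly this pattern.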

Info

The GNOME Project:
Python:
gnome-python sources, mirror in Poland: NOME/
GNOMEUI API Reference: libgnomeui/book1.html
The source code for the example program ■

Source code

Of course, no one wants to copy out the whole source code. Therefore, it is present on my homepage ready for downloading. Suggestions for improvements and patches for this program, which is released under the GPL, are of course very welcome! ■

Linux Magazine UK ed 01