
Linux Magazine - Office suites, StarOffice, Corel WP Office, KOffice, SuSE Linux, D...


Issue 2: November 2000

Comment: Make Linux your future
News: Red Hat, Dell, VA Linux
Community: The User Group pages worldwide
Community: Linux Beer Hike 2000 - Lake District, UK
Business: Caldera and SCO - The future merger
On Test: SuSE 7.0 - Personal and Professional editions
On Test: Corel PhotoPaint 9 - Image editing software
First Look: Opera - Web browsing with Opera 4.0a
First Look: Embedded Graphics - PDA graphics using Qt Embedded
On Test: Office suites - StarOffice 5.2
On Test: Office suites - Corel WordPerfect Office
On Test: Office suites - Applixware Office 5.0
On Test: Office suites - Teamware Office 5.3
On Test: Office suites - KOffice
Feature: Clustering - High availability clusters
Feature: High Availability - How Motorola changed Linux to support HA
Know how: USB - How Linux handles the Universal Serial Bus
Know how: Debian 2.2 - Installing Potato (Debian 2.2)
Know how: Routing - Dynamic routing protocols explained
Project: Linux Infra-Red remote control
Programming: KDevelop - Using the C/C++ development environment
Programming: GNOME - Writing your first GNOME program
Beginners: FAQ - How to optimize your system
Beginners: Dr. Linux - Troubleshooting
Beginners: FAQ - How to get online with kppp
Beginners: FAQ - Creating KDE themes - part 1


Beginners: Take Command - Tail and Head
Beginners: Out of the Box - NScache - Cache browser
Beginners: KDE Korner - Kgutenbook - etext reader
Beginners: Desktops - Alternative Window Managers - WindowMaker
Beginners: Games - Soldier of Fortune review
Beginners: Games - Descent 3 review
Beginners: Games - Sim City 3000 review
Beginners: Games - Alpha Centauri review
Beginners: Games - Installing and using Joysticks
Community: Brave GNU World
Cover CD: WordPerfect, Applixware, KOffice and games plus much more



General Contacts
General Enquiries: 01625 855169
Fax: 01625 855071
Subscriptions / E-mail Enquiries / Letters

Editor Julian Moss

Staff Writers

Keir Thomas, Dave Cusick, Martyn Carroll


Contributors

Richard Ibbotson, Jono Bacon, Nick Carr, Martin Milner, Larry G. Cruz

International Editors

Harald Milz Hans-Georg Esser Bernhard Kuhn

International Contributors

Ulrich Wolf, Mirco Dölle, Tim Schürmann, Frank Ronneburg, Fritz Reichmann, Karsten Scheibler, Ralf Nolden, Thorsten Fischer, Marianne Wachholz, Hagen Höpfner, Heike Jurzik, Chris Perle, Stefanie Teufel, Jo Moskalewski, Fionn Behrens, Georg Greve


vero-design Renate Ettenberger, Tym Leckey


Hubertus Vogg

Operations Manager

Pam Shore


01625 855169
Neil Dolan, Sales Manager
Linda Henry, Sales Manager

Publishing

Publishing Director: Robin Wilkinson

Subscriptions and back issues: 01625 850565
Annual subscription rate (12 issues): UK £44.91; Europe (inc. Eire) £73.88; Rest of the World £85.52
Back issues (UK): £6.25


COMAG, Tavistock Road, West Drayton, Middlesex England UB7 7QE

Linux Magazine is published monthly by Linux New Media UK, Europa House, Adlington Park, Macclesfield, Cheshire, England, SK10 4NP. Company registered in England. Copyright and Trademarks (c) 2000 Linux New Media UK Ltd

MAKE LINUX YOUR FUTURE

If you want a well-paid job in IT, learn Linux. That might surprise some people who thought that the biggest demand was for people with Microsoft experience. But however you look at it, it seems a pretty safe bet that systems administrators with a good knowledge of Linux will be in increasingly high demand in the years to come.

Linux is now the fastest growing operating system on the planet. Desktop use is already on a par with the Apple Macintosh, while over 30 per cent of enterprise servers are now said to run Linux. And according to research company IDC these figures are set to grow by 25 per cent year on year for the next three years. Now consider the effect of the nationwide shortage of skilled IT personnel, which is leading the Government to consider relaxing immigration rules for people with the desired qualifications. Already, if you have good technical expertise, companies are falling over themselves to offer you a job.

So how can you acquire the expertise they seek quickly and cheaply? Learning Linux is the obvious choice. Unlike other operating systems used in commercial environments, it costs next to nothing to acquire a copy and it doesn't need extravagant hardware in order to run. Most of what you learn running Linux on your home PC can be applied equally when using it on much more powerful enterprise hardware platforms. You might want to fork out for a few books to help your studies along, but much of the information you need is available free, online. And there are plenty of opportunities to obtain practical experience that you can put on your CV by getting involved with open source projects.

Some Linux distributors are developing qualifications like the Red Hat Certified Engineer scheme. Achieving certification would be a good goal to aim for, but possessing such certificates isn't yet considered as essential by employers as it is in the Microsoft world. There are many ways other than taking an exam to demonstrate what you have learnt.

Linux is an operating system with a glittering future. It could be your future too. If you are looking to give your career a boost, what are you waiting for?

Our thanks to everyone who wrote complimenting us on our launch issue. We're very happy to have got it just about right first time. We still welcome your comments and constructive criticisms about the content of the magazine itself and the CD. We know that many of you would prefer not to have a full distribution on the CD most of the time: if you are one of those people, tell us what else you would like to see there.

Until next month,

No material may be reproduced in any form whatsoever, in whole or in part, without the written permission of the publishers. It is assumed that all correspondence sent (for example letters, e-mails, faxes, photographs, articles and drawings) is supplied for publication or license to third parties on a non-exclusive worldwide basis by Linux New Media unless otherwise stated in writing. ISSN 1471-5678. Linux is a trademark of Linus Torvalds. Linux New Media UK Ltd is a division of Linux New Media AG, Munich, Germany.

Disclaimer: Whilst every care has been taken in the content of the magazine, the publishers cannot be held responsible for the accuracy of the information contained within it or any consequences arising from the use of it. The use of the CD provided with the magazine, or any material provided on it, is at your own risk. The CD is comprehensively checked for any viruses or errors before reproduction.

Technical Support: Readers can write in with technical queries which may be answered in the magazine in a future issue. However, Linux Magazine is unable to provide technical help or support services directly, either written or verbal.

4 LINUX MAGAZINE 10 · 2000

Julian Moss Editor

We pride ourselves on the origins of our magazine, which go back to the very start of the Linux revolution. We have been involved with the Linux market for six years now through our sister European-based titles Linux Magazine (aimed at professionals) and Linux User (for hobbyists), and through seminars, conferences and events. By purchasing this magazine you are joining an information network that enjoys the benefit of the knowledge and technical expertise of all the major Linux professionals and enthusiasts. No other UK Linux magazine can offer that pedigree or such close links with the Linux community. We're not simply reporting on the Linux and open source movement - we're part of it.



NEWSLETTER

San Jose LinuxWorld show breaks records

Linux is on a roll, if the figures for the LinuxWorld show held in August in San Jose, USA are anything to go by. Over 200 exhibitors and more than 20,000 visitors turned out; organisers IDG World Expo were forced to turn away over 60 more potential exhibitors. Highlights of the show included keynote presentations by Michael Dell, president and CEO of Dell Computer Corporation, Ransom Love, president and chief executive officer of Caldera Systems, and Joel Birnbaum, chief scientist at Hewlett-Packard.

Many companies took the opportunity to make important announcements, including AOL, which announced a plan to make its products available to Linux users, Agenda Computing, which displayed a Linux-based electronic organiser (the VR3), and NeTraverse, which preannounced the October availability of version 2.0 of Win4Lin, which enables Windows applications to run on a Linux desktop. Hewlett-Packard unveiled a number of Linux products and services as part of a multi-OS strategy, Corel Corporation introduced the second edition of CorelDRAW Graphics Suite for Linux and Debian announced the release of Debian GNU/Linux (Potato) version 2.2. However, the show was stolen by news of the formation of the GNOME Foundation (see next story).

There was the usual round of show awards nominated by attendees, which were:
Best of Show: Loki Software
Development Tools: Lone Star Software Corporation
Desktop Environments: KDE
Distribution: Red Hat Inc
Hardware/Peripherals: Atipa Linux Solutions
Internet/Intranet/Extranet: Vovida Networks Inc
Office Suite: Sun Microsystems, Inc
Peripherals/Support Services: Lone Star Software Corporation
Publications: Linux Journal
Servers: Atipa Linux Solutions
Software Utilities: Enhanced Software Technologies, Inc
Database: Sleepycat Software, Inc

It was also announced that Debian will receive the $25,000 IDG/Linus Torvalds Community Award. Since its inception in 1999 this award has granted over $100,000 to companies and organizations whose use of the Linux operating system involves a combination of both innovation and usefulness. The next LinuxWorld Conference and Expo will be held in New York on January 30 - February 2, 2001 at the Jacob K. Javits Convention Centre.

Info LinuxWorld Conference and Expo ■

Microsoft Office on Linux soon?

FreeCell runs under Linux courtesy of Mainsoft's MainWin

Well-known applications from the Windows environment could be appearing on Linux soon. Microsoft has renewed an agreement with Mainsoft Corporation under which Mainsoft will assist Microsoft in porting the Internet Explorer application suite and potentially other technologies to UNIX platforms. The agreement also includes a license to use MainWin, Mainsoft's flagship product, a tool that enables software companies to develop and run Windows applications in the UNIX environment.

MainWin is described as "a Windows platform for UNIX." Similar to the open source Wine project, it is an implementation of Win32 APIs and Windows-based services on UNIX. MainWin is neither free (a typical starting price is over $20,000) nor open source, but this does have some benefits for those who need to run Windows programs on UNIX and aren't concerned about the cost. Through strategic agreements with Microsoft that would not be possible with an open source project, Mainsoft has gained access to Windows NT and Windows 2000 source code, and MainWin incorporates several million lines of original Microsoft source code. Mainsoft claims that this will ensure that applications developed for Windows using C, C++ and Dynamic HTML will run identically under UNIX.

The announcement of the agreement refers specifically to creating Solaris versions of Internet Explorer and Media Player, but its use of the words "and potentially other technologies" has given new life to speculation that Microsoft intends porting its Office 2000 suite to UNIX and, potentially, Linux. The growing interest in Linux on the desktop is a good reason why the company would want to do this. It is not like Microsoft to ignore a rapidly growing market, observers suggest. Other software companies such as IMSI, developer of the popular technical drawing package TurboCAD, are already believed to be working on Linux ports of popular products and it is expected that these products will appear before the end of the year. For Linux fans, it could be a good Christmas.

Info Mainsoft Corporation The Wine Headquarters ■



GNOME makes bid for world domination

GNOME could be set to become the leading desktop environment – and not just for Linux – following the creation of the GNOME Foundation. Tasked with the goal of advancing the availability of the easy-to-use open source desktop environment, the GNOME Foundation will provide organizational, financial and legal support to the GNOME project and help to determine its vision and roadmap. It has already received backing from organisations such as Compaq, Eazel, the Free Software Foundation, Gnumatic, Helix Code, Henzai, Hewlett-Packard, IBM, Object Management Group, Red Hat, Sun Microsystems, TurboLinux and VA Linux.

The foundation will help set the technical direction of the GNOME project, promote the broad adoption of GNOME on Linux and Unix desktops and offer a forum for industry leaders to contribute to GNOME. It will be modelled on the Apache Foundation and will be using the services of CollabNet to help set up the organisation. The GNOME Foundation will have a board of directors elected by the hundreds of volunteer GNOME developers. "The GNOME Foundation marks a major step forward for the GNOME project," said Miguel de Icaza, the project's founder. "As GNOME continues to gain momentum we needed a forum where the GNOME developers and corporate partners could come together to coordinate the continued development of GNOME. The support of these industry leaders will help us to achieve our dream of building a fully free, easy to use desktop environment that will be used by many millions of people."

At the same time, the GNOME project announced five major initiatives aimed at realising that dream. They are:
• To establish the GNOME user environment as the unifying desktop for Linux and Unix;
• To adopt (the open source version of Star Office) technologies for integration into GNOME;
• To integrate the Mozilla browser technology into GNOME;
• To encourage industry leaders to work together to improve the quality, reliability and accessibility of the GNOME user environment;
• To establish the GNOME framework as the standard for next generation Internet access devices.

GNOME – aiming to stamp out rival desktops

The prospects for GNOME have also been boosted by its adoption by Sun Microsystems and Hewlett-Packard as the future default user environment for Solaris and HP-UX. Major IT companies are contributing technology to the project. IBM is providing application development tools that enable development of web-based applications using open web standard languages. Red Hat will provide an object-oriented widget framework, CORBA support for distributed software, layout and rendering of internationalised text and configuration management technology. Sun Microsystems will provide printing, internationalisation and accessibility technology as well as expertise in improving reliability and quality.

Info The GNOME Foundation ■

Dell Linux web server scoops performance prize

Dell achieved record revenue, profits and cash flow results in the second quarter of this year, and Linux can take much of the credit. Worldwide shipments of Dell PowerEdge servers increased at nearly double the industry rate. Revenue from high-density rack-mounted servers grew more than 300 per cent during the quarter and represented 33 per cent of total server sales. Sales of external-storage products were also up 70 per cent. During this quarter Dell introduced the PowerApp.cache and PowerApp.web appliance servers, which are designed to help customers manage online traffic and Web-hosting activities more effectively and efficiently. Shipments of the PowerApp products were about evenly split between Windows 2000 and Red Hat Linux, Dell claims. Dell recently expanded its strategic relationship with Red Hat and has selected its Linux distribution as a strategic operating system that Dell installs and supports worldwide.

Linux also helped Dell walk away with the prize for the fastest web server at the LinuxWorld show held in the US during August. Dell PowerEdge 8450 and Dell PowerEdge 6400 servers running Red Hat Linux 6.2, modified with a beta version of the upcoming 2.4 Linux kernel and the new TUX web server developed at Red Hat, produced the fastest results out of 33 servers measured using the SPECweb99 benchmark. This benchmark simulates how many individual visitors can be connected to a particular Internet or intranet site simultaneously. The full test results can be seen on SPEC's Web site.

The Red Hat Threaded Linux Web Server Add-on (TUX) is a new Web server that has been available from Red Hat since August 2000. TUX exploits kernel-level operation to achieve higher operating efficiency of the TCP/IP stack and provides a configuration and setup interface to user space applications. Major improvements in performance are achieved by using hardware-independent algorithms to improve efficiency of operation and to reduce the amount of work that is done more than once in a typical network application. "The 2.4.x Linux kernel provides nice scalability, as is demonstrated through TUX's impressive SPECweb99 benchmark figures," commented Linus Torvalds. "These benchmark results show how the open source community drives advanced performance computing concepts. TUX pulls together these concepts to take full advantage of the performance enhancements of the 2.4 kernel."

Info SPECweb99 Dell ■



SGI opens Inventor

Now open source developers can afford OpenGL logo certification

SGI has announced a new licensing programme for the OpenGL logo and trademark that will benefit open source developers and vendors. It has also open sourced the code of its Open Inventor 3D graphics application development system. The new OpenGL license will allow open source developers to use the OpenGL logo and trademark free of charge for OpenGL programs that run on open source platforms. Implementations must pass the same strict set of conformance tests as commercially licensed implementations and the results will be reported on SGI's Web site. OpenGL allows developers to incorporate a broad set of rendering, texture mapping, special effects and other visualization functions and provides a graphics pipeline that allows unrestricted access to graphics hardware acceleration. Since its introduction by SGI in 1992, OpenGL has become the industry's most widely used and supported 3D and 2D graphics application programming interface. It is a vendor-neutral, multiplatform graphics standard and runs on all major computing platforms. The development and evolution of the standard is controlled by an independent review board.

Open Inventor is an object-oriented rapid application development toolkit for creating 3D applications. It is platform independent and built on top of OpenGL. It presents a programming model based on a 3D scene database that dramatically simplifies graphics programming. It includes objects such as cubes, polygons, text, materials, cameras, lights, trackballs, handle boxes, 3D viewers, and editors that speed up the development of 3D graphics programs. OpenGL is still a closed source product but earlier this year, SGI released the source code to a sample implementation of the OpenGL API under an open source license. SGI hopes that this will encourage the development of OpenGL applications for Linux.
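Open Inventor itself is a C++ toolkit, but the scene-database idea described above is easy to illustrate. The toy sketch below (in Python, with invented class and node names rather than Inventor's real C++ node classes) shows the core of the programming model: a scene is a tree of nodes, and rendering is a depth-first traversal of that tree.

```python
# Toy scene-graph sketch illustrating the Open Inventor programming model.
# The Node class and node names here are invented for illustration; real
# Inventor nodes are C++ objects such as separators, cameras and shapes.

class Node:
    def __init__(self, name):
        self.name = name
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def traverse(self, depth=0, visit=None):
        """Depth-first walk -- in a real toolkit this is the render pass."""
        if visit:
            visit(self, depth)
        for child in self.children:
            child.traverse(depth + 1, visit)

# Build a tiny scene: a root holding a camera, a light and a grouped shape.
root = Node("root")
root.add(Node("camera"))
root.add(Node("light"))
group = root.add(Node("group"))
group.add(Node("cube"))

rendered = []
root.traverse(visit=lambda n, d: rendered.append("  " * d + n.name))
print("\n".join(rendered))
```

Because the database drives rendering, editors and viewers can manipulate the same tree the application built, which is what makes the model so productive.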

Info SGI SGI open source developer web site ■

VA Linux launches open source developer portal

OSDN – a new portal for open source developers

VA Linux Systems has launched the Open Source Development Network (OSDN). OSDN is a new organisation that aims to integrate the leading Internet sites for Open Source software development, distribution and discussion. At the OSDN web site you can create a personalised login page displaying content from popular sites sponsored by VA Linux Systems. OSDN aims to encourage co-operation between these community sites and will focus on improving the development technology infrastructure of sites such as SourceForge, currently the world's largest ASP for Open Source developers, which hosts more than 7,500 projects and over 50,000 registered developers. It is also hoped to introduce new services such as OSDN-hosted email and instant messaging. OSDN currently provides a message board for Open Source community discussions, including a forum where users can request new features and suggest improvements.

As an organization, the goal of OSDN is to build community programmes to support open source development and outreach programmes to introduce individuals and corporations to Open Source development and the benefits of Open Source solutions. The web sites that form OSDN include SourceForge, QuestionExchange, Freshmeat, (, Slashdot, Geocrawler ( and NewsForge. The OSDN's newest site, NewsForge, provides news articles, commentary pieces, product reviews and press releases relating to Open Source topics.

Info VA Linux Systems: 0870 2412813 Open Source Development Network ■




SuSE to drive Linux port to AMD x86-64 architecture

SuSE Linux will be driving the Linux community's work to port the operating system to AMD's new 64-bit architecture, x86-64, the company has announced. The x86-64 port of Linux will run new 64-bit applications as well as existing 32-bit ix86 Linux applications. The first versions of the development tools (gcc and binutils) have already been written by SuSE developer Jan Hubicka. AMD recently released the specification of its new 64-bit architecture, called x86-64 (codenamed Sledgehammer). This has allowed programmers to begin writing code to run on the architecture. AMD x86-64 technology is designed to enable platform suppliers, developers and corporations to move to 64-bit environments while continuing to run the vast installed base of existing 32-bit applications with high performance. Early participants in the project include developers from AMD, SuSE, CodeSourcery LLC, ACT and PGI. With the opening up of the specification, developers can now participate in the discussions themselves. Details, including a draft version of the System V ABI (Application Binary Interface) for x86-64, can be found on the x86-64 website.

Dirk Hohndel, CTO of SuSE Linux AG, said: "AMD's action supports the open source community and certainly encourages code development. We're looking forward to working together with AMD and the Linux community on an open port of Linux to x86-64." Fred Weber, AMD's Vice President of Engineering, said: "It has been exciting and a great pleasure for AMD to begin working with Linux developers to help refine the x86-64 architecture. Together, we can extend the benefits of today's x86 architecture to meet the needs of tomorrow's demanding applications."

Info x86-64 Linux SuSE Linux: 0208 387 4088 ■

Free Itanium compile farm now ready

SuSE Linux has introduced a free test environment for Intel's new 64-bit architecture. The environment is accessible through the Internet, allowing application developers to test and adapt their Linux applications to the 64-bit processor without needing to possess their own 64-bit hardware. By offering remote access to this compile farm, SuSE and Intel are jointly supporting programmers who wish to port their applications to the 64-bit platform. Interested software developers must register to gain free access to the password-protected iA-64 compile farm. SuSE has also provided a preliminary version of the complete SuSE Linux 7.0 for iA-64 distribution, which is available for free download from its ftp server. SuSE Linux 7.0 for iA-64 is a complete operating system including a full set of programming tools and Internet/intranet applications for Linux.

Info iA-64 compile farm registration SuSE Linux 7.0 for iA-64 ■




Borland JBuilder 4 is released

JBuilder 4 -- a Java development environment written in Java

Inprise/Borland has released the next major version of its enterprise development tool, JBuilder 4, for Linux, Solaris and Windows. The product develops programs for, and is written in, Java 2 version 1.3 and creates programs that run identically on all supported platforms. The development environment has all the features that today's developers expect, including a source code editor, integrated debugger, visual development tools and a project manager. The Enterprise version includes support for Enterprise JavaBeans (EJBs) and distributed application technologies such as CORBA. Major new features in this release include support for team development and integration with CVS. Borland claims that JBuilder 4 is the only development tool that supports EJB 1.1 compliant development on Linux, Solaris and Windows. This makes it possible to rapidly build e-business applications and deploy them to leading Java 2 Enterprise Edition application servers including Borland Application Server and BEA's WebLogic server.

"With the increasing demand for complex e-business applications, companies are seeking solutions to enable them to build and deploy Java applications that are robust and scalable," said Mark Driver, a senior analyst at Gartner Group. "Robust tools are essential components to meet the demands of the Internet economy."

JBuilder 4 Enterprise includes a development license for VisiBroker for Java and Borland Application Server. The Professional and Enterprise versions are available from retailers or Borland's online store. A Foundation edition will also be made available in October for free download from Borland's web site. Borland is also presenting a series of briefings aimed at managers and IT professionals covering Java and related technologies. The locations and dates are:
31 October: City Conference Centre, 80 Coleman Street, London
14 November: Malmaison, Piccadilly, Manchester
23 November: Malmaison, 1 Tower Place, Leith, Edinburgh
28 November: City Conference Centre, 80 Coleman Street, London
More information is available online.

Info Borland (JBuilder) Java seminar information ■

VA Linux and Penguin Computing offer configured-to-order systems

Linux vendors VA Linux Systems and Penguin Computing have both launched web-based services that enable customers to choose the operating system configuration and applications for their systems while online.

VA's version is called Build-to-Order Software (BOSS). It allows customers to select the specific Linux and Open Source software components they want pre-installed at the factory and delivered on their servers, using a point-and-click Web interface. The system currently offers a selection of over 700 software packages, with hundreds more expected to be offered in the coming months. The Build-to-Order Software Selector offers background information on each software package and checks for dependencies among all selected packages to make sure that customers' selected configurations will work seamlessly. Once a customer submits an order, the configuration information is sent directly from the web server to the factory, where the software components are automatically installed onto the systems they have ordered. The pre-configured servers can then be delivered directly to the customers' data centre. Customers' configurations can be saved online for future purchases. The software selections are fully supported by VA Linux Systems' Total Linux Coverage (TLC) support package.

Select what packages you want installed before you buy

VA Linux Systems has plans to upgrade BOSS by offering a choice of recommended configurations for specific applications such as web servers (optimized for various types of traffic), application servers, load-balancing servers, database servers (optimized for Oracle or MySQL), caching servers, firewall servers, file servers, mail servers, print servers and others. These recommended solutions are intended to make it easy for customers to navigate through thousands of open source software packages by presenting them with "best practices" for each type of application that can be further customized if necessary.

Penguin Computing's online configuration system goes by the somewhat contrived acronym RAPTOR (RAPid configure-TO-ordeR). It allows purchasers to specify hardware, software and storage configurations, including custom configuration of disk partitions and RAID storage. The company claims that most customers will be able to use their systems upon delivery without having to reinstall their system software first. RAPTOR offers tandem configuration of hardware, software and storage. The selection of any hardware option automatically gives customers a list of compatible software choices, and vice versa. For example, when buyers select different hard drive configurations, their disk partition options change according to the selection. Penguin Computing claims that it is the first company to allow customers to configure custom disk partitions over the Internet. Additional features planned for RAPTOR include:
• A Web-based configurator and personalization engine;
• A RAID configurator that automatically selects hardware based on user requirements;
• A dynamic disk partition configurator that adapts to the selected hardware;
• A software configurator with nested levels of complexity;
• The ability to do custom application loads;
• An automated software loading and configuration engine with software dependency checking and revision control;
• A mechanism for configuration recovery.
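The dependency checking both configurators describe boils down to computing the transitive closure of a dependency graph over the selected packages. A minimal sketch of the idea (the package names and dependency table below are invented for illustration, not real BOSS or RAPTOR metadata):

```python
# Minimal dependency resolver: given the packages a customer selected and a
# dependency table, return the full set that must be installed. The table
# below is invented for illustration, not real package metadata.

DEPENDS = {
    "apache": ["openssl"],
    "mysql": ["zlib"],
    "php": ["apache", "mysql"],
    "openssl": [],
    "zlib": [],
}

def resolve(selected):
    """Walk the graph, collecting every transitive dependency exactly once."""
    needed = set()
    queue = list(selected)
    while queue:
        pkg = queue.pop()
        if pkg in needed:
            continue
        needed.add(pkg)
        queue.extend(DEPENDS.get(pkg, []))
    return needed

print(sorted(resolve(["php"])))
```

A production configurator would also check version constraints and conflicts between packages, but the closure walk is the core of making a selection "work seamlessly".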

Info VA Linux Systems 0870 2412813 Penguin Computing +1 888 736 4846 ■



New Alpha motherboard makes Linux go faster

If Linux doesn't run fast enough for you, here's a solution. Alpha Processor has developed a new UP2000+ dual processor motherboard which the company claims makes it easier than ever to build Alpha servers. The board has an ATX Extended form factor, allowing it to fit standard server enclosures as well as the API PowerRAC Chassis 320. It is also compatible with industry standard power supplies, memory and other components. The UP2000+ accommodates up to two Alpha 21264 processors with speeds of up to 833MHz and up to 8MB of level two (L2) cache each. The new motherboard is the first to support Double Data Rate (DDR) cache, giving up to 8.8GB/s of L2 cache bandwidth and 2.6GB/s of memory bandwidth.

API claims that the UP2000+ is the only server platform currently available that takes full advantage of the Linux 64-bit architecture, making it the ideal foundation for high performance computing (including Beowulf) clusters, Internet solutions and render farm applications, as well as the basis for high-end state-of-the-art workstations.

Info Alpha Processor Inc. ■

Build your own server using Alpha's UP2000+ motherboard

Motorola joins with Red Hat to offer HA Linux

Motorola Computer Group and Red Hat have teamed up to produce a version of Linux designed for telecoms applications that need to run 24 hours a day, 365 days a year. Under the agreement, Motorola will package and market the Red Hat Linux operating system together with Motorola's Advanced High-Availability Software for Linux (HA-Linux), which will be shipped with Motorola's high-availability embedded computing platforms. This means that the thousands of software tools and application enablers supported on Red Hat Linux are now available to telecoms OEMs who are building wireless, wireline and Internet infrastructure applications.

Motorola's HA-Linux, integrated with Red Hat Linux 6.2, provides telecoms OEMs with the features they need to build carrier-grade applications designed for 99.999% availability (equivalent to less than five minutes of downtime per year). Features and benefits of HA-Linux include: host CPU multi-stage switchover, allowing IP switchover in less than one second and shelf switchover within 45 seconds; hot-swap of all components, including processors, I/O controllers, power modules and fans; network management, allowing remote monitoring and operation of systems from the network operations centre; and management of telecoms alarms and in/out-of-service LEDs. The product is now shipping. US list prices range from $449 to $748 depending on configuration.
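The five-nines figure is easy to check with a little arithmetic. A quick sketch (in Python, purely for illustration; this is not part of Motorola's product):

```python
# Downtime implied by a given availability level, here the "five nines"
# (99.999%) carrier-grade target mentioned above.

def downtime_minutes_per_year(availability: float) -> float:
    """Minutes of downtime per non-leap year at the given availability."""
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes
    return (1.0 - availability) * minutes_per_year

print(round(downtime_minutes_per_year(0.99999), 2))  # roughly 5.26 minutes
```

Five nines really does work out to just over five minutes a year, as the article states.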

Info Motorola Computer Group ■


2 · 2000 LINUX MAGAZINE 13



Dell offers new network storage devices

Dell has recently launched a range of new network storage devices under the PowerVault name. For enterprise users the Dell PowerVault 530F provides a storage capacity of up to 17 terabytes. Smaller businesses can expand the storage capacity of their network with the PowerVault 705N, which has a total capacity of 120GB, to be increased to 240GB later this year. Priced from £32,000, the PowerVault 530F costs around a quarter to a half of the price of comparable systems from other vendors and takes much less time to configure, set up and use, according to Dell. It uses software technology from StorageApps, a leading provider of storage area network (SAN) applications and appliances. By combining hardware and software into a pre-configured appliance, the PowerVault 530F lowers configuration costs and enables enterprise customers to implement a SAN quickly in a variety of environments. By combining the PowerVault 530F with PowerVault fibre-channel switches, a SAN configuration of up to 20 servers and four storage arrays with a total capacity of over 17 terabytes can be achieved. Implementation of the PowerVault 530F is facilitated by Dell OpenManage, a collection of tools and technologies designed to ease the installation and administration of Dell systems.

The PowerVault 530F boasts advanced disaster recovery features including:
• Remote Mirroring: supports real-time disaster recovery for fast access and recovery of business-critical data, allowing users to develop a disaster recovery configuration to ensure safe access to data.
• Point-in-Time Copy: provides a more robust disaster recovery solution by taking a "snapshot" of the file pointers that represent the data and copying each pointer to another location, a space-saving process that is significantly faster than copying the data in its entirety.
• Three-Way Mirroring: increases the availability and disaster recovery features for backup, data warehouse loading and application testing by providing a third copy of data.
• LUN Masking/LUN Mapping: creates a pool of storage resources rather than designating specific storage for each server, enabling customers using Windows-based environments to allocate their storage resources more efficiently without reconfiguring the entire SAN.
• High Availability: provides system and data availability features for a variety of customer configurations through fully redundant components, dual data paths with automatic failover, and clustering support.
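The pointer-copying idea behind Point-in-Time Copy can be sketched in a few lines of Python. This is a toy model of the general technique, not Dell's or StorageApps' implementation; all names are made up:

```python
# Toy model of a pointer-based snapshot: the snapshot copies only the
# file-to-block pointers, not the data blocks themselves.

data_blocks = {"b1": b"payload-1", "b2": b"payload-2"}
volume = {"fileA": "b1", "fileB": "b2"}   # live file -> block pointers

snapshot = dict(volume)                   # snapshot = copy of pointers only

# A later write allocates a new block and redirects the live pointer...
data_blocks["b3"] = b"payload-1-modified"
volume["fileA"] = "b3"

# ...while the snapshot still resolves to the original data.
print(data_blocks[snapshot["fileA"]])     # b'payload-1'
print(data_blocks[volume["fileA"]])       # b'payload-1-modified'
```

Because only pointers are duplicated, taking the snapshot costs almost nothing regardless of how much data the volume holds, which is why it is so much faster than a full copy.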

At the opposite end of the scale, the PowerVault 705N starts at £1,829 and is a "1U" format device designed to fit in a standard server rack or stand on a tabletop. The product connects directly to the local area network (LAN) and not to any specific server, so users can access data directly even if the primary server goes down. Dell claims that the addition of NAS servers can increase the performance of general-purpose servers by offloading the file-serving function. The product features Web-based configuration and management tools enabling customers with no previous IT experience to install and manage the server using a standard Web browser. On installation, the server automatically responds and connects to all client protocols including TCP/IP, IPX, NetBEUI and AppleTalk. As supplied, it supports Windows NT, Novell NetWare, Unix, Linux and Macintosh environments with no special configuration required. As shipped, RAID 5 is enabled across the four 30GB drives. The server can also be configured for high-performance RAID 0 (striping to one large virtual drive) or instant-backup RAID 1 (internal disk mirroring).
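The capacity trade-off between the three RAID modes on the 705N's four 30GB drives can be illustrated with a small calculation. This is our own sketch using the textbook definitions of the RAID levels, not anything from Dell's documentation:

```python
def usable_gb(disks: int, disk_gb: int, level: int) -> int:
    """Usable capacity for simple RAID levels (textbook definitions)."""
    if level == 0:                      # striping: all capacity usable
        return disks * disk_gb
    if level == 1:                      # mirroring: every block stored twice
        return disks * disk_gb // 2
    if level == 5:                      # striping, one disk's worth of parity
        return (disks - 1) * disk_gb
    raise ValueError("unsupported RAID level")

for level in (0, 1, 5):
    print(f"RAID {level}: {usable_gb(4, 30, level)}GB usable")
# RAID 0: 120GB, RAID 1: 60GB, RAID 5: 90GB
```

RAID 0 delivers the full 120GB but no redundancy, RAID 1 halves the capacity for full mirroring, and the factory-default RAID 5 gives 90GB while surviving the loss of any one drive.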

Info Dell 0870 9075863 ■

Acrylis offers software management for personal users

Terabytes of storage – Dell’s new PowerVault

Acrylis Inc. has launched a version of its WhatifLinux software management system for personal users with standalone computers. WhatifLinux Personal Edition is an Internet-delivered service that monitors and manages the open source software assets running on a system. It provides users with immediate access to software updates, security alerts, patches and the latest open source software information right from their Web browser. It also provides a decision support tool that allows the user to create "What If" scenarios to determine the impact of installing or uninstalling software on their system, including the effects of dependencies upon other software components. The service costs $49 per year and can be purchased online; a Quick Guide to the system in Adobe Acrobat format is also available for download.

The key components of WhatifLinux Personal Edition are:
• Knowledge Base – This contains information such as software dependencies and conflicts, alerts and problems, plus customer usage information. The software components being monitored can include system software, utility software and application software. Customer information is collected from the agent running on the customer's system. This enables WhatifLinux to provide alert information directly to those customers who are actively using the affected software.
• Decision Support Tool – This tool provides the user with a recursive analysis of the impact of any software


changes, installs and uninstalls. The user can link directly to the software provider for a full description of the alert, patch or update. Software patches and updates can be applied immediately, saving time and money.
• Intelligent Agent – This is a small Java-based application that resides on the system being monitored. It takes an inventory of all the software packages loaded on the system and regularly checks in with the WhatifLinux Knowledge Base for news and updates specific to that system, no matter which Linux distribution is installed.

"Hundreds of thousands of early adopters are relying on Linux as their desktop or server operating system of choice," said Reg Broughton, CEO of Acrylis. "But for these users, searching for the tools, tips, updates, patches, dependencies and analysis information is a time-consuming process. Once the user has the information, evaluating the impact of a software change is a key decision. With WhatifLinux Personal Edition our customers are given this key information proactively over the Internet, and the decision support tool makes software update tasks fast and less risky."
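The recursive impact analysis described above can be modelled as a walk over a reverse-dependency graph. The sketch below is our own Python illustration of the idea; the package names are invented and this is not Acrylis code:

```python
# Walk a reverse-dependency graph to find everything affected by
# removing one package. Graph and package names are hypothetical.

def impact(pkg, reverse_deps, seen=None):
    """Return the set of packages that directly or indirectly depend on pkg."""
    if seen is None:
        seen = set()
    for dependant in reverse_deps.get(pkg, ()):
        if dependant not in seen:
            seen.add(dependant)
            impact(dependant, reverse_deps, seen)
    return seen

# package -> packages that depend on it
reverse_deps = {
    "libjpeg": ["gimp", "netscape"],
    "gtk":     ["gimp"],
    "gimp":    ["gimp-plugins"],
}
print(sorted(impact("libjpeg", reverse_deps)))
# ['gimp', 'gimp-plugins', 'netscape']
```

Removing the hypothetical libjpeg package would break not only its direct dependants but also, transitively, gimp-plugins; exactly the kind of "What If" answer the tool is meant to give before anything is uninstalled.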

Info WhatifLinux ■



Linux Beer Hike 2000


Info Linux Beer Hike home page. The author, Richard Ibbotson, is chairman of the Sheffield Linux User Group. ■

The annual Linux Beer Hike this year took place in Coniston in the Lake District from 29th July to 6th August. As usual, it was a very jolly occasion: 150 people attended and quite a lot of beer and Linux software was consumed. At the end no-one was quite sure which of the two was more important. Three days of sun more than made up for the four days of hacking away at a keyboard in Coniston village hall. The best walk of the week was the Old Man of Coniston, and the delights of the Lakeland scenery were as much in evidence as the Linux users who surrounded me and engaged in long chats about the latest stuff.

The beer hikers came from all walks of life and many countries. Some were IT professionals who wanted to know more about Linux. Others were scientists and engineers who use Linux for their work. Some were home users who just wanted to live a bit. The general idea of the event was that if you wanted a structured week you could have that, with tutorials and other lectures plus guided walks led by people who knew the hills around the Lake District. If you just wanted to wander between talks about space physics or solid state materials and the local pub, you could do that as well. Plenty of choice for everyone!

There were many good lectures. Jenny Bailey gave us a really good talk about SETI and radio astronomy with a bias towards Linux. Jenny says that she would like to get more people and institutions to point more dishes towards the skies so that more useful data can be collected. She thinks that Linux is a useful tool for getting through to the right sort of technologists who might be interested in SETI-related projects such as radio astronomy and


Beowulf. If you want to get in touch with Jenny then please do send her an email. The lectures about Apache and Linux security concepts were also very good, and certainly the kind of thing that would make me want to go to the next beer hike.

We made an attempt at producing a Beowulf cluster. This of course caused a few problems along the way. One of these was the question of which distribution to use; this was followed by the usual hardware problems and finally the end bit where everyone downed many pints of beer. We used Debian for the cluster, with laptops, 486s and Sparcs. Well worth a pint. There was a question and answer session at the end of the week which involved almost 40 minutes of discussion about our difficulties with hardware and software that just refused to work. It more than made up for the technical difficulties and we were able to laugh a little before wandering along the road for another beer. At this point a reporter from the Wall Street Journal arrived and interviewed a few people for a report about the beer hike, which was published shortly afterwards.

So where will the event be held next year? The jury is still out on this one and none of us is too sure; suggestions made so far are Holland and Belgium. Wherever it is, we sincerely hope that we will see you there, and perhaps we might even buy each other a beer or two. If you want to know more you can visit the Linux Beer Hike web site and subscribe to the discussion list, whereupon you will find that many of us are still ranting on about all sorts of things. ■



The merger of Caldera and SCO


Caldera is about to take over the core products of the traditional Unix producer, the Santa Cruz Operation, better known as SCO. The Unix systems owned by SCO, Openserver and Unixware, will continue to be supported for the time being, but will receive a Linux personality. SCO and Caldera employees and business partners first learnt of the details of the future strategy at the Forum 2000 in Santa Cruz.

For fourteen years Unix developers, marketing personnel and SCO partners have been meeting at the Santa Cruz university campus – which can only be described as idyllic – for a forum: a three day long conference in an informal atmosphere. This time the participants were particularly on tenterhooks because Caldera had taken over two out of three of the traditional business divisions, the server software division and the professional service division. SCO thus becomes history, in fact doubly so. SCO, as it used to be, no longer exists. The divisions for Unix software and for service will become an integral part of Caldera. Left behind is a company called “Tarantella” which will continue with the development and marketing of “web enabling software”. Tarantella will receive 28 per cent of the Caldera shares and two seats on the board of directors. One of these will be occupied by the present CEO and president of SCO, Unix veteran, Doug Michels, who will become CEO of Tarantella. Doug Michels gave the keynote speech at the SCO/Caldera function. In order to sweeten the takeover for SCO, Ray Noorda's financial conglomerate, Canopy Group, who own the greater part of shares in Caldera, has

provided a loan of 18 million dollars. Tarantella will furthermore receive the income from the Open Server business and the rights to the system source code.

SCO is one of the few businesses that have written Unix history. The company was founded by Doug Michels and his father Larry Michels in 1979. Since 1983 they have had only one main focus: Unix for Intel computers. SCO XENIX System V was the first commercial Unix for PCs with 8086 and 8088 processors. From that date progress was straightforward: Unix for Intel's 286 and 386 computers, stock market entry, the build-up of a global sales network. 1995 was a milestone, with the acquisition of the whole Novell Unix line of business; Novell had acquired the "basic Unix" some years before from AT&T. With this, SCO also took over Novell's Unixware operating system. Since that time, SCO has marketed Unix systems for Intel computers under two different brand names: "Open Server", SCO's original Unix, serves the lower segment, while Unixware is reserved for the upper echelons. To these, various "value added" products were added, which often had only a short life cycle. So what exactly does SCO bring to the marriage with Caldera?



Mixed dowry

One of the most valuable parts of the business owned by SCO has nothing to do with technology: it is the SCO sales organisation, developed over the years, together with its extremely loyal worldwide customer base. No less than 38 per cent of all Unix computers installed worldwide are said to run an SCO system. Development of business with new clients has to date, however, been less than satisfactory, especially since the Y2K problem has had its day as a means of increasing turnover. As a result, Caldera receives in one fell swoop a client base that is incredible in comparison with all other Linux businesses. In order to capitalise on this, certain marketing efforts are unavoidable. SCO is working on the assumption that the classic resale outlets are increasingly changing into application service providers (ASPs). These will be offered, under the slogan "Open Internet Platform", a solution-oriented approach in which one of the existing systems is chosen as appropriate: Open Server, Linux, Unixware or, in future, the 64-bit Unix AIX5L which has emerged from the Monterey project.

Old wine in new casks - the “Linux Personality”

Can you feel the tension? Yellow-dressed SCO employees listening to Ransom Love of Caldera.

Unixware and SCO Open Server will be given a "Linux Kernel Personality" (LKP), which in fact means that these systems will be fully compatible with Linux. Linux RPM packages can be installed on Open Server and Unixware, and the programs run within an entirely Linux set-up without any loss of performance. SCO already delivers a Linux emulator (lxrun) with the current Unixware variants; LKP will, however, go far beyond an emulator. There are two reasons for keeping the Unix systems in the medium term. One is the wide user base, which cannot be coerced to migrate in its entirety towards Linux. The other, seen from a technological viewpoint, is that Unixware still has a lot to be said for it. Above all, its ability to scale up to larger systems is still somewhat better than that of Linux; in particular, when dealing with threads, Linux lags behind. Efficient thread management is necessary in order to scale applications properly to multiprocessor machines. General statements about the performance of Linux applications on Unixware with an overlying "Linux personality" are currently hard to come by. According to Jürgen Kienhöfer, who is responsible for the development of the kernel personality, threaded applications such as the Lotus Domino Server perform noticeably better than the pure Linux solution.

The sixty-four-thousand-dollar question - what's your attitude towards a licence?

Caldera's CEO, Ransom Love, has given assurances on many occasions that Unixware is to be placed under an open source compatible licence; whether this would be the GPL would be decided at a later date, however. The "Linux Kernel Personality" will probably have been available for a long time before this issue is resolved. According to Jürgen Kienhöfer, SCO has safeguarded itself as a precaution against accusations of violating the GPL by reprogramming the kernel interface in a "clean room" implementation, without utilising Linux code. It is noteworthy that the whole proc file system has been implemented as in Linux, so that applications can have access to it. The rights of third parties are the main obstacle standing in the way of the plan to place the whole Unixware kernel under an open source licence: there is a great deal of third-party "intellectual property" in Unixware, from companies such as Novell and Compaq. The technically mature Veritas journalling file system is the most likely to cause problems. Ransom Love has promised, however, to reprogram those parts of Unixware which violate the rights of third parties.

Some are more open than others

In the upper echelons of Caldera's future range of products "the real Unix" will play a significant role, but at the lower end things look quite different. The managers have not yet tired of affirming the continuing existence of SCO Open Server, but the positioning of the present system in the low-end server market, into which a lot of effort has gone, seems to have been badly constructed and little more than an excuse to keep the sales people and SCO partners in a job. The roadmap which has been introduced for this product, with the possibility of "leapfrogging" from one version to another – version 5.9.5 via 5.9.5a up to 5.9.5c by the middle of 2002 – doesn't really seem visionary, so we can expect



A relaxed Doug Michels giving the keynote speech.

that this sales channel will be utilised intensively for Caldera's Linux OpenServer.

Two birds with one stone - nonstop clustering

One of the most recently developed SCO gems is the clustering software Nonstop Clustering (NSC). This is where the server division tries to kill two birds with one stone: the software serves on the one hand to improve performance, but on the other also offers failover in case of system failure. NSC is at present still an independent product which SCO offers as an option with Unixware, but it should be fully integrated at a later date.

The former Monterey project

The development of a high-performance 64-bit operating system for Intel's Itanium was (or is?) a joint project by IBM, Intel and SCO. The massive Linux initiatives by IBM were above all the reason why IT prophets found the future of Monterey a popular subject. Now at least something can be revealed: the system will be called AIX5L and will therefore appear as a successor to IBM's own Unix. IBM will be the first to derive the main benefit from it, since the only architecture currently available on which AIX5L can be installed is based on the PowerPC. Furthermore, the L in the new name stands for Linux (no joke). IBM is therefore making it very clear to the outside world that this system, produced with such immense development expenditure, is really only a temporary solution until Linux is "enterprise ready". As a result, it was difficult for many SCO managers and developers to maintain the required aplomb and predict a great future for AIX5L. Since IBM has borne the main cost of the development, it is obviously a case of he who pays the piper calls the tune. Furthermore, we should note that hardware producers like to present their Itanium server prototypes running Linux: Hewlett-Packard did so at the Linux Day in Stuttgart, and now Compaq has done the same at the SCO Forum in Santa Cruz.

Cultural war - open source and SCO, will it work?

The failed merger between Dresdner Bank and Deutsche Bank has demonstrated once again that company mergers only work if the cultures of both businesses are compatible. Banks, whether their colours be green or blue, are certainly much closer culturally than Linux people and the Unix freaks of the seventies. Still, a mutual curiosity pervaded the forum, and it was amusing to note in places how Caldera and SCO employees sized each other up. It was, however, during the keynote speech by Ransom Love in the misty quarry on the campus grounds that it became almost physically perceptible how many of the SCO people inwardly breathed a sigh of relief. This was no wild freak before them about to push ahead with the sell-out of a traditional business, but someone who valued profitability and continuous development, and who understood how to convince his audience that he thoroughly understands their reservations and takes them seriously. Experts on the business affirmed that Doug Michels, the CEO of SCO still in office, has not looked so relaxed for a long time.

So are we now all in complete harmony? Not quite. It is in sales and marketing that the prejudices against free software run deep. An American SCO sales partner protested in a session on open source software that, no matter how good Linux is from a technological point of view, if software is free 90 per cent of his clients would not accept it. A German SCO manager was of the opinion that Unixware would never be open source; nothing as dreadful as that could ever happen. Time, however, heals many wounds, and neither will want to be reminded of their utterances next year. The fact is that a Linux distributor now has direct access to the sales channels of an old and well-established Unix business, and this cannot be overestimated. One can only hope that Caldera will handle the Unixware legacy in a sensible manner – for its own benefit but also for the benefit of free software.

The Forum - another heirloom

Amongst managers and developers who are in any way linked with SCO, the SCO Forum in Santa Cruz enjoys a legendary reputation as an informal meeting place. If Caldera continues this tradition, it will create an opportunity for classical Unix people to meet the Linux scene, so that both may learn from each other. It is said that, once upon a time, anyone spotted wearing a tie had it cut off. Today this is no longer necessary – no-one wears one any more. ■

“Forum 2000” without ties: Ransom Love in sporty guise.



SUSE 7.0

SuSE Linux 7.0 Personal and Professional Edition


With version seven SuSE presents two versions of its distribution for private users and professionals. We took a critical look at both shortly after they appeared.

Fig. 1: Basic configuration of the system presents no difficulty.

The two distributions differ in both price and scope. The Personal Edition with three CDs costs £29.00, but SuSE is asking £20 more for the Professional Edition which boasts six CDs and one DVD. Everything else is pretty much the same in both versions: a quick install manual, manuals covering configuration and software, a few chameleon stickers and a Tux badge. The Professional Edition also


includes the traditional SuSE "know-how" manual and entitles you to 90 days installation support instead of 60.

Included software

SuSE Linux has gained a reputation as one of the largest and most comprehensive distributions. The Personal Edition now contains just three CDs, and to achieve this some packages have been removed. Despite this, the source code of all the programs is still supplied: two CDs contain binary programs and one holds the source files. However, the Personal Edition is no mere half portion. If you don't need things like archie, nntpd, five different FTP servers and the complete KDE development environment, it is possible to get by quite well with it. We only missed the Apache modules for PHP, SSL and Perl. Contrary to rumours that have repeatedly cropped up on IRC, the Personal Edition does (of course) contain C and C++ compilers, Perl and Tcl/Tk, LaTeX, as well as frequently used daemons like inetd, the network server kit, portmapd, Samba server and client, and YP server and client. KDE 2 (Beta 2) is also present in both the Personal and Professional Editions, despite what it says on the box.

XFree86 doesn't fall short either. SuSE – like Mandrake – offers a dual installation: you select using SaX or SaX2 whether XFree86 3.3.6 or 4.0 is to be used. Both versions are installed on the computer and many of the libraries and other files are shared. For XFree86 4.0, SuSE supplies hardware-accelerated drivers for nVidia chipsets and is the only distribution so far to include a special X server plus a kernel module for the Diamond Fire GL1. Installing this exotic graphics card doesn't even require manual work: YaST 2 recognised the card and, on request, even installed the 3D hardware acceleration. For manual installation you must use a special SaX2 option, but the required files and documentation are all available. Thus, for the first time, we succeeded in getting a Diamond Fire GL1 with 3D hardware acceleration working out of the box.

Graphical set-up

Whoever expects the two versions to use installation procedures suited to their respective target groups will be disappointed: the installation is absolutely identical (apart from the differences in included packages), so beginners will cope well with the Professional Edition too. A new installation on our test computer, a Pentium II 400MHz with 128MB of RAM, ran without a glitch and coped with graphics cards such as the Creative Labs Riva TNT2 Ultra, Matrox G400 DH-MAX, ATI Rage 128 Pro Fury and Diamond Fire GL1 without difficulty. The keyboard and mouse connected to the USB port were also easily recognised and incorporated, despite USB support being switched off in the BIOS. SuSE has also thought about the visually impaired: a Braille display is supported in the installation.

The installer's capabilities are a bit lacking when it comes to partitioning. Existing Windows partitions cannot be reduced in size during the installation, although GNU Parted, for example, manages it perfectly, and competitors like Mandrake and Caldera also provide this facility. For the Personal Edition – the most likely choice of someone using Windows and trying Linux for the first time – this would be very useful. It is to be hoped that a future version will employ GNU Parted and perhaps make it easier to operate via a nicer front-end. We were pleased to see that a beginner cannot actually select an unusable partitioning or formatting configuration. If you don't create a swap partition, YaST 2 points out that this is unusual (if possible at all). If you want to set up "/" as ReiserFS without having a separate /boot partition formatted with ext2, YaST 2 will not even allow it. The only disappointing aspect is that when the option to automatically partition the entire hard disk is chosen, no ReiserFS option is offered: all data partitions are formatted as ext2. The size of the standard installation including StarOffice amounts to around 1.1GB.
However, you cannot avoid inserting four of the six CDs of the Professional Edition, even though in the Personal Edition the whole thing fitted on to two.
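A minimal scheme that satisfies the rules described above (a swap partition, and a separate ext2 /boot when the root file system is ReiserFS) might look like this. The device names and sizes are illustrative examples of ours, not SuSE recommendations:

```
/dev/hda1    50MB   ext2      /boot   (lilo boots from an ext2 partition)
/dev/hda2   128MB   swap
/dev/hda3   rest    reiserfs  /
```

With this layout, the kernel and boot loader files live on the small ext2 partition, while everything else benefits from the journalling ReiserFS root.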


Immediately after the installation, XFree86 is set up – normally version 3.3.6, except in the case of graphics cards that are only supported by version 4.0; in our test this included the ATI Rage 128 Pro Fury as well as the Diamond Fire GL1. The basic configuration (Fig. 1) of the sound card, ISDN, network card and NFS devices all went smoothly and the system was ready for operation in the twinkling of an eye. The only real shortcoming is the printer set-up: as many printers as you want can be set up without difficulty (in the case of network printers you can even have the connection tested), but having done so you cannot get rid of them again. There isn't even a list of the printers already installed. Without editing /etc/printcap manually and using the APS filter configuration program there is no way round this problem. We expected to be able to get a list of installed printers, as this deficiency was pointed out when version 6.4 was tested. However, SuSE hasn't been inactive in the printer field: the selection of supported printers has been considerably extended.
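Removing a printer by hand means deleting its stanza from /etc/printcap. A typical entry for a queue on the first parallel port looks something like the following; the queue names and spool path here are illustrative, and on SuSE the actual entries are generated by the APS filter set-up:

```
# /etc/printcap - one stanza per queue; delete the stanza to remove the queue
lp|hplj:\
        :lp=/dev/lp0:\
        :sd=/var/spool/lpd/lp:\
        :sh:
```

Each stanza names the queue (with aliases separated by |), the device (lp), the spool directory (sd) and options such as sh (suppress banner headers).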

YaST 1

The console-based YaST version 1 is preferred to the graphical version by many SuSE experts, and it is not obsolete: user and group administration, for example, as well as some basic system settings, have still not been integrated into YaST 2. YaST 1 also lets you activate many settings that would otherwise have to be attended to after installation. Selecting manual at the boot prompt of the first CD, as in version 6.4, gets you into the old YaST. Anyone attempting to set up an old PC and needing to boot from a diskette would be advised to go and get a coffee at this point – getting to the start of the installation on a Pentium 133 with 40MB of RAM took a whole seven minutes. The same applies to the start-up of the rescue system. YaST 2 requires a minimum of 64MB of RAM, so owners of older computers in particular will have to use the old-style interface. What you get is a less easy but considerably more flexible installation. In YaST 1 neither sound nor network cards are automatically recognised, so you must already know what hardware is present in order to get the network to run. The name server, too, has to be entered manually. If you do not need to set up the network, ISDN or sound card during or shortly after the installation, you can still use the new YaST: SuSE now offers a console interface for YaST 2. A pity that we searched in vain for this tip in all the manuals. When partitioning the disk, more care is necessary than with YaST 2, as no check is made as to whether you will be able to boot from your system later. We were able to install without difficulty on a hard disk with just a single ReiserFS partition; only later did we find that lilo would not install. The Personal Edition provides no help on the



[top, left] Fig. 2: The configuration suggestion is normally usable; only in special cases do you need the selection menu. [top, right] Fig. 3: In most cases the simple configuration is sufficient

Fig. 5: Several resolutions and higher colour depths are not a problem

Fig. 4: The graphics card is clearly displayed with all the details.

subject of partitioning: it is assumed throughout the documentation that you are using YaST 2. We found a pitfall in installation via YaST 1. The USB mouse on the test system had been automatically recognised by YaST 2, but configuring GPM in YaST 1 proved to be a problem. If you choose the wrong mouse type it can even happen that the keyboard no longer works and you have to reactivate the computer by resetting it. The basic modules for USB support were loaded, but the modules mousedev and usbmouse were missing. We recommend that USB mouse owners don't install GPM initially but do it later – the protocol is PS/2 and as the mouse device you enter /dev/input/mice. Another problem occurred when manually calling /sbin/init.d/usb: modprobe constantly complained that the file /etc/modules.dep was too old, but regenerating modules.dep did not bring a solution. The blame lay with an /etc/modules.conf dated one hour into the future: a touch /etc/modules.conf and a subsequent depmod -a provided the cure.

The computer with the full installation held a nasty surprise: the update proceeded at a snail's pace. For the first CD alone, the 400MHz K6-II with 128MB of RAM took a full six and a half hours; after the second CD and a total of two days we stopped the update. The culprit was found quickly. Before the update the RPM database was a proud 110MB; after the third CD it had grown to a full 230MB! For the installation of each individual package the RPM database has to be opened, changed and closed again, and at this immense size, in view of the comparatively puny memory, that takes forever. In SuSE's defence it should be said that this is a fundamental problem of the RPM database, and one that can occur with any other RPM-based distribution. Surprisingly, the system has so far been running smoothly despite the interrupted installation. The update of the notebook initially went unremarkably. The system rebooted after the final restart, but didn't get as far as the graphical login. We found that the old XF86Config (version 3.3.5) had been removed from /etc and placed in /tmp, and the link to the X server had been deleted from /var/X11R6/bin. That was unnecessary, as the configuration could continue to be used faultlessly under XFree86 3.3.6 once the X server had been relinked and the configuration file put back into /etc. Another problem concerns sendmail. Although we were informed during the update that the /etc/ would not be changed and the new file would be written as /etc/, we discovered a completely newly generated configuration file – the old one had been moved to the update backup. Moreover, sendmail could not be

Update of 6.4 We also assessed the update of the precursor version to SuSE Linux 7.0 Professional. For this purpose a PC with a full installation of SuSE Linux 6.4 as well as a notebook with a modified 6.4 system were brought into the laboratory. The update takes place exclusively using YaST 1. At the start of the update a whole raft of unfulfilled dependencies and packages no longer present awaited us. This can be traced back to the partial splitting of the packages in version 7. 24 LINUX MAGAZINE 2 · 2000

SUSE 7.0

operated with it. What triggered this dubious action has not yet come to light: we suspect SuSEconfig. A further – but very nasty – trap was sprung by the new mouse. At the time of the update we replaced the PS/2 mouse that had been used under 6.4 with a USB mouse. The result was that neither SaX nor SaX2 could be started: the system hung every time and had to be reset. The remedy is to specify the mouse type explicitly: SaX -d /dev/input/mice SaX2 -t ps2 -n /dev/input/mice
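The bloated RPM database encountered during the update can sometimes be compacted with rpm's own maintenance option (a sketch; how much space it actually reclaims depends on the rpm version in use):

```shell
# Rebuild the RPM database indexes; on older rpm versions this can
# reclaim space left behind after a large update
rpm --rebuilddb

# Inspect the result
ls -lh /var/lib/rpm/
```

Run this as root, and only when no other rpm process is active.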

XFree86 configuration

SuSE Linux 7.0 contains both XFree86 3.3.6 and XFree86 4.0. To set up XFree86, use SaX for 3.3.6 and SaX2 for version 4.0. SaX2 is definitely easier to use than its predecessor, and not just as regards the design of the interface: hardware recognition has been improved and most devices are recognised automatically. Notebook owners should pay attention, though. On one test machine SaX2 wrongly recognised the PCMCIA bridge as the first graphics card and the actual graphics controller as the second. This is quite interesting, because lspci correctly determined the type of the supposedly first card as "PCMCIA Bridge". To get round this, first call SaX2 with the parameter "-p", whereupon the graphics cards found will be listed; then select graphics chip no. 1 from the list with the parameter "-c 1". At the start SaX2 offers you a suggested configuration for your system (Fig. 2), which you can simply accept if you agree with the resolution and refresh rate. With that, everything is finished. If you want a different setting, start from the entry window of SaX2. As Fig. 3 shows, there is a simple mode which only requires you to select the graphics card (Fig. 4) and monitor data (Fig. 5), and a complex mode that also lets you select the mouse, keyboard and system paths. The X server then starts and you can make changes to the positioning and monitor frequencies.
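The notebook workaround described above boils down to two commands (the exact option spellings are assumptions about this version of SaX2; check SaX2's own help output before relying on them):

```shell
SaX2 -p      # print the list of graphics cards SaX2 has detected
SaX2 -c 1    # configure using card no. 1 from that list,
             # skipping the wrongly detected PCMCIA bridge
```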


The Tour is an excellently structured HTML document containing first steps, a precise description of the most important commands complete with screenshots, and many useful tips. We can recommend these instructions to everyone – particularly SuSE's competitors – as they make this distribution very beginner-friendly. The SuSE Tour cannot quite deny its German roots, though: SuSE forgot to replace one German screenshot. StarOffice is already on the hard disk in the server installation version, so each user who wants it only needs to complete the StarOffice workstation installation to create his own settings and folders. If users install the full version instead, a massive 250MB per user will be deposited in the home directory: reason enough for an administrator with several users to point this out.

Info SuSE Homepage: Revised packages: date/7.0/ ■

Conclusion

Whether to choose the Personal or the Professional edition is a question of application. For ordinary home use without network ambitions the Personal Edition is perfectly adequate, but the Professional Edition is equally suitable for beginners, for those wanting to update and for professionals. The update itself goes quite well but almost inevitably leaves you with further work to do. In the matters of hardware recognition and simplicity of installation SuSE has taken another step forward, and the hardware-accelerated drivers do the rest. We have never before found it so easy to carry out such an extensive configuration of even exotic hardware, even if there is still the odd problem to be solved here and there. Despite these, SuSE 7.0 deserves to be a hit. ■

Fig. 6: Not just for beginners – the SuSE Tour explains most commands complete with examples and screenshots.

First Start

After installation with YaST 2 the system booted smoothly to kdm, which presented a menu of the users chosen during installation. root does not appear in this menu. If you do log in as root you see a pretty red and black background with a few exploding bombs – the risk of damaging the system by making improper changes could hardly be made clearer. The KDE desktop seen by a normal user has been smartened up: SuSE has done a bit of work on the icons and they now fit in well with the KDE interface. For beginners, SuSE has come up with something quite special: in their first KDE session each new user is presented with the "SuSE Tour" shown in Fig. 6.



Corel PhotoPaint 9


Corel has decided to make the Linux version of its image editing software PhotoPaint 9 available free of charge on the Internet. We decided to take a closer look at this gift.

Fig. 1: A penguin in PhotoPaint 9. In the background the relevant filter is blowing bubbles

PhotoPaint 9 is an image editing program for bitmap (raster) images. Although the program can import vector graphics, it is not suitable for manipulating them – a task that can be handled by Corel Draw, which will be appearing for Linux shortly. Corel Draw, however, will not be free of charge. The free download of PhotoPaint is available on Corel's Linux web site, making it a direct competitor for GIMP, until now the "market leader" among Linux image editing programs. Installation worked perfectly, although, as with WordPerfect Office 2000, the user is forced to accept the directories pre-specified by Corel.

The help uses the established WordPerfect format of HTML pages. It is detailed and successful overall. Unusual for this type of program is the integrated tutorial, which introduces the beginner to working with PhotoPaint 9 in several sessions. Given that PhotoPaint runs under Wine, the speed is pretty good. However, the program has difficulty dealing with large images or extensive manipulations, so the minimum hardware requirements must be observed: a Pentium 200 with 64MB of RAM and at least 170MB of free hard disk space are needed – but the faster the computer, the faster the working speed. In this respect PhotoPaint compares unfavourably with GIMP. However, unlike WordPerfect Office 2000, no further errors occurred in the graphical display during our test.

Initially the screen may appear crowded, but all the elements, such as the toolbars, can be removed or repositioned in every conceivable way. At first the user is almost overwhelmed by the array of tools on offer, but if you take the time to get used to it, the graphical user interface is easier and more intuitive than GIMP's. Users switching from Windows will feel more at home in PhotoPaint, especially when using the graphics tools. For example, to draw a simple rectangle all you have to do is select the appropriate tool; in GIMP you have to call on the elaborate masking functions. The shadow function is just as simple: to add a shadow to an object, select the tool, click on the object in question, then drag the shadow with the mouse until it lies in the right direction. As in WordPerfect Office 2000, Corel has integrated a real-time preview feature into PhotoPaint: before you apply a filter to the picture, PhotoPaint shows you how it will look. Unfortunately, features like this again demonstrate the need for a powerful computer.

Plug-ins for GIMP have to be developed separately. PhotoPaint, thanks to the Wine emulator, can in theory make use of the PhotoShop plug-ins already widely used under Windows. In practice the support under Linux is somewhat limited: we were unable to make the "KPT Convolver" plug-in run in PhotoPaint. Whether or not a given filter works rather depends on luck – all you can do is try it out. In general, filters with a separate screen, as well as filters that reach deep into the operating system, scarcely have a chance under Linux. PhotoPaint recognises file filters for all current graphics formats.
Only a few animation filters, for the AVI, MPEG and QTW formats, had to be left out in the conversion to Linux. This is a shame when it comes to saving animations: all you are left with is the GIF format, with all its limitations. Professional users searching for exotic graphics formats will be disappointed, but Linux users will be reassured by the availability of filters for the XPM and GIMP formats. If you want to convert lots of images, or need to perform the same sequence of editing operations on them, you can make use of PhotoPaint's "Batch Processing" feature. PhotoPaint will process specific script commands for a list of image files without the user having to intervene. In this way you could easily convert a CD full of pictures in GIF format into JPEG format. As with all other version 9 Corel programs, PhotoPaint now supports the direct output of pictures in PDF format.

Quick as a flash you've already forgotten about loading "GDI32.DLL"...

No longer unusual: program installation is just like in Windows.

PhotoPaint uses the SANE package for integrating scanners (as does GIMP), which should be included with every distribution, guaranteeing that future generations of scanners can be integrated. The colour management system is many steps ahead of GIMP's and will satisfy the needs of semi-professionals and even professionals. Apart from predefined colour palettes, support for different colour formats such as CMYK and 48-bit RGB, and suitable colour separation, PhotoPaint also offers ICC profiles for colour management. With the colour manager provided (Tools/Color Manager/Color Management) even different connected devices can be matched to one another using profiles, which means that colours will be printed true to their appearance on the monitor. What starts with colour management continues with the print functions: the options you can set extend from printable trim marks, through colour separation, to the exact positioning of the image on the paper. At this point GIMP just has to throw in the towel. With these functions PhotoPaint clearly stretches way beyond home use; on Linux it could only be beaten by a port of Adobe PhotoShop.
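PhotoPaint's batch scripts stay inside the program, but the GIF-to-JPEG job described above can be sketched at the shell with ImageMagick's convert – an alternative tool, not part of PhotoPaint. The loop is shown as a dry run that only prints the commands it would execute:

```shell
# Dry run: print one convert command per GIF instead of executing it.
# Drop the echo (and have ImageMagick installed) to convert for real.
for f in penguin.gif tux.gif; do
  echo convert "$f" "${f%.gif}.jpg"
done
```

The `${f%.gif}` expansion strips the `.gif` suffix, so each output file keeps the original base name.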

Info Corel Linux web site ■

Conclusion

PhotoPaint 9 far outperforms GIMP, especially as far as colour management is concerned. With Corel's image editing program we now have a professional, yet free-of-charge, alternative to GIMP. In particular, those who cannot or could not get used to working with GIMP will soon feel at home with PhotoPaint, as long as they have a fast computer. This new product is good news for the Linux fan: the invasion of professional applications is gathering more and more momentum. ■



Opera for Linux 4.0a Technology Preview


A lean, fast web browser with a clean user interface: that’s Opera

One of the biggest sources of frustration when using Linux as a desktop is the lack of a decent web browser. Netscape Navigator, let's be honest, is a dog of a program that no-one would use if there was anything better. Well, soon there will be. In fact, there already is, as this pre-beta preview of Opera for Linux is already nicer to use than Netscape's heap of shoddy code.

Info Opera Software m/linux/ ■

Opera is a web browser that has already won many friends in the Windows world despite the fact that – unlike the alternatives – you have to pay for it. Version 4 is in the process of being released, and has been developed in such a way as to make it portable to many platforms. The Linux version is lagging well behind those for Windows and even BeOS, and this Technology Preview 4 is missing quite a few features. But it's already a useful browser as well as an interesting taster of what's to come. One of the reasons Opera fans pay good money for this program is that it's lean and fast. The .tar.bz2 archive weighs in at under a megabyte and contains a licence agreement and a single executable that you just copy somewhere like /usr/local/bin. It requires glibc 2.1 and Qt 2.1, which you'll need to install if your system doesn't already have them. Opera claims it will run on out-of-the-box Corel 1.0, Red Hat 6.1, Mandrake 6.0 and Caldera 2.2 systems; it ran immediately on Mandrake 7.1 too.

Opera has a cleanly designed main window that's quite configurable using the Preferences menu. There are options for setting up things like the caching policy and security. There's a hotlist which can be displayed in a panel on the left and contains links to various interesting sites; you will be able to add and remove sites to create your own personal list. At present, though, many of the menu options either haven't been implemented or unceremoniously crash the browser. Remember, this isn't even a beta yet, so faults are only to be expected. As a simple browser, though, Opera for Linux is already very useful. It supports HTTP 1.0 and 1.1, HTML 3.2 and 4.0 and CSS 1 and 2, displays GIF, PNG and JPEG images, executes most ECMAScript programs and browses FTP sites. There's good XML support, and HTTP authorisation, SSL and TLS communication have been implemented. There are still a few problems with page rendering, but Opera already does a better job than Navigator 4.7 and even the Netscape 6 preview. According to information on the web site, new Technology Previews are released every three to five weeks, so there should have been at least one new version since this was written.
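The installation described amounts to a couple of commands (a sketch; the archive and binary names are assumptions, as they are not given in the text):

```shell
# Unpack the preview archive (file name is hypothetical)
tar xjf opera-4.0a-tp4.tar.bz2

# Copy the single executable somewhere on the PATH
cp opera /usr/local/bin/

# Check that the glibc and Qt libraries it needs can be resolved
ldd /usr/local/bin/opera | grep -E 'libc|libqt'
```

If ldd reports "not found" for libqt, install Qt 2.1 before launching the browser.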
Opera has the potential to be the best web browser on the Linux platform, so this is one small, easy to install download that’s definitely worth trying and will probably be worth paying for when the release version is shipped. ■



Qt Embedded


Qt is undoubtedly one of the best known and most intuitive GUI toolkits for creating desktop applications under Linux. With this framebuffer-based version, the Norwegian vendor now intends to become an important player in the booming embedded market, and it seems to meet the prerequisites. Not least in the context of the KDE project, Qt has proven to be a well-thought-through and reliable basis for complex applications. To make it easier for developers and users who might wish to switch, Qt Embedded 2.2.0 offers the same widgets and functions as the desktop variant of the same version number: an application just needs to be recompiled.

During the Linux World Expo in San Jose, Troll Tech announced the availability of the embedded variant of its GUI toolkit Qt. Just before this, we were able to get a first impression by trying out the beta test version.

[left] Only fully compatible from 16MB? However, Qt Embedded may be lighter on resources on platforms other than x86 processors. [right] After a short settling-in period the handwriting recognition works perfectly; those with the time and the leisure can also teach the widget their own handwriting.

Only light on resources to a certain extent

In terms of memory consumption, the Norwegian graphics library for embedded systems behaves in almost the same way as its big sister for workstations: four and a half megabytes are not enough. If the computer is treated to 8MB of RAM (kernel boot parameter mem=8M), the browser can be used; however, starting the text editor at the same time brings the system to a temporary standstill, and any movement of the mouse results in intense hard disk activity (even though the swap partition is switched off: the run-time linker provides a similar function when main memory is low). From 16MB onwards the "Embedded Demo", consisting of browser, editor and alpha-blending demo, runs without any problems. At 32MB it only stops after more than a dozen parallel applications. However, this performance is only impressive to a certain extent: it is barely three years since this amount of memory became standard in a productive Linux workstation running the X server, browser et cetera. strace quickly demonstrates that Qt Embedded applications access the graphics hardware directly – as does the direct rendering interface of XFree86 4.0 – while a central program determines who may use the framebuffer device. The applications communicate with the keyboard and mouse using pipes.

The widget for handwriting recognition is a nice extra. As with the Palm Pilot and other PDAs, the user must observe certain conventions when forming the letters. The integrated online help, with animated writing instructions, makes it considerably easier to get to grips with this feature.

"Embedded" does not mean "limited": Qt Embedded offers almost the same range of functions as its big sister does for the desktop.

On balance, Qt Embedded uses more memory than other generically related projects. Nevertheless, the Norwegian toolkit beats its free rivals hands down with its range of functions and is therefore an interesting tool for future Linux PDA vendors. Note, however, that the liberal licensing conditions of its big sister (the QPL) don't apply to Qt Embedded. Users of the embedded variant have to pay fees for developer licences (the pricing policy is similar to that for Qt for Windows) plus run-time licences of around $2 per installation. ■



StarOffice 5.2


The new version of Sun’s StarOffice 5.2 office suite, which is free of charge and now open source, has been around for a few months. It has many improvements, but does it have what it takes to tempt office users away from Microsoft Office and Windows?

Figure: The long-awaited new version of StarOffice is finally available

According to Sun, StarOffice 5.2 incorporates more than 100 improvements. However, as the version number suggests, these changes are mainly refinements, not new features. As usual, the installation procedure uses the wizard with which users of StarOffice 5.1 will already be acquainted. An automatic migration option is supposed to help you transfer your existing settings from the earlier version; unfortunately, during our test, neither toolbar nor menu modifications were transferred from StarOffice 5.1.

The changes in general...

One thing StarOffice shares with its Windows counterparts: it's a big piece of code and takes a long time to load. Once started, things look much the same as version 5.1. It's still an integrated package with its own dedicated desktop. This makes StarOffice feel very different to every other office package, and it's a solution that confuses newcomers and irritates old hands alike. It's rumoured that Sun will revert to separate, standalone applications for StarOffice 6.0: if so, this will be a very welcome move.

Experienced users of StarOffice will find that Sun has extended several of the existing functions so that they can now be used more efficiently. For example, the AutoComplete function, which was introduced in StarOffice 5.1, has been extended to other parts of the package. As soon as the first characters have been entered in file dialog boxes or internet address fields, for example, the rest is added automatically; where there are several possible options, a simple press of a button is enough to run through all the possible entries. The extended context menus and the improved Navigator in the word processor are claimed to help make the package more user-friendly. Spell-checkers for many languages are included with the package, and you can now incorporate additional languages, even dictionaries from external suppliers. Buyers of the CD version of StarOffice 5.2 gain another advantage besides avoiding a 100MB download: the CD contains additional language modules which can't be obtained over the Internet. However, the proofing tools don't stretch to a grammar checker, something that only WordPerfect Office 2000 provides.

StarOffice woos new users with its improved and extended file import and export filters, which are said to guarantee complete compatibility with Microsoft Office 97 and 2000 documents. As we found during our test, Sun has in fact managed to achieve the best import results of all the office packages we have tested. StarOffice smoothly imported our Office 97 test files, which have never before been correctly processed; the only flaw was the position of a few of the images. If a file contains Visual Basic code, StarOffice simply ignores it. There are no tools to assist the conversion of this code into StarOffice's own script language, "StarBasic", so users who need it must go to the trouble of converting it manually. No matter how good the ability to import Microsoft Office documents, the benefits are wasted if you can't keep the same fonts, and StarOffice's TrueType support could do better: these fonts can only be printed after you have converted them to Type-1 format. The other office packages offer a more user-friendly option in the form of an external TrueType font manager.

X server shut down

We found one still unsolved fault: any attempt to open the 3D-Convolver (which conjures a three-dimensional object out of a two-dimensional one) in StarDraw resulted in a black screen and the immediate termination of the whole X Window System.

Like its competitor Applixware, StarWriter now incorporates "text components", here known as "AutoText". This function allows you to associate a passage of text with an abbreviation; when the abbreviation is entered into the document it is replaced by the corresponding text. This can make things considerably easier for users who frequently need standard text passages, such as those recurring in business letters. Sun also seems to have borrowed an idea from WordPerfect and added an option to create different kinds of directory, such as tables of contents or glossaries. StarOffice even allows you to create a bibliography which takes its data from the relevant database. For smaller documents these directory functions are easier to use than their equivalents in WordPerfect; in particular, the graphics they display are much clearer. However, in the test on larger documents the procedure under WordPerfect proved more user-friendly, as a helpful toolbar is permanently displayed.

Fonts in StarWriter are now displayed in the drop-down font list using the actual font, so that you can see exactly what each one looks like. However, this function doesn't go as far as the real-time preview in WordPerfect Office 2000, which shows the effect in the document itself. The formula editor has also been improved: if you have a good command of the associated command language you no longer have to use the formula editor window when entering formulae, but can enter the formula commands directly in the text document.

The rest of the action

A notable new feature of StarCalc is the euro converter, which is now available in the AutoPilot. Adabas D has also been licensed for the integrated database StarBase. Unfortunately, this alternative database engine isn't a direct component of StarOffice, so you will have to download a package of about 10MB for this module from the StarOffice website. The vector drawing program StarDraw now supports transparency and can create graduated colour fills. This puts the module used to edit bitmap graphics in the shade – its image editing tools urgently need to be extended. On the other hand, the presentation program StarImpress is still unbeaten under Linux in respect of its range of functions. Like StarDraw, StarImpress supports transparency and graduated fills, and it is the only presentation program under Linux able to produce animations similar to animated GIF images. Output options have also been greatly extended, particularly as regards network capability. It is now possible to export a presentation as a "WebCast": you first store the necessary files on a server in HTML format, and the presenter then controls the presentation from his computer, determining when a new slide should be displayed in the viewers' internet browsers. A small, standalone program by the name of "StarOffice Player" has been developed which can be used to play presentations on computers on which StarOffice is not installed; this too can be downloaded free of charge. Finally, it's worth mentioning that the e-mail program built into StarOffice has been greatly improved by the addition of an integrated PGP encryption feature, although this requires a Java environment to be installed. ■

On balance The price/performance ratio of StarOffice is and remains unbeaten. With its wide range of functions this office package almost matches up to the considerably more expensive WordPerfect Office 2000, which itself lacks a few facilities that StarOffice has. However, Sun urgently needs to do something about the user interface, and in particular the integrated desktop, which many users will find far from intuitive. The long application load times are also annoying. Startup takes longer than it does for WordPerfect Office 2000, which is not exactly a speed demon. Most of the enhancements in version 5.2 improve on the details and allow for more efficient use of the package. Overall, StarOffice 5.2 is an excellent office package. ■




Corel WordPerfect Office


Although the last two versions of the word processor WordPerfect were converted to Linux by SDC, Corel has undertaken to convert the latest edition itself. For the first time, not just the word processor but the entire office package is available. The standard English version of Office 2000 consists of WordPerfect 9, the spreadsheet Quattro Pro 9, the presentation program Presentations 9 and the calendar manager CorelCentral 9. The "Deluxe" version, which has also appeared on the market, includes the database Paradox 9. As a bonus, users will also find a large number of fonts and images (clip art and professional photos), Netscape Communicator and the Acrobat Reader on the CD. Corel no longer relies on an external company to carry out the conversion work; instead it uses the free Windows emulator Wine, which it helps sponsor. This saves time, and the identical interface makes the change easier for users used to Windows. However, there are disadvantages. The minimum system requirements (Pentium 166, 32MB of main memory) really should be seen as an absolute minimum – the emulation puts the brakes on this already slow office package. Several of the graphical elements didn't work perfectly either: for example, some menus were still displayed on the screen even after they had been closed.

Resource-hungry and demanding ...

The requirements on the Linux system should also be treated with caution. When we attempted to install the package under the somewhat older SuSE Linux 6.1 – which, according to Corel, should satisfy the requirements of a system with Linux 2.2.x, glibc 2.0 or 2.1 and X Windows – the installation program stopped with an error message, and the manual installation procedure did not work on this system either. In contrast, on a Red Hat system the installation assistant worked perfectly, and immediately revealed its weaknesses: even the manual installation procedure doesn't allow users to install individual parts of the package. The only choices are a complete installation, where the entire 430MB is installed, and a minimal installation, where the setup program installs just WordPerfect and Quattro Pro, taking approximately 150MB. The next annoyance lies in the fully automatic directory selection: the entire office package is always installed in the directories /usr/lib/corel and /usr/bin, without giving the user the opportunity to intervene.

... but very user-friendly

Once you have overcome the obstacles thrown up by the installation procedure, a user-friendly environment emerges: integration into KDE works exceptionally well, the documentation and online help are extensive and well structured, and the font manager included with the package helps users add and remove fonts easily later on. The TrueType format is supported, allowing Windows users to retain their existing fonts, and WordPerfect 9 is able to embed fonts directly into documents so that these can be edited on computers which do not have the fonts used. There is no standard desktop as there is for StarOffice, but the assistant – called "PerfectExpert" – provides users with a helping hand while they work, if they so wish.

Import-Export

In contrast to earlier versions, WordPerfect and Presentations have been provided with an export facility for the widely used, cross-platform PDF format 1.2, and the number of import/export filters (including several exotic types) has been increased again. Unfortunately, they don't work perfectly. WordPerfect opened simple Word 97 documents relatively well, but when it attempted to open a more complex document created with Word 97 it crashed, and refused another attempt to open it with the error message "Unknown file format". Importing a PowerPoint presentation into Presentations worked better: after a long computing time only the font size was displayed incorrectly. We were also pleased that the file formats of the Corel programs remain unchanged, so it is possible to exchange files between the individual WordPerfect versions back to and including version 6.1. Editing and generating HTML pages produces worse than poor results. On the other hand, the package understands the new XML format for the first time. The powerful script language PerfectScript, with which users can add their own components to the package, has been retained and extended.


New features in WordPerfect

Quattro Pro, Presentations and the rest

Everything you need, from document creation to slick presentations.

Experts on the previous version WordPerfect 8 have little adjustment to do. Only a few details in the word processing program have been improved. The wide range of functions that is typical of WordPerfect, such as the automatic creation of directories and glossaries, good version management and the extensive customised form letter functions, remain. The new real-time preview works in all the office components and it is a useful feature. It shows formatting changes (such as new font selections) directly using the data in the document before they are committed. The new autoscroll function makes navigation in large documents easier. The print functions have also been extended. As well as printing a poster, you can now print up to 64 pages on one sheet. The graphical formula editor new to the Windows version isn’t available under Linux. Instead the user must be content with the more flexible but also more complicated command-driven counterpart. Although WordPerfect displays formulas created with the graphical formula editor under Windows graphically, as is the case with all other Windows OLE objects by the way, it blocks any attempt to make changes. The layout functions of the word processing program are still among the best offered by any of the competitors. They include the equally loved and hated shadow cursor, which can now be switched on and off easily using the right mouse button. It is still somewhat awkward to use, and the header and footer lines aren’t yet as flexible to manage as those in comparable products. Surprisingly, the functions used to edit graphics have been moved to the presentation module.

Quattro Pro has everything you would expect from a professional spreadsheet program. As well as handling the Euro, it offers a wealth of chart layouts and an even wider range of functions. This version is the first able to process an incredible 1,000,000 rows in one spreadsheet (in contrast, Microsoft Excel manages about a third of that). One document may consist of 18,000 spreadsheets, would you believe it!

Presentations 9 is primarily a presentation program, and one that has no need to hide behind the competition. At the same time it serves as an image processing program for bitmap and vector graphics. Although the manipulation functions on offer go beyond those of StarOffice, they are no substitute for professional image processing. It remains unclear why several of these functions, such as filters and effects for bitmap graphics, have been hidden away in the menus. Like all good presentation programs, Presentations also includes what is known as a runtime player, by means of which a presentation can be replayed by someone who doesn't possess a copy of Presentations itself.

CorelCentral is a simple calendar with an equally simple address book and integrated memo management. The range of functions is adequate for use at home, but doesn't match up to more established rivals. Although appointments can be edited as web pages, the program still lacks an email notification facility of the kind found in StarOffice.

Corel's office package is worth recommending to users who either need its particular functions or can't acquire a taste for StarOffice's integrated solution. Other users, who pay more attention to the price and can live with a somewhat smaller range of functions, will certainly stay with StarOffice. ■

2 · 2000 LINUX MAGAZINE 33



Applixware Office 5.0


Applixware was the first office package to become available for the Linux platform. In recent months it has been somewhat overshadowed by the free, open source StarOffice from Sun and the arrival of Corel's WordPerfect Office 2000. But version 5.0 has recently been launched, and our test shows it has no reason to hide behind its major competitors.

Perhaps the most significant change in this release is the conversion of the window library to GTK, which is already used successfully by the GNOME desktop and the graphics program The Gimp. The latest version of the library must be installed on your system to avoid errors in the individual applications. The setup program detects whether, and which version of, GTK is installed and provides a suitable update for your distribution if necessary. Despite this change, experienced users will get to grips with the new applications right away. During installation you can choose to have a Microsoft Office style screen layout or the traditional Applixware one. Unfortunately, once this decision has been made it can only be reversed by reinstalling the program.

Like all the other office packages for Linux, Applixware is integrated automatically into the KDE and GNOME start menus. The package contains all the components you'd expect in a modern office package, including word processing (“Words”), a spreadsheet (“Spreadsheets”), a presentation program (“Presents”), a graphics program (“Graphics”) and a database (“Data”). As an added bonus, an email program is supplied along with an HTML editor.
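The GTK version can also be checked by hand before running the installer. GTK 1.x installations ship a gtk-config script that reports the library version — a quick sketch, assuming a typical GTK 1.2-era system:

```shell
# Report the installed GTK version, if any -- the Applixware setup program
# performs a similar check itself.
if command -v gtk-config >/dev/null 2>&1; then
    echo "GTK version: $(gtk-config --version)"
else
    echo "gtk-config not found -- no GTK development installation detected"
fi
```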

After the main toolbar – a kind of central Applixware control point similar to the Microsoft Office toolbar – has been started for the first time, a help window immediately opens, offering the user various help files and several examples. Each of the main applications has been given a tutorial too: something rival products would do well to emulate. Unfortunately, the help dries up almost as soon as it has been offered. Several links in the online help lead to non-existent entries and, on our test system, we were unable to start the tutorials. The online help is somewhat chaotically structured and not as user-friendly as the equivalent system in WordPerfect Office 2000, so it can take some time to locate specific information. The manual works well as a short introduction, with clear examples and helpful tips in several places. Unfortunately, a considerable amount of more detailed information is lacking, as are references to the relevant places in the online help where more might be found.

You can take the advertising slogan on the packaging – “the fastest office package for Linux” – quite literally. No emulation is necessary, as it is with WordPerfect Office 2000, and the speed of execution is considerably faster than that of the other major office packages. However, we found that a large graphics file in PNG format caused considerable distress to both Applixware Words and the HTML editor.

As the product is so Linux-orientated, Applixware boasts several Linux-specific features that other office packages don't have, such as the user-friendly allocation of access rights when saving a document. A further indication that the package is clearly rooted in Linux/Unix is the restriction to one document per opened window: using Applixware it is not possible to manage several documents within one application, as it is in all the other office packages.

The range of programs on offer may confuse a new user at first. Although several types of document can be edited by one and the same program, each document type has its own start symbol. Graphics, for example, are always edited using the presentation module. The main toolbar mentioned earlier could make this a lot clearer.

Version 5.0 has revised import filters; however, they still cannot stand up to our Microsoft test documents. The biggest problem is still the import of graphics embedded in Word 97 files – a hard nut the other office packages don't seem able to crack either. The result of importing PowerPoint presentations into Applixware Presents is even worse than it is in StarOffice. It's worth noting that Applixware can neither export the current WordPerfect format nor support StarOffice file formats.

The new font manager makes it possible to use TrueType fonts under Applixware. Unfortunately, this manager confused the Fontastic Font Installer – its WordPerfect Office 2000 counterpart – to such an extent that no fonts were available to that package's applications. Restarting the X Window System provided the only cure.
The range of functions provided by the Applixware applications is comparable with that of StarOffice, but not up to the standard of WordPerfect Office 2000. The first thing you notice in Applixware Words is the lack of user-friendly writing aids: there is no automatic correction in the background and no option to create outlines quickly. To create an outline you must first laboriously enter all the key points and then mark them; finally, you add the numbering using the relevant function. Anyone who worked with word processors in the pre-Linux era will recognise an old friend in Words, for Applixware still provides features that have almost slid into oblivion. For example, there are text components – saved text passages that can be inserted into the current document at the click of a mouse. Anyone who has to work with a large number of standard texts will love this feature. Another function we searched for in vain in other packages is the ability to create forms with entry fields. This allows users to quickly create documents they need on a regular basis, such as memos: Applixware Words displays the relevant fields, which then just need to be filled in.

Sadly, we discovered a number of weak points in the other applications. The HTML editor – comparable in terms of functions to Netscape Composer – and the presentation program Presents are both only suitable for simple tasks. While the HTML editor is adequate for small web sites, it will fail to make the grade on larger projects. In just the same way, the presentation package fails to match up to its StarOffice counterpart. Why you can only insert two dividing lines (one vertical and one horizontal) remains a puzzle. Only the Spreadsheets application compares favourably with its StarOffice equivalent. It also offers a number of real-time functions which allow you to integrate live information, such as up-to-date stock market prices, into your spreadsheets. One good point about the package is that the individual programs are well integrated: a double click on a graphic in any document, for example, directly opens the relevant editor.

Figure 1: Word processor, spreadsheet and the main toolbar of Applixware Office 5.0

On balance

The launch of Applixware version 5.0 has not brought a lot that is new. It would have been better if the new developer, VistaSource, had concentrated on integrating functions that are already present in other office packages rather than spending time converting to the GTK window library. The functionality in some areas is slightly inferior to that of StarOffice, while in others it is about comparable. The strengths of the package lie mainly in its speed and its excellent integration with Linux. Applixware Office is a solid office package that is worth a look if you cannot get to grips with the StarOffice desktop or consider WordPerfect Office too expensive. And if speed and integration into the Linux operating system are your most important criteria, look no further. ■



Teamware Office 5.3 for Linux

Good communications are an essential aid to efficient teamwork, and this means more than just using the phone and email.


JULIAN MOSS

Teamware Office may not be as well known as products such as Lotus Notes but it has been around almost as long and has recently been released in a Linux version.

[left] Teamware Library can be used to store electronic documents [right] Using Resource Calendar you can find out when meeting rooms and projectors are free

The fact that the Linux port is new is apparent from the moment you start to install it. The server software is packaged as a binary RPM but is accompanied by a shell script which is used to perform the installation. The script creates a special Teamware administrator user and a directory, owned by that user, into which the software is installed. Although the option of changing the default user and directory names is offered, the script made a hash of it, resulting in our first non-working installation.

After the server has been installed, it must be configured using a console-mode program which you must supply with the answers to various questions. This type of configuration utility isn't unusual in the UNIX world, but the inappropriate defaults and lack of sufficient guidance in the documentation led eventually to our second abortive installation. Further attempts to re-run the configuration always resulted in an error, and uninstalling and then reinstalling the software didn't solve the problem either. Finally we uninstalled the server and manually deleted all the leftover files and directories before reinstalling again: we were then able to set up a server that worked. This aspect of the product could definitely use some improvement.
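For the record, the cure amounted to the shell sequence below. The package and directory names are hypothetical (check what the install script actually created on your system); this is a sketch of the clean-up, not Teamware's documented procedure:

```shell
PKG=teamware-server          # hypothetical package name
DIR=/opt/teamware            # hypothetical install directory created by the script
if command -v rpm >/dev/null 2>&1; then
    rpm -e "$PKG" 2>/dev/null || true     # uninstall the server package
    rm -rf "$DIR"                         # sweep out every leftover file and directory
    rpm -i "${PKG}-5.3.i386.rpm" || true  # reinstall, then re-run the configuration
else
    echo "rpm not available on this system"
fi
```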

Hardware requirements

The Teamware server is supported on most major distributions: Caldera OpenLinux, Red Hat Linux 6.1 or later, TurboLinux 4.0 or later, or SuSE Linux 6.3 or later. Kernel 2.2.12 and glibc 2.1.2 are required. We installed it under Linux Mandrake 7.1 and, once the installation difficulties had been overcome, it ran just fine. Teamware recommends a minimum of 64MB of RAM and 100MB of disk space for the server; a further 120MB of disk space is needed for the web server cache. This is small beer these days, and unlikely to present a problem to most users.

The name of the product may be “Teamware Office for Linux“ but this refers to the server.
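You can verify the kernel and C library versions against these requirements from any shell (the exact ldd output format varies between glibc releases):

```shell
kernel=$(uname -r)                              # running kernel version
glibc=$(ldd --version 2>/dev/null | head -n 1)  # first line names the glibc release
echo "kernel: $kernel"
echo "glibc:  ${glibc:-unknown}"
```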


Teamware has no native Linux clients. The product is accessed under Linux – as it can be from any platform – using a web browser. Unfortunately, certain administration functions – in particular, the facility for adding new users – are not yet supported through the web browser interface. Although a command line tool that runs on the server can be used to add users in bulk, this isn't very easy to do. The only alternative is to attach a Windows box to your network and use the Windows administration client software. Again, few users in the real world will find this to be a problem.

Before you can add any users you must set up some user templates. These determine what users can do and avoid the need to specify each user's rights individually as you add each one. This is fairly easy to do using the Windows-based administration client. However, quite a lot of work is needed to set up a useful working Teamware Office server, and quite a bit of thought needs to go into your planning before you even start. The lack of any wizards or other setup aids, or of any demo data, means that this isn't a product you can simply sit down with, set up some dummy users and play with for the purposes of a trial evaluation. Although you can download the software and try it out using a 30-day evaluation key, you can get a good feel for what it does and how it works, without all the installation hassle, by accessing Teamware's live demo system on the Internet.

Teamware's web interface supports even text-only browsers, so it can be accessed using Lynx or devices like the Nokia Communicator PDA. The interface is pretty intuitive, although it certainly benefits from the ability to display graphics. One thing the Internet demo shows very well is that the product is very usable over a low-bandwidth dial-up connection, making it ideal for use by companies whose employees travel a lot.
Teamware's web server is a secure server, so there's no need to worry about data and passwords travelling across the Net. Incidentally, you can't use another web server such as Apache to serve up Teamware Office pages, so if you want to install Teamware on a machine with an existing web server you'll have to configure one of them to use a different HTTP port to avoid conflicts.
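Before deciding on a port you can check whether anything is already listening on port 80. One Linux-specific way, avoiding any extra tools, is to look for hex port 0050 (80 decimal) in the kernel's TCP table — a rough sketch:

```shell
# /proc/net/tcp lists sockets with addresses in hex; 0050 is port 80.
if grep -q ':0050 ' /proc/net/tcp 2>/dev/null; then
    echo "port 80 is already in use -- give Teamware's web server a different port"
else
    echo "port 80 appears to be free"
fi
```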

Services

Teamware Office users have access to a range of services that provide all they need for effective collaborative working. Teamware Mail is an electronic messaging service that can be accessed using standard Internet mail clients via the POP3 or IMAP4 protocols, or using a web browser. The browser interface provides a simple but functional mail client: there's an Inbox, an Outbox and a Wastebasket, and you can create your own message folders. Teamware Mail supports Internet standard, X.400 and SMS messaging. It uses its own message routing protocol, which can be used to link Teamware servers within an organisation, and can transfer mail to and from the Internet either by way of sendmail or, if you install the Teamware Connector for MIME module, by direct SMTP.

Teamware Mail is linked to Teamware Directory, which provides a personal address book for storing email addresses and other details of contacts. Teamware Directory supports LDAP, allowing external directories to be searched. From the Directory you can also view another Teamware user's schedule, which they maintain using Teamware Calendar, the personal time management module. This module provides all the features you would expect, such as the ability to find periods of unallocated time shared by a group of people in order to schedule meetings. It also includes a resource calendar which allows you to manage the availability of resources such as meeting rooms and overhead projectors.

Teamware Library is a document management system with a search capability that uses fields in each document's profile, and support for version control. You can set up one or more libraries, under which documents are organised in a multi-level tree structure. Last, though by no means least, is Teamware Forum. This is an electronic discussion board to which users can post messages that can be read and responded to by other participants. Forum supports NNTP, allowing selected newsgroups to be made available to Teamware Office users.
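Because the Directory speaks LDAP, any standard client can query it. As a sketch (the host name and base DN below are hypothetical — substitute your own server's values), OpenLDAP's ldapsearch would look like this:

```shell
HOST=teamware.example.com   # hypothetical Teamware server
BASE="o=example"            # hypothetical directory base DN
if command -v ldapsearch >/dev/null 2>&1; then
    # Simple anonymous search for names beginning with Smith.
    ldapsearch -x -h "$HOST" -b "$BASE" "(cn=Smith*)" mail telephoneNumber || true
else
    echo "ldapsearch not installed -- any LDAP client will do instead"
fi
```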


Teamware’s web service can be used with a text-mode browser


Teamware Office may lack the bells and whistles of some of its better-known rivals but it has everything you need for effective collaborative working, and the fact that the server runs on Linux must give it the edge when it comes to reliability. It’s easy to use through its web browser interface, if somewhat less easy to install and initially configure. If you work in a team and currently use email as your sole form of electronic communication you should find Teamware Office a big improvement. ■

Info
Price: $1,000 for 100-user server package
Teamware Tel: 01344 472068
Teamware Office Demo ■






When I started using Linux, about four years ago, one of the most requested applications was a good quality office suite. Back then everyone knew about the technical sophistication of Linux, but productivity on the desktop still hadn't been addressed. This changed in the succeeding years as people collaborated to bring to Linux the productivity that a system such as Microsoft Windows offered, but with Linux's many benefits.

The first project to bring an integrated desktop environment to Linux was KDE, started by its founder Matthias Ettrich. At the time there were licensing issues with the Qt toolkit (the software that KDE is built upon), and as a result the GNOME project was founded to provide an alternative. Although Troll Tech, the company behind Qt, has since resolved many of the licensing issues (such as making Qt GPLed), both projects are now developing at full steam.

Since KDE was first started, developers have been encouraged to write integrated software that fits into the KDE environment in both look and feel, so that a user who knows how to operate one KDE application should have no problem with the others. One such project is KOffice. The aim was simple: to create a suite of productivity applications that run natively under KDE and integrate well. KDE is currently heading towards its 2.0 release, with the fourth beta out at the time of writing; KDE 2.0 should be ready as you read this. Part of the 2.0 release will be the first public stable release of KOffice.

KOffice currently sports a number of applications that the business and home user will find useful. These include KWord (a DTP/word processor), KSpread (a spreadsheet package), KPresenter (a presentations package), KIllustrator (a vector-based illustration package), KChart (a tool for creating graphical charts) and the KOffice Workspace, an application that seamlessly hooks all of the separate components together into one coherent package.

Although KOffice already has the major components needed in an office suite, it doesn't stop there. A company called theKompany is currently sponsoring developers to work on KWord, and will also be developing the following applications (all, of course, open source):
• KImageShop (renamed Krayon by popular demand)
• Kivio (Visio type program – new to KOffice)
• Rekall (MS Access type program – new to KOffice)
Many more companies besides theKompany are paying developers to work on KOffice and KDE to ensure development continues apace. And let us not forget that anyone can contribute to the KOffice or KDE projects: details about contributing can be found on the KDE home page.

First Impressions

When considering this review of KOffice we need to bear in mind that it is still development software, whose developers are currently in bug-fixing mode. I have been running KDE and KOffice out of CVS for a number of months now, so the version I am reviewing is the very latest copy at the time of writing. Although its development status is no excuse for poor feature selection or design, bear in mind that KOffice is not yet finished.



Once KOffice has been compiled and installed, an entry is placed in the application launcher that is displayed when you click on the "K" button on the KDE panel. This is a good start, as it puts the suite in an easily accessible place. Inside the 'Office' entry is each application in KOffice, as well as the KOffice Workspace. You can choose either to run each application separately, or to run the integrated Workspace application from which all the components are easily accessible. The KOffice Workspace can be seen in Fig. 1. Each component in KOffice is called a part and, as you can see in the figure, each part is shown down the left hand side. To load one of the parts you simply click on its icon; this loads the relevant program seamlessly into the space on the right hand side.

To review KOffice fairly, we must consider two aspects of its performance:
• How each application works separately
• How all the applications work together to form a suite
We will start with the first aspect, seeing how each application in KOffice works independently. Please remember that KOffice wasn't finished at the time of writing, although it should be by the time you read this review.
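The parts can also be started from a shell prompt rather than the K menu. The binary names below are assumptions based on the application names (koshell for the Workspace is a guess in line with common KDE naming) — check your own installation:

```shell
# See which KOffice part binaries are on the PATH; any that are found
# can be launched in the background, e.g. 'kword &'.
for app in kword kspread kpresenter killustrator kchart koshell; do
    if command -v "$app" >/dev/null 2>&1; then
        echo "$app: found, launch with '$app &'"
    else
        echo "$app: not on PATH"
    fi
done
```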

KWord

When you launch KWord it presents you with a number of different templates that you can use as the basis for your documents. There are also options for opening an existing document or not using a template at all. At the moment the templates seem rather limited, simply setting the page size and columns; more sophisticated templates may be developed in the future. KWord offers most of the features a user would need for basic text editing and also has a number of features for more advanced publishing work. It offers an advanced frame-based publishing platform, which users of FrameMaker will probably appreciate, and this works rather well. Although some of the features in KWord were buggy, due to the fact that it is still development code, I found KWord to be an impressive application. One criticism you might make of KWord, however, concerns its interface design: some features required a bit of hunting to locate. These issues will no doubt be fixed in future releases.

KSpread

Out of all the KOffice components, one of the best is undoubtedly KSpread. One of the first things that hits you about it is its feeling of maturity compared with some of the other components. KSpread is a full-featured spreadsheet package that holds its own fairly well in comparison with the competition. Admittedly it doesn't boast as complete a feature set as some other spreadsheets, but remember that this is a first release: I expect a flurry of extra features to be added in the future. There is a good variety of mathematical and formatting functions. Users of Microsoft Excel should have no trouble converting to KSpread, as the interface is similar yet still intuitive.

[top] Fig. 1: The KOffice Workspace [above] Fig. 2: KSpread has powerful charting capabilities too

KPresenter

KPresenter is the KOffice application I find the most useful, as I often have to make presentations. It supplies a number of useful features that are well designed into the interface. Most functions that the budding presenter needs are there, including effects, drawing mode, autoshapes and drop shadows. One problem, though, is that KPresenter currently lacks additional views, unlike a program such as PowerPoint. This can be a shortcoming if you want to see all of the slides to get a general overview. Another area that the KPresenter developers need to look at is the performance of screen presentations: effects such as scrolling text are rather slow when viewing a presentation. Overall, though, KPresenter is a good quality application with a bright future.

Fig. 3: KPresenter – all you need to produce professional presentations

KIllustrator

KIllustrator is a simple yet effective vector drawing tool. The program lets you assemble a variety of primitives, including circles, rectangles and triangles, together with text and other objects to form images. KIllustrator really comes into its own when creating artwork for use in a KWord document.

Fig. 4: KIllustrator, a powerful vector drawing package

StarOffice and Open Source

As is now well known, Sun Microsystems, which recently purchased StarOffice, has now made it open source. The question is: what does this mean for you, and for KOffice? One of the many benefits of Open Source is that code from one project can be used in another, which encourages a culture of sharing; the Linux community has always believed that it is futile to re-invent the wheel. By open sourcing StarOffice, Sun has made it possible for developers to look under the hood and see how something has been implemented. This benefits both users and developers, as elements of StarOffice can now be used in other projects. One obvious area of benefit is that of import/export filters, which are used to achieve compatibility with other document file types. The open sourcing of StarOffice was a far-sighted move that will benefit the development of both KOffice and gOffice, and it makes it likely that we shall see some interesting developments in these office suites in the future.


Fitting the pieces together

With any office suite, the whole should be greater than the sum of the parts. The applications should be integrated so that you can share data between components and embed that data into any type of document the suite can create. An example: you are writing a report in KWord and need to include the contents of a spreadsheet created in KSpread to provide some background figures. KOffice implements this very well, and some explanation of how it does so may be of interest. The X Window System (and its common Linux implementation, XFree86) has limited support for sharing complex information between programs, so this is largely left to the desktop environment or window manager. KDE was one of the first desktop environments to successfully implement the sharing of information between applications, and the KDE team has developed a number of technologies to cater for this – one of the reasons why KOffice is so fast. KOffice gets its lightning speed from using these advanced library functions to share and manage the information the applications are dealing with. Some other office suites, such as StarOffice, share data between applications by implementing a sharing framework (called an object model) within the suite itself. The problem with this is that it is incredibly slow; KOffice is considerably faster than virtually every other office suite I have tried.

In practice, sharing information between applications is as simple as clicking on an icon and selecting the KOffice component. You can then select the size of the area where the embedded part will live (see Fig. 2) and either create a new document in the part or load an existing document. Each embedded part is fully functional, just as if you were using the application standalone. Another area where KOffice works particularly well is in integrating tightly with KDE itself. KOffice was built specifically for KDE and therefore shares whatever theme you are running. It integrates into your application starter and co-operates well with other KDE applications. This is a benefit over other office suites if you use KDE, just as gOffice works well with GNOME.

Info
The KDE home page:
KDE mailing lists: ■

Fig. 5: KWord, showing an embedded spreadsheet and graphical image

Overall Conclusion

KOffice looks like a firm foundation for an office suite and performs well in terms of both speed and usability. But although KOffice is good, it isn't (yet) great: it lacks some features that users accustomed to other office suites may take for granted. Of course, this review is based on a development version, so by now the features I found to be missing may have been added. KOffice could well develop into a serious office suite that rivals the current ruler of the roost. I would recommend that you keep your eye on this open source project, as I think we will see some interesting developments in KOffice over the coming months. ■

Running KOffice

If you would like to give KOffice a whirl there are a couple of options available to you:
• Install the pre-compiled binaries
• Compile it from the source code
Bear in mind that KOffice requires KDE2 (or one of the KDE2 betas) to run. You will also need Qt 2.x.x. See the KDE home page for more details on installing KDE2.

Installing the binaries
This is the easiest method. Download the relevant package for your distribution from the KDE site, then install it using your package installer. For example, with RPM on Red Hat:
rpm -i koffice-1.0.rpm
Note: the package name may differ from that shown above.

Compiling the source code
If you choose to compile the source code you have a further two options. The first is to get a source code package, in the same way as you would get a binary package, and install it. You then need to change to the directory containing the source code and compile it by typing the following commands:
./configure
make
make install
The other way to get the source code is to use anonymous CVS access to the KDE CVS server. There isn't the space to describe this procedure here, but instructions can be found on the KDE home page. If you have any questions or queries you can ask on the #kde IRC channel (my nick is [vmlinuz]), or try subscribing to one of the KDE mailing lists.
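Before compiling, it is worth confirming that the KDE2 and Qt prerequisites can be found. Many distributions export KDEDIR and QTDIR for this purpose (an assumption — your setup may locate the libraries differently):

```shell
# Check the environment variables that configure scripts of this era
# commonly use to find KDE and Qt.
for var in KDEDIR QTDIR; do
    eval "val=\${$var}"          # indirect lookup of the variable named in $var
    if [ -n "$val" ]; then
        echo "$var is set to $val"
    else
        echo "$var is not set -- install KDE2 and Qt 2.x first, or export it"
    fi
done
```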




High Availability Linux


In the telecommunications industry computer systems are required to exceed 99.999% (five nines) availability. The Motorola Computer Group is a major manufacturer of embedded system platforms for this market and now offers Linux as one of the operating systems that may be used. Larry G. Cruz describes the architecture of this hardware and the changes and additions that were made to Linux so as to make it into a high availability operating system.

Fig. 1: A Motorola CPX8216 CompactPCI system

The Motorola Computer Group (MCG) manufactures a variety of embedded system platforms for the telecommunications industry. The CPX8000 family of CompactPCI systems are application-ready platforms designed to meet five nines availability. CompactPCI provides the primary bus architecture for these platforms, but additional hardware and software is required to provide a system that can be deployed in critical telecommunication infrastructure applications. One of the operating systems offered for these systems is Linux. The CPX8216 systems are Network Equipment Building System (NEBS) and European Telecom Standard (ETSI) compliant platforms. They have built-in redundancy, enabling them to be configured and programmed for Hot Swap and High Availability environments. Standard features include:

• Dual 8-slot CompactPCI backplanes (H.110 bus optional)
• Hot-swappable CPU and I/O boards
• PowerPC or Pentium II processors
• Up to three 350W hot-swappable power supplies
• Three hot-swappable fans
• Four hot-swappable drive bays
• NEBS-compliant, hot-swappable alarm status display panel
• Hot Swap Controller providing CompactPCI bus and slot control compliant with the PICMG 2.1 rev 1.0 CompactPCI Hot Swap Specification

The CPX8216 has a dual host architecture (Fig. 2). The two independent CompactPCI bus segments form redundant system domains that are cross-connected via bridges. In a simple active/standby I/O configuration the bridges are configured such that CPU-A controls all twelve I/O slots in the system. If CPU-A were to fail, the bridges would be reconfigured so that CPU-B takes control of all twelve I/O slots. The ability to switch from the active CPU to the standby CPU is obviously useful when the CPU hardware or software fails. An additional benefit is the ability to perform CPU, software and firmware upgrades without taking the system down. To do this, the CPU, software or firmware is first upgraded on the standby CPU. When this is complete, the standby CPU takes control of the I/O bus segments and becomes the active CPU; the previously active, now standby, CPU is then upgraded. Driven by the appropriate software, the CPX8216 can be configured and maintained at varied degrees of complexity and granularity, ranging from simple Hot Swap implementations to Highly Available, “five nines” systems.

Hot Swap and High Availability

Fundamental to the need for Hot Swap and High Availability is the premise that the ability to add and remove components from a live system reduces system downtime. A system without Hot Swap capabilities must first be shut down and powered off prior to the addition, removal or replacement of system boards and components. Systems with Hot Swap capabilities range in their ability to reduce this downtime, from hours of system unavailability per year down to less than five minutes per year. Predictably, as the availability of a system increases so, too, does the complexity of its hardware and software. Although definitions abound for the terms "Hot Swap" and "High Availability", PICMG has characterised three different levels of Hot Swap capability: Basic Hot Swap, Full Hot Swap and High Availability. Each level builds on the capabilities of the prior level to increase system availability.

Basic Hot Swap provides the fundamental capability to add and remove boards from an active system. The staged (differing length) pins in a CompactPCI(r) connector cause some pins to make connection to the bus before others when a board is inserted; the reverse occurs when a board is removed. Special circuitry allows boards to be inserted into and removed from an active bus without causing signal or DC power glitches. CompactPCI(r) boards conforming to the PICMG CompactPCI(r) specification PICMG 2.0 R2.1 are electrically Hot Swappable. Systems supporting Basic Hot Swap are fairly simple in their implementation and require operator intervention and direction to perform the Hot Swap activity. The operator must first access the system console and direct the system to, as gracefully as possible, stop using or "de-configure" the board to be removed. Once the operator sees that application and operating system software have terminated their use of the board, the operator can instruct the board to disconnect from the bus and power off. The board can then be removed and replaced. After this the operator must reverse these steps to complete the replacement of a board in the system.

Full Hot Swap extends the Basic Hot Swap model with additional hardware and software. If the operating system can be notified automatically in advance of a board's removal or insertion, then the operating system can automatically de-configure or configure the board without requiring operator direction. This reduces both the opportunity for mistakes and the amount of time required to perform the Hot Swap activity. To provide this capability a microswitch is added to the lower injector/ejector handle of the CompactPCI(r) board. To remove a board an operator must first press down on the lower ejector handle. This action activates the microswitch and causes the ENUM interrupt to be generated. The operating system must then field the ENUM interrupt, identify the board that is about to be removed and gracefully de-configure it. An LED then lights on the face of the board to tell the operator that it is safe to complete the removal. As a board is inserted into the bus the microswitch is activated and the ENUM interrupt is processed. In this case the operating system must determine the type of board being inserted and then configure it for use. The LED on the face of the board is then turned off to indicate to the operator that the board has been accepted into the system.

High Availability systems significantly expand the scope of the operating system's involvement in handling events occurring in the system chassis. Middleware (user-level software outside the O/S kernel) is added to manage the configuration and allocation of system resources. This middleware also provides an interface to user applications through published APIs.

Fig. 2: CPX8216 Dual Host Architecture



Fig. 3: HA-Linux Block Diagram


In a High Availability (HA) system, software can control the state of chassis components via the addition of the Hot Swap Controller (HSC). With this hardware addition, automation and fine-grained status and control become possible. The HSC developed by Motorola provides CompactPCI(r) bus and slot control as well as chassis alarm and status registers, and is itself Hot Swappable.

High Availability Linux

In recent months there has been explosive growth in the use of Linux in commercial applications. Motorola's decision to develop High Availability software for the CPX8000 family of platforms using Linux was driven by many factors. Not only does Linux provide a full-featured OS with proven reliability, but the open source model enables lower cost, greater control and simplified licensing. Linux also fitted in well with Motorola's existing service and support model for UNIX systems. "HA-Linux" is the name given to the collection of Motorola's added-value extensions and enhancements to Linux that implement High Availability features. Motorola's HA-Linux is not "yet another" new Linux distribution. In fact, one of the design goals of HA-Linux was to create as portable an implementation as possible. The CPX8000 HA platforms are available in configurations using both PowerPC and Intel(r) processors. Today, HA-Linux runs on two Linux distributions: Red Hat 6.1/6.2 for Intel processors and a derivative of LinuxPPC for PowerPC processors. Fig. 3 presents a high-level block diagram of the HA-Linux components. HA-Linux consists of enhancements to the Linux kernel, additional kernel modules, user-level programs, utilities and APIs. A CPX8000 system with HA-Linux in place is ready for customer applications to be added to run in a High Availability environment.

HA-Linux Components

The following is a brief description of the HA-Linux components. Motorola is open-sourcing all components covered by the GNU General Public License (GPL). Additionally, Motorola intends to publish any component integrated into the kernel, including drivers that are not covered by the GPL.

PCI Kernel Services

Motorola has developed a revision of the PCI kernel services for 2.2.x versions of the Linux kernel. Because PCI devices can be removed from or inserted into an HA-Linux system at any time, the operating system must be able to dynamically maintain the data structures that represent the current state of the bus. The 2.2 version of the Linux kernel does not provide a model for the dynamic allocation and de-allocation of PCI bus resources; Motorola's revision of the PCI kernel services provides this capability through enhancement of the existing GPL code.

ENUM Driver

The ENUM driver is implemented as a kernel module and is responsible for asynchronous event notification of Hot Swap activity. The ENUM# interrupt is generated when the lower ejector handle of a CompactPCI(r) board is pressed down; it is also generated when a board is inserted into the bus. The ENUM driver processes this interrupt and determines its source.

Hot Swap Controller Driver

The Hot Swap Controller driver is implemented as a kernel module and supplies an interface to the HSC. The HSC controls all of the system infrastructure and provides asynchronous event notification of chassis component state changes. Access from user-level applications is through a published API that provides these facilities:
• Visual and audible alarms on the chassis alarm panel can be set or queried for status.
• Individual board slots can be reset, powered off or on, or queried for status.
• Disk drives can be powered off or on.
• Cooling fan speed can be adjusted; fans can be powered off or on.
• Power supplies can be monitored and powered off or on.
• Bridges can be configured for domain control.
• Chassis environmental conditions can be monitored.
Although the API for the HSC is published, most user applications will not drive the HSC directly. Instead, higher-level HA-Linux software will interact with the HSC on behalf of the user application.
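The real HSC interface is a published C API that the article does not reproduce. As a purely hypothetical illustration (the hsc_ctl name and the command words are invented here), its facilities map naturally onto facility/target/action requests of this shape:

```shell
#!/bin/sh
# Hypothetical sketch only: the real HSC interface is a C API not
# shown in the article. The invented hsc_ctl function just echoes
# its arguments to show the shape of the requests the API supports.
hsc_ctl () {
    echo "hsc: $1 $2 $3"
}
hsc_ctl alarm panel set-visual   # set a visual alarm on the alarm panel
hsc_ctl slot 7 power-off         # power off board slot 7
hsc_ctl fan 2 query-status       # query a cooling fan
```

As the article notes, most applications would leave such calls to the higher-level HA-Linux software rather than drive the HSC themselves.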

HA-Aware Drivers and the EM Driver

HA-aware drivers are either existing drivers that have been enhanced or new drivers written to conform to the HA-Linux "HA Driver Specification". An HA-aware driver adheres to the PCMCIA specification, enabling it to cope with the appearance and disappearance of individual devices. Drivers may also be enhanced with fault detection and diagnostic capabilities. Where applicable, these drivers might also be configured to perform low-level device failover. For example, the ethernet driver can be configured so that it views two physical connections as one active and one standby connection. If the driver detects that the active connection has failed, it can switch activity to the standby connection before notifying the system of the failure, thereby reducing the possibility of data loss. The Event Manager driver is implemented as a kernel module and provides an interface to the Event Manager for HA-aware drivers.

VTERM Driver and Utilities

The VTERM driver and utilities implement a very useful and "nifty" feature of an HA-Linux system. Through VTERM a virtual terminal interface can be established between the CPU board and any of the I/O boards to interact with the I/O board's firmware (BIOS) via the PCI bus. User applications can interact with, configure and control the firmware of an I/O processor board: diagnostics can be run, MAC addresses can be read, boot options can be configured and network booting can be initiated. In short, almost anything that can be done via firmware at the I/O board's console port can be done via VTERM.

Event and Alarm Managers

The Event Manager is a user-level process that serves as the brains of an HA-Linux system; it is responsible for system configuration and event management. The Alarm Manager, in conjunction with the Event Manager, controls the system LEDs and alarms. The Event and Alarm Managers are easily configured and extended by users to meet their specific application needs.

SNMP

The Simple Network Management Protocol (SNMP) agent included in HA-Linux is an implementation of the open source UCD SNMP v3 agent. The agent supports MIB-2 and the UCD agent extensions for processors, disks, memory, load average, shell commands and error handling. HA-Linux extends the SNMP agent with a MIB for the Event Manager.

ISCS

The Inter-System Communication Services (ISCS) process provides a method for data communication between the two domains in the CPX8000 chassis. ISCS provides a robust interface between the two domains, with redundant serial connections and a network interface. ISCS is used by both the Event Manager and user applications to communicate between domains. Through published APIs, application-to-application messaging and data transfer capability is available to all applications. Utility programs for file transfer, remote program execution and logging are included in HA-Linux.

System Configuration and Event Management

The primary purpose and function of HA-Linux is to provide system configuration and event management capabilities within the chassis. The software components described above work together to achieve this. There are four major functions of System Configuration and Event Management:

System Setup
• Configure system and application objects at boot time or on a running system.
• Optionally combine physical devices into logical devices comprised of redundant sets of physical devices.

Respond to Change
• Automatically reconfigure the system after a failure.
• Take actions specified by subsystems when the system state changes.
• Notify a system administrator or service center of a failure.
• Execute operator requests to add or remove objects and/or devices.
• Execute operator requests to re-integrate objects and/or devices following repair or replacement.

Display System State
• By illuminating lights or activating alarms on the system alarm panel.
• By illuminating lights or indicators on the object or device.
• By updating the display of a graphical user interface.

Maintain System History
• Keep a record of system configuration and events in system or remote logs.
• Update the system configuration maintained by the standby CPU.

HA-Linux implements all of these functions, and more, relieving the application developer from having to manage low-level devices within the chassis and allowing them to focus on the specifics of the application.

Achieving High Availability

Info
Motorola Linux website: puter/Linux
Motorola Computer Group ■

Many aspects of the system come into play in achieving high availability within a chassis. To maximise the effectiveness in reducing system downtime, applications must be aware of and interact with HA-Linux. The amount of interaction required depends, of course, on the complexity of the application and the failover models desired. HA-Linux is built upon a collection of simple text files. Editable text files exist that:
• describe the configuration of the system;
• define objects to be managed;
• establish rules and policies to be followed when the state of an object changes;
• specify the actions to be taken based on state changes.
Each of these files is implemented using a simple scripting language. HA-Linux comes complete with configuration, definition, rules and action files for the objects that exist in a CPX8000 chassis. Users can easily modify, extend or add definitions, rules and actions for any software or hardware objects of interest to their application of the system. There is also an API for the Event Manager that, when linked with applications, allows them to interact with the Event Manager to manage configuration, control, status and event notification. In addition to tailoring system configuration and event management, users can easily integrate the network management of the system via SNMP. All


the functions that are available to programs via the Event Manager API are also available via the Event Manager MIB. The Inter-System Communication Services can be used to send messages between applications running on the active and standby CPUs. Again, this facility is provided via simple utility programs or published APIs. The frequency, volume and type of data transmitted are entirely up to the application program. The ISCS interface can be used to implement software upgrade scripts, allowing the active CPU to drive the upgrade process on the standby CPU. Through a combination of user applications, HA-Linux, the Linux operating system and the CPX8000 platform, high availability is achieved. This is essential in enabling network operators to achieve the reliability, performance and scalability they require to compete in the telecommunications market.
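The article says the configuration, definition, rules and action files use a simple scripting language, but never shows its syntax. Purely as a hypothetical sketch (the function name, states and messages are all invented here), a user-supplied action hook of the kind such a system would call on a state change might look like this:

```shell
#!/bin/sh
# Hypothetical action hook, invented for illustration: HA-Linux's
# real rules/action file syntax is not shown in the article.
# The hook receives an object name and its new state and decides
# what to report.
on_state_change () {
    object="$1"; state="$2"
    case "$state" in
        failed)   echo "ALARM: $object failed - switching to standby" ;;
        restored) echo "INFO: $object back in service" ;;
        *)        echo "INFO: $object is now $state" ;;
    esac
}
on_state_change eth0 failed
on_state_change eth0 restored
```

The appeal of the text-file approach described above is exactly this: users can add such rules and actions for their own hardware and software objects without touching the Event Manager itself.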

The Future

Future releases of HA-Linux will focus on advancing the scalability and availability of CPX8000 systems. Clustering, availability management, backplane messaging and network management will be added. Although HA-Linux systems support ethernet and ATM, additional communication protocols will be supported with HA-aware drivers. As the CPX8000 family of systems grows and the underlying architecture and hardware are enhanced, HA-Linux will also grow to take advantage of these changes. The direction will always be toward ever higher availability, while providing new features and functions that further enable the "application readiness" of the platform for the telecoms industry.

Summary

With HA-Linux it is possible to remove and replace system components without interrupting the operation of the system. In fact, with HA-Linux an operator can pull the active CPU from a CPX8000 system and watch the system switch to the standby CPU with minimal or no interruption of system processing. Operating system software, application software and even CPU board firmware can be upgraded without any system downtime; the firmware on the active CPU can even be upgraded while Linux is running. HA-Linux provides the full system configuration, event management and interfaces required to achieve in excess of 99.999% availability. Motorola's implementation of HA-Linux on the CPX8000 family of CompactPCI systems provides an application-ready, five nines platform for the telecoms industry. Motorola has developed an open system solution that makes application integration simple. This high availability solution is proof that Linux is ready for deployment in these demanding environments. ■


Universal Serial Bus



Within a short time, the Linux USB driver implementation has blossomed from "barely existing" to "suitable for everyday use." With the coming 2.4 kernel series the use of USB devices with Linux will be regarded as standard. In other words, it's time we took a look at the current state of affairs.

Since 1994, the "Universal Serial Bus" has been promoted by its initiators (Compaq, Intel, Microsoft and NEC) as the new standard interface for peripherals requiring low to medium data rates. However, the developers of peripheral drivers frequently had only a single target operating system family in their sights. Consequently, under Linux there was long no support at all for what was regarded, up to that point, as the "Useless Serial Bus". The modular Linux USB architecture then enabled other developers to quickly write drivers for the widely available and attractively priced USB devices, as long as the hardware manufacturers followed the USB 1.1 specification or were prepared to hand out data sheets and manuals. Originally, USB support was only included in the experimental developer series 2.3.*/2.4.0-test*; in the meantime, however, there is a well-maintained "backport patch" for kernel 2.2.16. Impatient distributors have (at the request of impatient users!) already included the driver software in their current distributions. Thus SuSE, Mandrake and Caldera already offer USB support "out of the box". Sadly, the support offered by the distributor rarely goes beyond configuration of the USB mouse



USB Principles

The "Universal Serial Bus" has a strictly hierarchical structure and is managed by a host controller. The host uses a master/slave protocol to communicate with the USB devices that are connected: in other words, all data transfers are initiated by the host. USB devices cannot communicate with each other, which is of no great significance for peripheral equipment and saves on costs, as the hardware and software need no great intelligence. It also means that problems like collision detection or bus arbitration don't arise. At present, a USB allows a maximum of 127 devices to be connected and provides a bandwidth of 12MBit/s via the four-pole connecting cable (+5V, ground, data+ and data-), of which only about 90 per cent can actually be used. The recently unveiled USB 2.0 specification should make up to 480MBit/s possible. To enable the synchronisation of multimedia data flows such as audio or video, USB transactions are embedded in a frame structure; one frame lasts exactly 1 ms (12,000 bits). Normally, the required USB host controller is integrated into the motherboard; older boards can be retrofitted with suitable PCI cards. In spite of the multitude of USB chip sets in existence, the manufacturers fortunately stick to only two standards: the "Universal Host Controller Interface" (UHCI) developed by Intel and the "Open Host Controller Interface" (OHCI) from Compaq and Microsoft. This makes no difference to USB device drivers (much as with SCSI host adapters). What are known as "class specifications" exist for frequently used USB devices such as modems, bulk storage, keyboards, mice, joysticks, monitors, audio devices, printers and USB-to-IrDA converters. As many devices as possible with the same function should therefore work with the same driver. Unfortunately, no open class specifications exist for webcams, digital cameras, scanners and USB/RS232 converters, so a special driver is needed as a rule. In the case of Linux this causes familiar problems: although the programming models of typical USB devices display no technical peculiarities, some manufacturers try not to give out specifications, or will do so only if a non-disclosure agreement is signed. Therefore, the

and keyboard. As before, knowledge of the Universal Serial Bus and the Linux USB driver architecture is necessary when operating scanners, printers and multimedia devices. As a user of one of the above-mentioned distributions, though, you do at least save yourself the kernel installation procedure described in the box "USB installation and configuration", as all the necessary driver modules will have been "factory compiled".

Kernel modules

The structure and dependencies of the device drivers for the Linux USB system are shown in Figure 2. The basis for all USB drivers is the combination of the USB core system (usbcore) with at least one host controller driver (usb-uhci or usb-ohci). Apart from

development of drivers for these devices is difficult, sometimes downright impossible, and at best requires time-consuming reverse engineering (with the help of tools like "USB-Snoopy" and "Playback" – see the "Info" box). The functions of USB devices are logically subdivided into so-called "interfaces". An interface covers all communication with a particular part of the device. An interface can have different operating modes, known as "alternate settings"; only one alternate setting can be active at a time. Audio input and output interfaces, for instance, use alternate settings to distinguish different sampling formats. An interface can include several end points. An end point is to some extent comparable to a TCP/IP port. End points are, however, unidirectional, which means that data can only be transferred in one direction.
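The bus figures quoted in this box – 12MBit/s, 1 ms frames, roughly 90 per cent usable – can be checked with a little shell arithmetic:

```shell
#!/bin/sh
# Check the USB 1.1 frame arithmetic: a 12 Mbit/s bus with 1 ms
# frames carries 12,000 bits per frame, of which about 90 per cent
# is usable for payload.
bits_per_frame=$((12000000 / 1000))    # 12,000 bits per 1 ms frame
usable=$((bits_per_frame * 90 / 100))  # roughly 10,800 usable bits
echo "$bits_per_frame $usable"
```

So of the nominal 12MBit/s, only about 10.8MBit/s is ever available to the devices sharing the bus.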

Fig. 1: The world turned upside down. In reality, the universal serial bus has a star structure and the hubs work physically like switches. The drivers communicate with one another using the end points of the interfaces of a device.

the integrated hub driver, the core driver offers the option of creating an information structure in the /proc directory (the "USB device file system"). Through this, drivers running in user mode can also access USB devices. The usbdevfs also enables tools such as lsusb to output a list of connected devices (analogous to lspci for plug-in cards on a PCI bus). After compilation and installation of the modules, the drivers can be integrated as usual into the current kernel using insmod:

insmod usbcore
insmod usb-uhci    # or: insmod usb-ohci

You can check the success of this action with dmesg. If no USB device is connected, at least the host controller driver should announce itself and output the number of USB ports detected. Further hubs are also detected by the USB core driver and automatically initialised, which in the case of most hubs is signalled by the lighting up of one or more LEDs. Connected or newly plugged-in end devices are also noticed by the USB core driver, but cannot be used without a special device driver (usb-storage, joydev/input/hid etc.). In the case of many distributions with USB support built in at the factory (SuSE, Mandrake and Caldera), a simple script run at system boot-up takes care of the integration of device-dependent driver modules. As a rule, only the mouse and keyboard driver modules are loaded regardless. The modules for printers, joysticks, etc. must be loaded manually later – as long as suitable entries are not present in /etc/modules.conf. For example, for a 3 1/2" USB disk drive, the entry might look like this:

[top] Fig. 2: Linux USB implementation is greatly modularised.
[above] Fig. 3: Connecting new USB devices causes a chain reaction.

alias scsi_hostadapter usb-storage

You can now address the disk drive via /dev/sda (as long as the appropriate SCSI modules have been compiled, of course). If you also adapt the entry in

Table 1: Audio, Communication, Joysticks, Hubs and Mice

Audio equipment
Vendor                 Product                  Comment
Philips                PCA 646BC                Microphone for Webcam
Canopus                Canopus DA-Port USB      D/A converter
Dallas Semiconductor   USB DAC                  Loudspeaker
Roland                 MA-150U                  Loudspeaker

Communication equipment
Vendor                 Product                                   Comment
3Com                   OfficeConnect Analogue Modem              56k Business Modem
3Com                   U.S. Robotics ISDN Pro TA                 ISDN Terminal Adapter
3Com                   U.S. Robotics Model 5605 USB Modem        56k Voice Fax-modem
Compaq                                                           ACM modem
Diamond                Supramaz 56k Usb (2890)                   Atlas Modem Board (Lucent Technologies)
Diamond                Supraexpress 56k USB Modem                SUP2780 Modem
Digicom                Tintoretto USB                            ISDN USB
Elsa                   Microlink 56K USB                         V.90 Modem
Entrega Technologies   Hub3U1E                                   4-Port-Hub / Ethernet 10/100
Linksys                USB100TX                                  USB-10baseT Ethernet Adapter
Lucent Technologies    CNet SinglePoint 56Kbps V.90 Fax Modem    Atlas Modem Board
MELCO                  LUA-TX                                    USB 10/100M LAN Adapter
Metricom               Ricochet G2                               wireless modem
Netgear/LiteOn         ea101c/LNE100TX                           USB to Ethernet Adapter
OXUS RESEARCH          OXUS-B ISDN                               ISDN Modem
Sirius Technologies    NetComm Roadster II 56 USB                56K Flex/V90 Modem
Telecom Device         TCD-UFE100                                USB 10/100M LAN Adapter
Zoom Telephonics       2986L                                     V90 Fax-modem

Joysticks and Game Pads
Vendor        Product                                      Comment
CH Products   CH 3-Axis 10-Button+POV USB Joystick         F-16 Combat Stick
CH Products   CH Pro Pedals USB                            aircraft pedals
CH Products   FlightSim Yoke LE                            aircraft control lever
Gravis        Game Pad Pro USB, Model #4211                Digital Joystick
Logitech      WingMan Extreme Digital 3D                   6-axis, 7-button USB/Gameport Joystick
Logitech      Wingman Game Pad                             USB HID 2-axis, 11-button game pad
Microsoft     Microsoft SideWinder Plug & Play Game Pad    Microsoft SideWinder Game Pad (2 axes, 6 buttons)
Microsoft     Sidewinder Game Pad Pro                      USB HID (2 axes, 6 buttons)
Microsoft     Sidewinder Precision Pro                     6 axes, 9 buttons
Rockfire      RM-203u (USB-Nest)                           USB/Gameport converter



/etc/mtools.conf (replace "fd0" with "sda" and delete "1.44m"), the USB drive can be accessed using mtools. However, the driver is not all that stable for bulk storage yet (something we were able to confirm using a Sony Vaio N505X notebook).

Agents and policy

Sadly, the loading of device-dependent USB modules cannot be fully automated with /etc/modules.conf, but having to load a dozen drivers in advance is not what the inventor intended, nor is it particularly efficient. Consequently, the 2.4 version of the kernel has a module loading mechanism for "hot-pluggable devices" such as PC Cards, USB and CPCI. With:

echo "/sbin/hotplug" > /proc/sys/kernel/hotplug

you can inform the "hardware agents" which command should be called after detection of new hardware. /sbin/hotplug is included with the "usbd scripts" package, which you simply unpack to /etc/usb after the download. hotplug must be moved to /sbin and, as shown in Figure 3, calls /etc/usb/policy as soon as a USB device has been inserted or removed. Only at this level is it decided which driver is to be loaded and whether or not additional configuration measures are to be taken. The README of the "usbd scripts" package requests that /etc/usb/rc.usb be added to the boot mechanism – in the case of Red Hat, for example, in /etc/rc.d/rc.sysinit after activation of the virtual memory (swapon -a >/dev/null 2>&1) by means of:

# Set up USB
if [ -x /etc/usb/rc.usb ]; then
    action "Starting USB" /etc/usb/rc.usb start
fi

Fig. 4: There are also colourful tools for Linux to display the USB devices currently connected.

The rc.usb script mounts /proc/bus/usb for system programs such as usbview (see Figure 4) and loads all the USB modules specified in /etc/sysconfig/usb. Anyone wanting to fully automate the subsequent loading of modules must add the devices to the system using the manufacturer ID, the device code and, if necessary, the version number. The "Red Shooter USB Joystick", for example, has the manufacturer number 663 and device number 9805 (the version number is irrelevant here). Otherwise, the

Troubleshooting

The most frequent cause of non-functioning USB devices is missing drivers or missing/incorrect entries in the /dev directory. Missing drivers are detected by the USB core driver and noted in the kernel log, so a check can at least be made to see whether the device is supported by a driver that has been loaded. Given the rapid rate of Linux USB development, it can also happen that existing /dev entries have the wrong major/minor numbers. This can be clarified with a glance at the relevant documentation (/usr/src/linux/Documentation/devices.txt). As a result of the virtually unlimited possible combinations of host controllers and USB devices, occasional hardware problems can occur, such as communication errors during the detection process. In such cases, the USB mailing list archive offers further help. Many USB devices, and above all hubs, can be pretty sensitive to electrical disturbance such as may occur when plugging in or switching on mains adapters or fluorescent lamps. Normally this leads to the hub switching off and the "loss" of all the USB devices connected to it. The Linux hub driver, as opposed to Windows, is smart enough to notice this and reactivate the hub (message: "already running port %i disabled by hub (EMI?), re-enabling…"). If the individual USB drivers support reconnection (such as the mouse driver, for instance), the USB device will only be unaddressable for a few seconds. It is possible, however, that the associated user program will still have to be restarted.



code numbers can be determined using dmesg, lsusb or usbview. The appropriate entry in /etc/usb/drivers/hid will then look like this:

663/9805/*)
    if $MODPROBE joydev >/dev/null 2>&1
    then
        message ... loaded joydev
        DRIVER=hid
    fi
    ;;
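Stripped of the modprobe side effects, the dispatch in such an entry is just a shell case statement over the manufacturer/device/version triple. A self-contained sketch (the IDs are taken from the joystick example above; echo stands in for the actual module loading so it can run anywhere):

```shell
#!/bin/sh
# Minimal mimic of the /etc/usb/policy dispatch: match the
# manufacturer/device/version triple and name the driver to load.
match_driver () {
    case "$1" in
        663/9805/*) echo "joydev" ;;   # Red Shooter USB Joystick
        *)          echo "unknown" ;;
    esac
}
match_driver "663/9805/1.00"
match_driver "1234/5678/1.00"
```

The `*` in `663/9805/*` is exactly why the version number is irrelevant for this device: any version of the same joystick matches the same entry.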

Devices supported

A large number of USB devices are already supported by Linux because an astonishing number of manufacturers are abiding by the predefined class specifications. But as usual, exceptions prove the rule. Before acquiring a USB device it is worth taking a look at the constantly growing list of supported devices. Tables 1 to 3 give a brief overview, but make no claim to completeness: in particular, the USB mice and keyboards not listed here shouldn't present any problems with Linux. A tip when using USB mice with X: instead of specifying the device file of a specific mouse (e.g. /dev/input/mouse0) in the file XF86Config, it is better to use /dev/input/mice. USB mice can then be plugged in and out during operation without having to restart the X server. Worth mentioning as an especially exotic USB device at this point is the Prolific PL-2302 USB-to-

USB installation and configuration Before putting USB devices into operation under Linux you must first surmount the obstacle of kernel compilation. To do this you need up-to-date kernel sources and the backport patch for the 2.2 kernel series. In the case of the 2.4 test series the USB drivers are already included, but very few distributed products are adapted to the new kernel or even offer it as an option. cd /tmp wget -c wget -c cd /usr/src mv linux linux.old bunzip2 -cd /tmp/linux-2.2.16.tar.bz2 | tar xf cd linux gunzip -cd /tmp/usb-2.4.0-test2-pre2-for-2.2.16-v3.diff.gz | patch -p1 make menuconfig Apart from the usual options, the kernel USB configuration menu offers quite a few selection possibilities. The easiest is to activate all the USB drivers as a module: any drivers definitely not needed can be left out and compiled later if necessary. However, if you have a USB keyboard the “Support for USB” (usbcore), the driver for the host controller (UHCI or OHCI), as well as “Keyboard Support” should be permanently included in the kernel compilation, otherwise it is only with tricks (e.g. with the “Initial Ramdisk”) that you can guarantee the keyboard will work in critical exceptional cases. In the case of other kernel menu options a look at the default settings of the kernel configuration or the distributor's manual will help. After a: make clean && make dep && make bzImage make modules && make modules_install the freshly baked kernel must be transferred as usual to /boot and integrated into the boot process using LILO, Loadlin, Grub or similar mechanisms. Two entries are still missing at the start of the file /etc/conf. modules (or /etc/modules.conf in the case of newer modutils): keep path[usb]=/lib/modules/`uname -r` The first line tells the commands insmod, modprobe and depmod to maintain the existing path list and to add the following path configuration to the existing list. 
In our example the existing path is simply extended by the directory containing the modules for USB devices. The following entry in /etc/fstab ensures that the USB kernel interface will be accessible to the various USB system programs after the next reboot:

none /proc/bus/usb usbdevfs defaults

Alternatively, the USB device file system can be mounted manually:

mount -t usbdevfs none /proc/bus/usb
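Once usbdevfs is mounted, the devices file can be read directly. As a rough sketch, the vendor and product IDs can be pulled out with awk; the listing fed in below is a shortened, hypothetical capture, and on a live system you would read /proc/bus/usb/devices itself:

```shell
# Sketch: extract vendor/product IDs and product names from a
# /proc/bus/usb/devices style listing. The sample data is a
# hypothetical capture; on a real system feed in the file itself.
awk '/^P:/ { vid = $2; pid = $3 }
     /^S:  Product=/ { sub(/^S:  Product=/, ""); print vid, pid, $0 }' <<'EOF'
T:  Bus=01 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#=  2 Spd=12  MxCh= 0
P:  Vendor=05ac ProdID=0201 Rev= 1.00
S:  Product=USB Keyboard
EOF
# prints: Vendor=05ac ProdID=0201 USB Keyboard
```

The same one-liner, run against the real file, gives a quick overview of which devices the bus has enumerated.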

2 · 2000 LINUX MAGAZINE 53



Supported mass storage and serial interfaces

Mass storage devices
Vendor                        Product                         Comment
AIWA                          TD-U8000                        Tape Drive
Caravelle                     RW-448USB                       CD-RW
Castlewood                    ORB2SE00                        2.2GB Removable Media Hard Drive
Hagiwara Sys-Com              FlashGate CF                    CompactFlash Reader/Writer
Hagiwara Sys-Com              FlashGate DUAL                  PC-Card & SmartMedia Reader/Writer
Iomega                        USB Zip 250                     250 MByte
Iomega                        Zip Drive                       100 MByte
LaCie                         USB Hard Disk
Microtech International Inc   USB-SCSI-DB25                   OEM Shuttle SCSI Controller
SanDisk Corporation           ImageMate CompactFlash USB      Compact Flash Reader SDDR-31
Shuttle Technologies, Inc.    HP USB CD-Writer Plus
Sony                          MSAC-US1                        Memorystick Adapter
Sony                          Spressa USB Plus (CRX100E/X2)   CD-RW 4x/4x/6x
VIPowER                       USB MobileRACK                  USB/IDE Adapter
Y-E DATA                      FlashBuster-U                   3.5-inch Floppy

Serial interfaces
Vendor               Product            Comment
ConnectTech          WhiteHEAT          4 x Serial
Digi International   Acceleport USB 4   4 x Serial
HandSpring           Visor              Serial Emulation
Keyspan              PDA Adapter        1 x Serial
Keyspan              USA-19             1 x DB-9, 57600 bit/s
Keyspan              USA-19W            1 x DB-9, 230400 bit/s
Keyspan              USA-28             2 x Din-8, 115200 bit/s
Keyspan              USA-28X            2 x Din-8, 230400 bit/s

Webcams, Digital Cameras, Keyboards, Printers

Webcams
Vendor             Product
Philips            PCA 646VC
Askey              VC010 (Type 1 & 2)
Askey              VC080 (CMOS Sensor)
AvCam              AvCam 600
Creative Labs      USB Webcam 3 (CT6840)
Lifeview           Robocam
Maxxtro            PC Camera (OV511-based)
Mustek             VCAM-300
Philips            PCA645VC, 645VC
Philips            PCVC675K, 680K
Terratec           USB Camera (CPiA-based)
Tevion             Model 9308 (ALDI-Webcam)
Trust              Spacec@m Lite
Zoom Telephonics   ZoomCam USB 1595

Digital Cameras
Vendor    Product
Canon     Powershot S10
Kodak     DC 220, 240, 260, 265, 280, 290
Mustek    MDC 800
RICOH     RDC-5000
Sony      DSC-070, DSC-F505, DSC-F505V
Toshiba   PDR-M4, M5

Printers
Vendor            Product
Aten              UC-1284B Printer Cable Adapter
Brother           HL1250
Canon             BJ F300, NJC-3000
Epson             Stylus Color 670, 640, 760
Epson             Stylus Photo 1270
Hewlett-Packard   DeskJet 810C, 812C, 840C, 880C
Hewlett-Packard   DeskJet 895C, 970Cse, 1220C
Hewlett-Packard   Photosmart P1100
Lexmark           Optra S 2450, E 310

Scanners
Vendor            Product
Microtek          X6USB
AGFA              Snapscan1212u_2
Acer              Brisa 640U
Colorado USB      9600
Epson             Perfection 1200 U / P
Hewlett-Packard   Scanjet 4100C, 5200, 5200C
Hewlett-Packard   Scanjet 6200C, 6300C
Umax              Astra 1220U


USB adapter. This is a type of null modem cable for connecting two computers via USB. The driver installs itself as a network device (plusb*). You can then set up a connection with:

ifconfig plusb0 pointopoint

The transfer rate is approximately 5 MBit/s.

Programming with URBs

Now we’ll take a brief look at USB driver development. The basis of communication is the so-called “USB Request Block” (URB). The URB design is orientated towards the Windows API, but of course does not use Windows code; it has been improved in many respects and adapted to the Linux environment. A URB is a data structure which passes to the USB subsystem all the information necessary for a transfer (device, interface, endpoint, user data). The transfer specified in the URB is started by the host controller driver and runs in the background until it is finished. In other words, after the URB has been submitted the caller doesn’t have to wait for the end of the transaction but can carry on with other work in the meantime (asynchronous behaviour). Once a transfer has finished, either successfully or with an error, the system jumps to the callback function (“completion routine”) specified in the URB, which can then process the data further and also trigger new transfers. With the aid of the kernel function usb_alloc_urb(), URBs can be allocated and later linked so that the successful execution of one USB request automatically triggers the next. This is especially useful for continuous data streams, as the completion routine only has to deliver or retrieve the data (usb_submit_urb()) – everything else takes


place automatically in the USB subsystem. If a USB request runs into a timeout, the associated URB can be cancelled (usb_unlink_urb()). However, for many USB devices it is no longer necessary to program a “real” kernel driver, as almost all functions can be addressed from user mode by means of the USB device file system (usbdevfs), which considerably simplifies driver development.

Developing your own USB hardware

If you wish to develop your own USB devices the 8051 derivatives by AnchorChips are very useful. (They were also used, for instance, in the Digital Audio Broadcast USB project.) These modules can easily be programmed using a USB interface. An example of this can be found in the “usbstress” package.

Integrating USB drivers

Apart from the normal driver entry points (open() etc.), each USB driver module has two further functions, which are called when a device is plugged in (probing) or disconnected. During its initialisation with usb_register(struct usb_driver *), a USB device driver should register in the area reserved for USB drivers (major 180) – the minor start value is defined in the usb_driver structure. For each newly plugged-in device and each of its interfaces, the probe functions of all currently loaded USB kernel drivers are called with a pointer to the device structure and the interface number, in the hope that one of the drivers accepts this interface. In that case it returns a non-zero value and is thus bound to this device and interface. Probing takes place for each interface individually, so that multifunctional devices can also be recognised and used. With “hot-plug support” activated (in the 2.4 kernel only) it is possible to carry out further actions via /proc/sys/kernel/hotplug (see Figure 3). On unplugging the device, the disconnect function is called. This is a critical point, because the associated device file could still be open in an application. Further accesses must result in an error message rather than letting the kernel driver crash on null pointers.


USB in the future

USB for Linux has emerged from the hacker phase and already provides kernel support for many common USB devices. Thanks to the very stable URB API, a new driver comes into being virtually every week. However, putting USB peripherals into operation under Linux is still not entirely straightforward and the user may well need further assistance. ■

USB driver in user mode

The USB device file system offers an extensive range of options for addressing USB devices from user mode, without requiring a special kernel driver. The usbdevfs is usually mounted on /proc/bus/usb.
• /proc/bus/usb/devices – This file lists all the USB devices connected and their characteristics. An application that performs a select() on this file is notified as soon as a USB device is plugged in or removed.
• /proc/bus/usb/drivers – This file lists all currently loaded USB kernel device drivers.
• /proc/bus/usb/<bus number>/<device address> – Using these files, transfers can be initiated to and from USB devices. The API is heavily modelled on the kernel URB API: instead of calling kernel functions such as usb_submit_urb(), ioctl calls are made and the parameters are passed as pointers to a structure. A good, simple example which illustrates this mechanism for all types of transfer is the “usbstress” package.

Info

* Universal Serial Bus Specification, Revision 1.1 (September 1998), USB Implementer's Forum
* Open Host Controller interface (OHCI) Specification, 1996, Compaq
* Universal Serial Bus Device Class Definition for Audio Devices (Release 1.0, March 1998), USB Implementer's Forum
* Universal Host Controller interface (UHCI) Design Guide, Revision 1.1, March 1996, Intel Corp.
* “USB-Snoopy”:
* “Playback”:
* lsusb and libusb:
* List of USB devices supported:
* Mailing list archive of 8051 microprocessors with USB:
* Homepage of the Linux USB project:
* Linux USB Guide:
* Homepage of the USB for Linux project ■



Setting up Debian GNU/Linux 2.2 – “Potato”


The new version of Debian’s distribution is just about to arrive. Version 2.2 will be released for the Intel 386, Alpha, Sparc, M68k and PowerPC. What advantages does it have to offer? What has changed? Is it worth upgrading? What problems will you be letting yourself in for?

It’s often said that Debian GNU/Linux is “the distribution for professionals only” or “not for beginners”. There’s also a commonly held view that installation can be pretty complicated. The reason that Debian GNU/Linux is used by a great many Linux professionals, however, isn’t because you need to be a professional to manage to install it. No, it is the high quality of the distribution and its independence from any commercial supplier that is convincing more and more people to try it. Just like the Linux kernel, Debian GNU/Linux is supported worldwide by more than 500 maintainers. Installation of Debian GNU/Linux isn’t accomplished using a user-friendly graphical interface. Whilst this may be viewed as a disadvantage, particularly by newcomers to Linux, there are two good reasons for dispensing with such a luxury. Firstly, if

you are setting up a server, a graphical interface is unnecessary and often undesirable. Secondly, this means that Debian GNU/Linux can be installed on many older, lower-performance systems on which trying to run X11 would be no fun at all. If you are familiar with Debian from older versions you’ll know that upgrades to a newer version can be accomplished without any difficulty. Thus, there is rarely any reason to perform a new installation which perhaps would warrant a lavish graphical installation routine. In general, a Debian GNU/Linux system is installed once only. At first glance very little has changed on the installation side since version 2.1. Installation is still largely undertaken by the program dbootstrap. If you have already installed version 2.1 (slink) you will find things very familiar. During the installation


many different options are available at each step. For example, you can boot from an IDE CD-ROM and then continue the rest of the installation from a SCSI CD-ROM if you want to. How many other distributions let you do that? The diversity of options available during installation has contributed to the reputation Debian GNU/Linux has for being difficult to install. However, for the great majority of installations most of the options won’t even be required. The default values are nearly always the most sensible and correct choice, allowing you to install Debian GNU/Linux by repeatedly pressing the Enter key.

Installation improvements

Network configuration has been simplified in comparison to version 2.1, as long as a functioning DHCP server is present on the network. In addition, the program taskselect has been integrated into the installation sequence. Another change since Debian 2.1 is that the choice between a server and a workstation system during installation is no longer irreversible. At any point (even after installation of the base system) you can now restart the installation program and make more selections from the list of packages. There have been many other detailed improvements to the installation routine. For example, at long last the program now keeps a record of the identity of the CD-ROM drive, so that you don’t have to reselect it each time. When installing Debian GNU/Linux on different hardware platforms, the only differences you will notice concern the partitioning of the hard disk and the initial start-up of the Linux kernel. This is unavoidable, as there is no version of the i386 program fdisk available for other architectures. However, once the installation program has started it behaves in exactly the same way on all platforms. Because of the different hardware, of course, there are still differences to be observed, but this is also the case among i386-based systems. For example, i386 systems may have different types of mouse, connected to different devices: serial or PS/2 (/dev/ttyS0, /dev/ttyS1 or /dev/psaux). On the Apple Macintosh (m68k and powerpc) the mouse is connected via the so-called ADB port, the device for this being /dev/adbmouse. More recent i386 computers and PowerMacs use the USB port for the mouse (/dev/usbmouse).

Update of Debian GNU/Linux 2.1 (slink)

Of course, you don’t have to re-install your entire system if you simply want to utilise the latest packages from Debian GNU/Linux 2.2. An existing Debian GNU/Linux 2.1 can be brought up to date quite easily. However, it’s important to note that in


10 step installation

As far as installation goes, Debian GNU/Linux differs from other distributions only in a few details. Most notably, you have a great deal of freedom and can control the process to a fine degree of detail if you want to. It’s up to you.
• 1. Space: You must have one unassigned or spare partition, or one completely empty hard disk. If you don’t have a spare partition use the fips program (which is found on the CD under install/) to reduce the size of an existing partition.
• 2. Boot: When you restart the computer you can either boot directly from the CD-ROM or create boot diskettes from the files rescue.bin and root.bin as well as driver-1.bin, driver-2.bin and driver-3.bin, which you will find in the directory dists/stable/main/disks-ARCH/current/disks-1.44/ on the CD-ROM. Here, ARCH is to be replaced by the name of the respective architecture, for example i386 or m68k.
• 3. Questions: Respond to all questions from the Debian GNU/Linux installation program with the Enter key if you are unsure of the correct reply.
• 4. International: Select the desired keyboard layout.
• 5. Partition: Create at least two partitions in the free hard disk space. Use one as swap and the other as the root partition.
• 6. Kernel: Install the kernel and the modules from the CD-ROM.
• 7. Elements: Select the required modules for your system.
• 8. Base: Install the base system from the CD-ROM or another medium.
• 9. Time: Select the appropriate time zone.
• 10. Shutdown: If you are starting the system for the first time then after the kernel is loaded the installation program will ask you to specify a password for the root user (superuser) and create a new ordinary user. Install the rest of the packages after you have chosen one of the precompiled configurations.

the course of developing Debian GNU/Linux 2.2 a great many of the system libraries and programs were replaced by later versions. An update of these libraries using dselect will fail because of the complex package inter-dependencies. What you must do instead is this. Mount the first Debian GNU/Linux 2.2 CD manually as superuser and change the file /etc/apt/sources.list so that installation can be done via direct access to the CD. The entry for this is:

# - CD mounted manually under /cdrom
deb file:/cdrom/debian unstable main

Now execute the command apt-get update. This re-reads the list of packages available on the CD. After this you can install the new libraries and programs with apt-get dist-upgrade. Don’t worry if the command fails, just start it again. When the process has been successfully completed you can update all the remaining packages with dselect. If you are satisfied with your existing Debian GNU/Linux 2.1 system, maintained over many months, and don’t consider having the very latest packages as all that important, naturally you don’t have to upgrade to version 2.2. However, there are a few packages that are worth bringing up to the latest version. Firstly, many errors and gaps in security



Fig. 1: Package installation using aptitude

Fig. 2: Package installation with GNOME-Apt

have been found and eliminated in the past months. These bug-fixes include Y2K bugs. Secondly, you might want to install the latest version of XFree86 or GNOME. If you want to do this, Debian GNU/Linux provides a way. On the Web page “Vincent’s Bazaar” you will find links to all of the programs and packages just described that are appropriate to Debian GNU/Linux 2.1. If you use apt you can bring your system up to the latest state of development using the following entries in /etc/apt/sources.list:

deb y2k-update main
deb slink-update main
deb xfree-update main
deb stable updates

Alternatively, you can copy the packages to your hard disk and then install them manually using dpkg

-i package.deb. As an alternative to your Web browser you can use the program wget to do the download. If you want to use an up-to-date Linux kernel 2.2.x with Debian GNU/Linux 2.1 it may be necessary to install some packages from Debian GNU/Linux 2.2. Debian GNU/Linux 2.1 came out when the Linux kernel 2.2.x was still in development, so not all of the packages contained in it are completely compatible with newer kernels. To determine the particular packages concerned, refer to the web page “Errata: Linux kernel 2.2.x with Slink”. However, we would recommend a full update to Debian GNU/Linux 2.2. You will very quickly notice that the many new programs justify the installation effort.

What’s new?

On now to the new packages in version 2.2. To present every one of the new packages would, of


course, be beyond the scope of this article, so we will just take a look at the most interesting ones. New among the basic tools is the program APT for package management. Apt represents the next generation of Debian GNU/Linux package management. The back-ends – the programs that run in the background and carry out the actual work – are largely complete. However, the front-ends, the programs with a user interface, are still in the course of development. Apt stands for “Advanced Package Tool”. It’s a program designed to assist the system administrator (in other words, you) in the installation and management of programs. The first step in using Apt is to modify the configuration file /etc/apt/sources.list. This file contains information about the location of each package. Apt supports a number of different installation sources; currently these are cdrom, file, http and ftp. Each of these sources is specified on a separate line in the file, and the order matters: entries placed higher up have a higher priority. The format of the entries is as follows:

deb url distribution [component1] [component2] [...]

The url field specifies the installation source and the path to the root directory of the Debian distribution. On a CD-ROM the path is normally the directory /debian. You use the distribution field to select the version that you want to install. Normally, a choice is made between stable (for the current, stable version) and unstable (for the developer version). The component fields refer to the different classifications of files in the distribution. Examples are: main, contrib, non-free, non-US. One or more entries are permitted, separated by blanks. Here are some examples. Bear in mind that you are at liberty to use a number of these entries in the configuration file at the same time.

deb \
stable main contrib

This entry uses the archive with the classifications stable/main and stable/contrib.

deb \
unstable main contrib non-free

This entry fetches the files via FTP from the directory /debian. The as yet uncompleted (unstable) version of Debian GNU/Linux is used and access is made to the areas main, contrib and non-free.

deb file:/home/fr/debian \
stable main contrib non-free

This entry uses a local copy of the data on the hard disk. (This may also be a directory mounted through NFS.)
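Since the format of these lines is easy to get wrong, a quick syntax check can be sketched with grep. The two entries in the here-document are purely illustrative (the ftp.debian.org URL is an example, not a recommendation); the same pattern can be run against the real /etc/apt/sources.list.

```shell
# Sketch: count lines that match the pattern
#   deb <url> <distribution> <component> [component ...]
# The sample entries are hypothetical examples.
grep -Ec '^deb (cdrom:|file:|http:|ftp:)[^ ]* [^ ]+( [^ ]+)+$' <<'EOF'
deb file:/cdrom/debian unstable main
deb http://ftp.debian.org/debian stable main contrib
EOF
# prints: 2
```

A count lower than the number of deb lines in the file points to a malformed entry.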


apt-setup and apt-get

The program apt-setup provides a user interface for adding entries to the file /etc/apt/sources.list. With apt-setup you can use the “http”, “ftp” and “filesystem” methods. For the “cdrom” method, use is made of the program apt-cdrom. The menu entry “edit sources list by hand” invokes the editor vi. A single option, “probe”, can be supplied at the start; this causes a CD-ROM that has been inserted to be read immediately. apt-get is the actual user interface for the management of packages. This command-line oriented tool is very easy to use. It has the following syntax:

apt-get [options] [command] [package ...]

Since you are unlikely to need some of the commands very often, we will just mention the most important ones:
• update – this updates the overview of the available packages, reading in the information from the files packages.gz of the respective distribution.

The rise and rise of Debian
• Debian 0.01 to 0.90 (August-December 1993)
• Debian 0.91 (January 1994): this version had a simple package management system with which packages could be installed and deleted. A couple of dozen developers worked on Debian at that time.
• Debian 0.93R5 (March 1995): At that time one or more developers were responsible for each package. Package management was handled via dpkg, which is utilised after the basic installation.
• Debian 0.93R6 (November 1995): dselect was introduced. This was the last Debian version that was still based on the a.out binary format. Around 60 developers were working on Debian GNU/Linux.
• Debian 1.1 Buzz (June 1996): The first version with a so-called code name. Like all subsequent ones, this code name originated from the film “Toy Story”. This idea was introduced by Bruce Perens, who at that time was the project leader. This version used the new ELF format as well as kernel version 2.0, and comprised 474 packages.
• Debian 1.2 Rex (December 1996): 848 packages, 120 developers.
• Debian 1.3 Bo (July 1997): 974 packages, 200 developers.
• Debian 2.0 Hamm (July 1998): The first Debian version which, in addition to the i386 architecture, also supported the m68k range of computers, that is the Amiga, Atari and Macintosh. This version, with Ian Jackson as project leader, was already based on the libc6 library. It had more than 1500 packages, and approximately 400 developers compiled the packages.
• Debian 2.1 Slink (09 March 1999): From this version on, support was provided for the Alpha and Sparc architectures as well. The project leader was Wichert Akkerman. 2250 packages were included in this version, which was supplied on two official CDs. Also included from this version on was apt, a new program for package management.
• Debian 2.2 Potato (2000): Apt is a key element of this version. The number of packages has virtually doubled compared to version 2.1. Also noteworthy is the integration of GNOME, glibc 2.1, kernel 2.2.1x and XFree 3.3.6.
• Debian x.x Woody: The code name for the next version was defined with the code freeze of Potato on 15.01.2000. Development has begun…




Fig. 3: taskselect – pre-configured package selection

• upgrade – use this command to update all of the packages installed on the system to the latest version. All packages already installed for which a newer version is available will be updated.
• dist-upgrade – this is an enhancement of upgrade. Choosing it ensures that the packages of greatest importance for the system are installed first.
• install – this command is used to install packages. Each package (the package name, such as sendmail, suffices here) is fetched and installed according to the apt configuration.
• remove – this command is used to remove packages.

Front ends

aptitude is a text-based tool for installing packages. It follows a rather different philosophy to dselect. A strict distinction is drawn between installed packages, non-installed packages, virtual packages and packages for which a

SPI – Software in the Public Interest

SPI is a non-profit organisation that was founded to assist projects which develop software in the public interest. It encourages programmers to use the GNU General Public License (GPL) or any other licence that permits free dissemination and free usage of software. SPI encourages hardware developers to publish the documentation for their work so that drivers may be written for their products. SPI was founded on 16 June 1997 as a non-profit organisation in the state of New York, USA. Since then it has become an umbrella organisation for various community projects. The statutes and certificate of incorporation define the objectives and the way that SPI works. SPI has a board of directors consisting of four members: a president, a vice-president, a secretary and a treasurer. SPI currently supports the following projects: Berlin, Debian, GNOME, LSB, Open Source, Open Hardware. If you would like to make a donation to SPI or wish to contact it other than by electronic means, you can reach it at the following address: Software in the Public Interest, Inc., PO Box 273, Tracy, CA 95378-0273, USA. Further information about SPI can be found on the Internet.


newer version is available. Within these four groups all packages are shown in a tree structure, which also represents the directory structure within the Debian archive (for example: main/admin or non-US/non-free). With aptitude most of the keyboard functions are the same as for dselect, so you should feel immediately at home. With its GTK+-based interface, gnome-apt fits perfectly into the latest Debian GNOME desktop. This program, too, helps you to deal with all the important tasks that arise with package management. Under the “File” menu are two entries for configuring the program, as well as the entry “Quit”, which terminates the program. “General Preferences” allows you to remove the package description from the program’s main window (“Show package details in main window”). This gives you more space to display the package list with the various categories. Here you can also alter the order of the columns in the package list. More interesting is the second entry, “Sources”. Here you can specify the sources from which the package information as well as the actual packages are to be installed. You no longer need to use a text editor to edit the file /etc/apt/sources.list. Instead, you can use a graphical user interface. The second menu entry, “Actions”, contains the actions that you want to perform on the package list. “Update” reads in the package files afresh and updates the overview. If the files are not available on a local medium (CD-ROM or hard disk) they are fetched using FTP or HTTP from the specified server. “Complete run” installs the chosen packages or removes packages that are no longer required from the system. “Mark upgrades” is equivalent to apt-get upgrade and marks all new packages so that they are updated the next time “Complete run” is selected. “Mark smart upgrade”, on the other hand, is used to update a Debian system to the next version. This corresponds to apt-get dist-upgrade.


On the “Package” menu the most important entry is certainly “Search”. Concealed within it is a truly powerful tool for locating individual packages in the extensive list. Bear in mind that searching is done here not just on the names of packages but also on the package descriptions and other information. Unix experts could achieve this with a skilful grep on the appropriate files, but now everyone else can do it too. From the “View” menu you should take a look at the last two entries. Here you can control the order in which packages are shown in the package list. Using “Group” you define how packages are grouped. You can choose to have them sorted alphabetically, by section, by priority or by status. Sorting by section is similar to dselect, with packages shown according to their association, for example admin or x11. Sorting according to status will show you the installed and not yet installed packages, i.e. two groups. In addition, packages can be displayed according to priority, classified into: Extra, Important, No version available, Optional, Required and Standard. The “Order” menu entry is used for sorting within the groups just mentioned. Again you can select from among the four sort orders mentioned above.

Task packages

If you are a newcomer it isn’t easy to choose which of the more than 4,000 packages in your Debian GNU/Linux distribution to install. But even professionals can lose track of things. If you don’t want to read the description of every package to decide whether it should be installed or not, you can put yourself in the hands of the Debian developers and install carefully pre-assembled groups of packages. A whole series of task packages are available. All of these packages have names beginning with task-, so you can easily search for them in dselect (with the / key) and install them. As an alternative to selecting these task- packages using dselect or Apt there is also a special program specifically designed to take care of them. Using the tasksel program you can access all of the available task packages. At the start you are shown a list of the available packages. Using the cursor keys you can switch between packages and with the Enter key or spacebar you can select a package for installation. Pressing the key again cancels the selection.
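The common task- prefix makes these packages easy to pick out of any package listing. A trivial sketch (the package names in the here-document are illustrative, not a complete list):

```shell
# Sketch: filter task packages out of a (hypothetical) package list
# by their common task- name prefix.
grep '^task-' <<'EOF' | sort
task-gnome-desktop
libc6
task-c-dev
vim
EOF
```

This prints only task-c-dev and task-gnome-desktop; the same grep works on the output of any tool that lists one package name per line.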

From a user viewpoint

Infos
Vincent’s Bazaar: UNOFFICIAL Debian updates
Errata: Linux-Kernel 2.2.x with Slink
SPI: Software in the Public Interest ■

Debian GNU/Linux 2.2 includes the GNOME graphical desktop, as well as many applications based on GTK+ or GNOME. Top of the list is, of course, the GIMP, a powerful graphics editing package. For the office user there are such interesting applications as Abiword (word processing) and Gnumeric (spreadsheet). Unfortunately, KDE has once again failed to find its way into the Debian distribution. Its inclusion would undoubtedly benefit users, who could then make their own choice of favourite desktop. However, the revised licence of Qt (the library on which KDE is based) is still not compatible with the Debian guidelines. Those interested in the background to this already very old story can take a look at 2000/06/17/961300740.html. ■ Fig. 4: GNOME Desktop with applications




Dynamic routing protocols


Dynamic routing protocols are very important for computers that must be accessible to the world at all times. What these protocols are and how they are configured is the subject of this article.

There are many computers that must always be accessible even when there is a breakdown in the network. Examples are mail servers, database servers and e-commerce systems. In addition, maintaining fixed tables of routes between networks on a constantly-changing Internet would be an impossibly complex task. Anyone who needs a resilient network that can find its own way around breakdowns or bottlenecks will need dynamic routing protocols. Routing protocols are protocols that enable two routers to exchange notes with each other as to which networks can be accessed through them. By this means, and some clever algorithms, routers are able to do this job all by themselves, without administrative intervention, adapting the routes used whenever the network changes. In most cases routing protocols run on special hardware and software, but it is possible to achieve something similar under Unix/Linux.

The theory

The many demands imposed on routing protocols, plus the fact that the problem has been around for a long time, have led to a whole range of protocols

being developed. The main difference in terms of demand is between “internal” and “external” routing protocols. Internal protocols are designed to manage and distribute routing data within a small – or not so small – system of routers and/or computers. One example could be the network of a company with several departments at various locations. The job of an internal routing protocol would be to inform the entire company network how, for example, the database server can be accessed from any location on the network. If this company network were then connected to a larger network such as the Internet, it would be the job of an external protocol to distribute information across this larger network as to how the network of this company can be accessed. The company network is regarded from outside as one unit and can be treated as an “autonomous system”. The working principle is similar in all routing protocols: a router has some kind of network connected to one of its interfaces. It is therefore aware of how to access this network and informs its neighbours of this using the routing protocol. The neighbours then remember that they know someone who knows how to access this network and, in the manner of village gossip (but more truthfully, we hope), they in turn inform their neighbours. These remember that they know someone, who knows someone, who knows how to access the network, and so on, until eventually everyone knows. On this principle, a router will quite often receive messages from several of its neighbours that they know a route to the target network. The routes may be different, though all may be correct. From these routes, the router must select the one that appears most suitable according to certain characteristics. In so doing, it must take care to avoid so-called “routing loops”, which would result in data going round in a circle. And it must do this quickly, so that the time taken until all the routers have the latest information – known as the “convergence period” – is as short as possible.

Routing Information Protocol

We will look in more detail at three routing protocols – RIP, OSPF and BGP – because of their present-day importance and their free availability. The "Routing Information Protocol", RIP for short, is perhaps the best known of the three. It exchanges routing information at pre-defined intervals and regards a path as optimal when it leads to the target via as few intervening nodes (known as hops) as possible. The choice of paths is worked out using the distance-vector algorithm. RIP has a number of disadvantages. Firstly, the pre-set interval must elapse before RIP recognises and can act on a changed situation such as a failed connection. Secondly, the choice of routes may not be ideal when a diversion via several routers with fast connections competes with a route via few routers with slow connections: RIP takes the slowcoach route and requires manual intervention to give preference to the diversion. Thirdly, RIP regards a router 16 hops away as unreachable, which means that the diameter of a network run using RIP cannot be larger than 15 routers. Fourthly, RIP in its old version 1 works only with the TCP/IP address classes A, B and C, without network masks, which makes version 1 useless for present-day requirements. Version 2 has at least resolved this last point, which is why RIP has remained the most popular internal routing protocol until now.
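The distance-vector idea behind RIP can be sketched in a few lines of Python. This is purely an illustration of the algorithm, not of the RIP packet exchange; the router names and networks are invented. Each router repeatedly adopts the best hop count a neighbour advertises, plus one, and anything that reaches 16 hops counts as unreachable:

```python
# Toy distance-vector routing in the spirit of RIP (illustration only,
# not the actual RIP wire protocol). A hop count of 16 means "unreachable".
INFINITY = 16

def converge(links, networks):
    """links: {router: set of neighbouring routers};
    networks: {router: set of directly attached networks}.
    Returns {router: {network: hop_count}}."""
    # Every router starts off knowing only its own networks (0 hops away).
    tables = {r: {n: 0 for n in networks[r]} for r in links}
    changed = True
    while changed:                      # repeat until no table changes
        changed = False
        for r in links:
            for neigh in links[r]:
                for net, hops in tables[neigh].items():
                    cand = min(hops + 1, INFINITY)   # one hop further away
                    if cand < tables[r].get(net, INFINITY):
                        tables[r][net] = cand
                        changed = True
    return tables

# Three routers in a row: A - B - C, each with one attached network.
links = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}
networks = {"A": {"netA"}, "B": {"netB"}, "C": {"netC"}}
tables = converge(links, networks)
print(tables["A"]["netC"])   # netC is two hops from A
```

Note that this sketch only counts hops, which is exactly RIP's weakness: it has no idea how fast each hop is.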

Shortest Path First

"Open Shortest Path First", OSPF for short, is a more powerful internal routing protocol. "Open" in this context is to be understood in the sense of "Open Source", since OSPF is an open standard for the "Shortest Path First" algorithm. OSPF is a so-called "Link State Protocol". It is capable of processing network masks and can distribute data about the availability of connections faster than RIP. When selecting the optimal path it takes into account the speed of the connections in between,


Configuration of fred, susie and cisco

Configuration of fred:

The Ethernet:
ifconfig eth0 netmask broadcast 5 up

The serial link to susie (because we used a store-bought null modem cable, we had to do without hardware handshaking):
pppd /dev/ttyS0 57600 nocrtscts persist local lock nodefaultroute \
netmask > /dev/null &

The dummy interface:
ifconfig dummy netmask broadcast up

Switch on IP forwarding:
echo "1" > /proc/sys/net/ipv4/ip_forward

Configuration of susie:

The serial link to fred:
pppd /dev/ttyS1 57600 nocrtscts persist local lock nodefaultroute \
netmask > /dev/null &

The Ethernet:
ifconfig eth0 netmask broadcast 5 up

The dummy interface:
ifconfig dummy netmask broadcast up

Switch on IP forwarding:
echo "1" > /proc/sys/net/ipv4/ip_forward

Configuring Cisco:

interface Loopback0
 ip address
 no ip directed-broadcast
interface Ethernet0/0
 ip address
 no ip redirects
 no ip directed-broadcast
 no shutdown

and, furthermore, the size of the network can in principle be as large as you like. In order to perform this task efficiently, OSPF sub-divides the system into three classes of domain. The first class is the area: a collection of more or less arbitrary routers, networks and computers which exchange routing information with each other. The second class is the backbone, which connects all the areas together into one autonomous system. Unlike areas, there is only one backbone; the areas are numbered, and the backbone is implicitly given the number 0. The third class of domain is the stub area: a domain from which only a single router leads to the backbone. The point of this sub-division is that the tables which must be maintained to hold the routing information can be reduced in size. Not only is less memory needed, but the data packets are also processed more rapidly. In short, OSPF is more effective and more modern than RIP, but also a bit more complicated.
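The "shortest path first" in the name is Dijkstra's algorithm run over a map of link costs. A rough sketch follows (the topology and costs are invented; real OSPF floods link-state advertisements and derives costs from interface bandwidth). It shows how two fast hops can beat one slow direct link – exactly the case where RIP's hop counting goes wrong:

```python
import heapq

def shortest_path(graph, src, dst):
    # graph: {node: {neighbour: cost}}.  Plain Dijkstra over link costs.
    pq = [(0, src, [src])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neigh, c in graph[node].items():
            if neigh not in seen:
                heapq.heappush(pq, (cost + c, neigh, path + [neigh]))
    return None

# A and B share a slow direct link (cost 10) and a detour via C over
# two fast links (cost 1 each).  Hop counting would pick the slow link;
# cost-based SPF picks the detour.
net = {
    "A": {"B": 10, "C": 1},
    "B": {"A": 10, "C": 1},
    "C": {"A": 1, "B": 1},
}
cost, path = shortest_path(net, "A", "B")
print(cost, path)   # 2 ['A', 'C', 'B']
```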

Border Patrol

The "Border Gateway Protocol", BGP for short, is an example of an external protocol. In this role it generally, though not exclusively, runs at the junctions (known as peers) between autonomous systems and processes data about the way in which other autonomous systems can be reached. Since, in doing so, it lists all the autonomous systems which have to be crossed on the way to the target, it is known as a path vector protocol. BGP has various options for selecting an optimal route, which allow it to take into account criteria that are not so much technical as political, such as the cost of using a particular connection. Two BGP neighbours start off by exchanging their entire routing tables. After that they transmit only amendments and "keep alive" messages, which monitor the availability of the connection between the BGP neighbours themselves. This method allows BGP to manage routing information in a way that conserves resources. Nowadays BGP is what holds the Internet together: it runs on most of the backbone routers of the big network operators.
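The path-vector idea can be sketched as follows (a toy illustration, not the BGP-4 message format; the AS numbers and prefix are invented). Every advertisement carries the full list of autonomous systems it has crossed, and a router rejects any route whose path already contains its own AS number – this is how BGP avoids routing loops:

```python
def receive(my_as, advert):
    """advert: (target_network, as_path).  Returns the advertisement we
    would pass on to our own neighbours, or None if accepting the route
    would create a loop (our AS is already on the path)."""
    net, as_path = advert
    if my_as in as_path:
        return None                    # loop detected: discard the route
    return (net, [my_as] + as_path)    # prepend ourselves and pass it on

# AS 1 hears about a prefix originated by AS 3, via AS 2 ...
ok = receive(1, ("203.0.113.0/24", [2, 3]))
print(ok)    # ('203.0.113.0/24', [1, 2, 3])
# ... but AS 2 drops the same route when its own number is already listed.
print(receive(2, ("203.0.113.0/24", [1, 2, 3])))   # None
```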

In practice

There are programs running under Unix and/or Linux which can execute routing protocols, and even do it at no cost. The best known is routed, which comes as standard with Unix and is dedicated to RIP. Less well known, but far more powerful, is gated, which has its own web page from where it can be downloaded. Still at the development stage, but also worth mentioning, is zebra, which unlike gated is a GNU project; this too has its own web page. Because of the greater maturity of the program we will restrict our discussion to gated, and show by means of simple examples how you can configure the protocols RIPv2, OSPF and BGP to distribute routing information, and how you can replace a failed connection by a second connection without manual intervention. For this, a simple home network will serve, which in our example consists of a K6-400 running SuSE 6.2 and a 486DX-80 running Red Hat 6.0, each with its own 10BaseT network card, plus a Cisco 2610. (Thanks to my boss for the 2610, and thanks to my girlfriend for putting up with all the mess in the living room!) For cabling we used a null modem cable to link the two PCs, together with crossed twisted-pair cable for the Ethernet interfaces. Before the free-style comes the compulsory section, and this means that the kernels of the Linux

PCs must be prepared for the hardware in the form of the network cards, the routing of IP packets and the operation of a serial cable connection using PPP. The requirements are essentially the same as those for a Linux PC which is intended to connect a local network to the Internet via an analogue modem. To have a bit more room for manoeuvre in the configuration of network addresses, the "dummy interfaces" item should also be compiled. With the kernel thus prepared, it is time to move on to the installation of the gated software. Download the latest openly available source code from version 3.5: at the time of writing this was the file gated-3-5-11.tar.gz. (Source code is important because BGP is not supported by the precompiled binaries.) The code is unpacked using tar xzvf gated-3-5-11.tar.gz, at which point you will have a new directory called gated-3-5-11. Unfortunately, gated-3-5-11 doesn't have an easy ./configure; make; make install, so for once it will be appropriate to actually read the file INSTALL. The fastest way to get going is to enter the command sequence:

cd gated-3-5-11
mkdir src/obj
cp src/configs/linux-2.0 src/obj/Config
vi src/obj/Config

In this file the comment symbol before the line

protocols bgp icmp ospf rip egp

should be deleted and the line underneath commented out. After this, compile the program with a simple make; then installation can start with a make install. Unfortunately the binary gdc is written into /etc, so it would be a good idea to move it with mv /etc/gdc /usr/sbin to a place where (in my opinion) a control program for a routing daemon belongs. (Note that the version gatedpublic-3_6, which came out recently, has adopted the easy configure mechanism.)

Setup

Dummy interfaces should be set up on both Linux computers. These are logical interfaces to which IP addresses can be assigned, and they have the advantage of not failing as long as the computer shows the slightest sign of life. These dummy interfaces are given the IPs (fred), (susie) and (cisco). Between the computers, the network connections are then configured: the Ethernet of fred receives, the Ethernet of Cisco gets, the Ethernet of susie gets. The serial interface of fred is given, the serial interface of susie gets. We set the serial connections to run at 57600 baud (they can do more, but this is fast enough for our purposes).


Having completed these preparations we now have a serial link between fred and susie and an Ethernet link, which we can construct with one crossed cable either between fred and susie or between fred and Cisco. For the first two examples the Ethernet connection between fred and susie is used. The Cisco can stay switched off for the time being, which also provides some respite from its noisy power pack fan! The computers should now exchange the addresses of their dummy interfaces via the routing protocol, because they cannot find these out simply from the configuration of the Ethernet and serial interfaces. Refer to Figure 1. Setting up RIP between fred and susie is quick and simple. The files /etc/gated.conf on fred and susie are identical:


The command rip yes switches RIP on (this is the default in gated anyway). Using the interface command, RIP is bound to the Ethernet interface. Next, we specify that we want to use RIP version 2. The command authentication simple followed by a string provides a simple way for the two computers to check each other – not as a security measure, but to guard against unintentional mis-configuration by a third router. The redirect no command at the end prevents the two computers from changing the routes by means of ICMP redirects and thus getting our nice RIP all tangled up.

That's about it: via RIP, fred learns that is located on susie's dummy interface, and conversely susie learns that is on fred's dummy interface. If you try a ping on these IP addresses it runs through. It is even more impressive with OSPF between fred and susie. In this case we have two connections between fred and susie: a fast Ethernet connection and a slow serial connection. What could be more obvious than using the slow connection as an emergency backup if the fast one fails? OSPF can do that, because it also takes account of the speeds of the connections used. The files /etc/gated.conf on fred and susie can be seen in Listings 1 and 2 respectively. The command routerid defines the IP address under which the router sends its packets. If this is not specified, gated takes the IP address of the first interface it happens to find. In this instance we must take the IP of the dummy interface: if we took the address of the Ethernet interface and the Ethernet failed, the serial link could no longer step in as an emergency solution, because the packets would apparently be sent from the IP of the Ethernet adapter, which in this scenario has just failed. The rip no command switches off RIP, which gated enables by default, since we want to play with OSPF now. Our computers fred and susie are not part of the backbone, so they will form part of area 1. The whole thing should run on the interfaces eth0 and ppp0, again with a simple authentication string. At the end there is an export instruction. This is necessary because by default OSPF only passes on routes which it has learnt via OSPF. In order that it will also pass on the directly connected networks on the dummy interface, these direct routes have to be exported to OSPF.

Listing 1: /etc/gated.conf from fred

routerid;
rip no;
ospf yes {
    area 1 {
        authtype simple;
        interface eth0 ppp0 {
            authkey "OSPF";
        };
    };
};
redirect no;
export proto ospfase type 2 {
    proto direct {
        ALL;
    };
};

Listing 2: /etc/gated.conf from susie

routerid;
rip no;
ospf yes {
    area 1 {
        authtype simple;
        interface eth0 ppp0 {
            authkey "OSPF";
        };
    };
};
redirect no;
export proto ospfase type 2 {
    proto direct {
        ALL;
    };
};

rip yes {
    interface eth0 version 2 authentication simple "RIP";
};
redirect no;

Fig. 1: Simple configuration




Fig. 2: A somewhat more complex situation

Info

Merit Gated Consortium
GNU Zebra

■

Now susie and fred again learn, via OSPF, the IPs of each other's dummy interfaces. It gets exciting when we start a ping on fred. This runs through as expected. Now we simulate a connection failure by simply pulling the Ethernet cable out of the computer. At first there is no answer to the ping. After about thirty seconds another one turns up, but this time with a delay that is no longer just 1–2 milliseconds but some 50 milliseconds. fred has learnt from OSPF that the way to the dummy interface of susie is no longer via the Ethernet but via the serial cable. This is certainly slower, but it is now the best available path.

Into the big wide world

To liven things up we shall now connect the routers as follows: fred with susie via the serial cable, and fred with Cisco via the crossed Ethernet cable. This means we have three computers in a row. susie is to be autonomous system number 3, fred the one with number 1, and Cisco will be given the number 2. The whole thing looks like Figure 2.

/etc/gated.conf on fred:

autonomoussystem 1;
routerid;
rip no;
bgp yes {
    preference 50;
    group type external peeras 2 {
        peer;
    };
    group type external peeras 3 {
        peer;
    };
};
redirect no;
export proto bgp as 2 {
    proto bgp as 3 {
        all;
    };
    proto direct;
};
export proto bgp as 3 {
    proto bgp as 2 {
        all;
    };
    proto direct;
};

/etc/gated.conf on susie:

autonomoussystem 3;
routerid;
rip no;
bgp yes {
    preference 50;
    group type external peeras 1 {
        peer;
    };
};
redirect no;
export proto bgp as 1 {
    proto direct;
};

Configuration of Cisco:

router bgp 2
 redistribute connected
 neighbor remote-as 1
 no auto-summary

This is pretty similar to the previous OSPF configuration. Firstly, membership of an autonomous system is defined on each computer. routerid again defines the IP of the dummy interface as the source address from which the BGP data packets are sent. RIP is switched off and BGP is switched on with bgp yes. The preference command gives the routes learnt via BGP a somewhat higher preference than the standard one, so that the BGP routes are not overwritten (by ICMP redirects, for example). Next to be defined are the IP addresses at which the respective neighbouring autonomous systems can be reached. Since the BGP implementation of gated does not by itself pass on routes learnt from one autonomous system to another, we must force them to be passed on using export commands, as we did with the directly connected dummy interfaces. Once this is done, using ping and traceroute you will see that it is possible to reach each of the other computers from any one of them. ■



Linux Infra-red Remote Control


Video recorders, televisions, hi-fi systems and satellite receivers: just about every consumer entertainment device now comes with its very own infra-red remote control. Your Linux computer, too, can be controlled from the comfort of your armchair. To find out how, read on…

Infra-red is the widely used standard for the remote controls supplied with consumer devices. Only a few up-market products use radio and thus do not have to rely on an unobstructed line of sight to the target device. Home computers, too, can be controlled remotely with the appropriate equipment. Before we examine the subject in greater depth, let's first take a look at the fundamentals of infra-red remote controls.

In essence, the transmission process can be described as follows (it is illustrated in Fig. 1). When a key on the remote control handset is pressed, the value corresponding to this key is determined. This value, expressed as a binary number, is modulated onto a carrier signal (normally in the range 30 to 40 kHz) and leaves the handset via the infra-red transmitting diode. In practice, however, different manufacturers have pursued separate paths, for example in the choice of carrier frequency: the frequencies 32, 36 and 38 kHz are all widely used, whilst Sony uses more than 40 kHz. Different formats are used for assigning a binary value to each of the keys on the remote control. Last but not least, several different forms of coding are employed to modulate these data bits onto the carrier signal in a suitable form. For a fuller picture of what happens you can try an interactive tutorial on the Web where, taking a Sony remote control as an example, you can marvel at the signals generated.

Having arrived at the receiver end, the infra-red beam must first pass through a filter that masks out all interfering light frequencies. After that it falls on




[left] Fig. 1a: The remote control generates a code for the key that was pressed using a device-specific address and a binary code for the key.
[right] Fig. 1b: The receiver demodulates the signal, determines the original key code and initiates the corresponding action.

Types of encoding

Let's take a closer look at the transmission process. The remote control's keypad is connected to a logic circuit that continuously checks for depressed or released keys. If a change takes place, the transmission chip picks out the bit code for the key concerned and very often adds a further bit that indicates whether the key has been pressed or released. These bits might be described as the data part of the bit code. Also added to the code is the so-called address part. This is nothing more than a fixed bit sequence held in both the remote control and the base unit; it differs even between different devices from the same manufacturer, which ensures that only the intended device responds to the command from the remote control. The resultant overall bit code is usually between 14 and 32 bits in length, the lengths of the data and address parts differing from manufacturer to manufacturer.

Converted into a stream of high and low signal levels, the result is a self-synchronising serial data stream. These level states – also referred to in this context as mark (high) and space (low) – need not directly correspond to the ones and zeros of the binary bit pattern: the encoding that is used always utilises both levels for each bit. It is the encoded binary value that modulates the low-frequency carrier (in theory this is nothing more than an AND operation between the data stream and the carrier generator), which is finally emitted via the infra-red transmitting diode. Figure 2 shows some details of the three most frequently used encoding methods. The first two methods each vary the time during which the level of the data stream is high (Pulse Width Modulation) or low (Pulse Interval Modulation) to differentiate between the bit states 0 and 1. The third method, however, is the most commonly used: it represents the two bit states through different signal edge changes (bi-phase coding).

To help the receiver identify the start of a new bit code, some models of remote control also send a long pulse and a space before the encoded data itself is transmitted.
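The bi-phase method can be illustrated in a few lines of Python. Each bit occupies two half-bit slots and the direction of the mid-bit edge carries the value; which direction means 0 and which means 1 is a convention, and the one chosen here is only an assumption for the sake of the sketch:

```python
def biphase_encode(bits):
    # Each bit becomes two half-bit levels; the mid-bit edge carries the
    # value.  Convention assumed here: 0 -> high-then-low, 1 -> low-then-high.
    out = []
    for b in bits:
        out += [1, 0] if b == 0 else [0, 1]
    return out

def biphase_decode(levels):
    # Read the half-bit pairs back: (1, 0) -> 0, (0, 1) -> 1.
    return [0 if pair == (1, 0) else 1
            for pair in zip(levels[0::2], levels[1::2])]

code = [0, 1, 1, 0]
signal = biphase_encode(code)
print(signal)                          # [1, 0, 0, 1, 0, 1, 1, 0]
assert biphase_decode(signal) == code  # round-trips cleanly
```

Because every bit contains an edge, the receiver can recover the bit clock from the signal itself – which is why the text calls the stream self-synchronising.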



the infra-red receiving diode. On the output of this device the modulated low-frequency carrier signal will appear. To guarantee this, a band-pass filter is connected in series with the receiving diode, which more or less ensures that only the frequency of the carrier signal passes through. Other sources of infra-red interference, such as solar radiation, are also effectively filtered out in this way. Finally, the received bit code is decoded, so that at last the desired action – such as changing the volume level – can be carried out.


Pulse Width Modulation – zero and one are distinguished by the length of the signal's high state







Pulse Pause Modulation – the time between pulses carries the information (frequently called REC-80, used by Panasonic)

Computer control

If you want to use a remote control on your PC, there are a number of options. Various TV cards offer possibilities: many come supplied with both a remote control and a matching infra-red receiver (for example, the Hauppauge WinTV/Radio). This makes matters really simple and saves you having to build yourself an appropriate receiver. Apart from this, you can exploit the TV functionality of cards that have the Brooktree 848/878 chipset. Note that none of these TV cards come supplied with suitable Linux drivers for the remote control. There are open-source drivers for the various models, but it must be admitted that some of them need further development.

One further remark concerning the remote controls of TV cards: a short while ago Hauppauge cards were bundled with a remote control for connection to the serial port, the product in question being the Anir Multimedia Magic from Animax. The infra-red receiver of this remote control can be driven in almost the same way as the self-built serial receiver described below and can also be used under Linux.

Those with no TV card at their disposal, and those who prefer to take out the soldering iron, can always make use of the serial interface (Fig. 3) by connecting an infra-red receiver to it. You can build a suitable receiver yourself with very little effort. In this case we make use of the fact that the serial port's UART triggers an interrupt whenever the DCD signal line changes level. A special driver is needed to respond to this. As a third choice the parallel port can be used with a similar self-built receiver, although greater effort is involved. Circuit diagrams for both types of receiver can be found on the LIRC home page.

Some infra-red receiver designs that are connected to the serial interface don't require a special driver, since control is done through /dev/ttySx. The number of components required in such a receiver is only marginally greater than in the basic DCD receiver.
However, the design uses a PIC16xxx microcontroller. This is an obstacle for many would-be builders as they will not have the necessary equipment to program the device. Using a dedicated infra-red receiver isn’t the only option. Some of the IrDA ports on PCs can be coaxed into transmitting and receiving the signals







BiPhase Coded – the direction of the transition (low to high or high to low) determines 0s and 1s (also known as RC5, used by Philips)








of an infra-red remote control. For lack of the hardware I have not yet been able to acquire any personal experience of this. Further information can be found on the LIRC home page, but in the end there is no substitute for actually trying it. Another possibility is IRman. Available as a commercial product for 33 US dollars, this device is also connected to the serial port and uses a PIC as decoder. A series of drivers are available from the manufacturer for specific (Windows) programs. If you are interested in finding out how to control the device, you may take a look at the Perl code from Cajun. One option for building your own receiver is to use the SFH506 series integrated circuits from Siemens (now Infineon). These are chips containing an infra-red receiving diode, band-pass filter and demodulator (Fig. 4). Because of the integrated




Fig. 2: The three commonly used types of modulation, Pulse Width Modulation (Sony), Pulse Interval Modulation (REC-80, Panasonic) and BiPhase Coding (RC5, Philips).

Fig. 3: The serial interface also provides the infra-red receiver with the supply voltage via RTS and DTR. Pin assignments:

Pin #1  Data Carrier Detect
Pin #2  Receive Data
Pin #3  Transmitted Data
Pin #4  Data Terminal Ready
Pin #5  Signal Ground
Pin #6  Data Set Ready
Pin #7  Request to Send
Pin #8  Clear to Send
Pin #9  Ring Indicator




functionality there are a number of types: SFH506-30, SFH506-36 and SFH506-38, corresponding to the carrier frequencies 30, 36 and 38 kHz. It may be difficult to get hold of an SFH506 now, since Siemens has stopped producing them; there is, however, a similar integrated circuit with the same functional scope, the TSOP1736/38/40 [4]. Whichever one is used, SFH506 or TSOP17xx, the operation is identical: you apply a supply voltage to two of the three connector pins and obtain the serial data stream on the third pin whenever an infra-red signal with the correct carrier frequency is received. This data stream can now be passed to DCD so that, using the UART interrupts, the duration of each pulse and space can be determined. Once the durations and level states plus the encoding method are known, it is possible to deduce the bit code that was sent.
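Turning pulse and space durations into bits is then simple timing arithmetic. Here is a sketch for the pulse-width method; the 600/1200 microsecond timings and the tolerance are invented for illustration, and real remotes differ:

```python
def decode_pwm(durations, short=600, tol=200):
    """durations: alternating (pulse, space) times in microseconds.
    In pulse-width modulation only the pulse length matters:
    a short pulse is a 0, a long pulse is a 1."""
    bits = []
    for i in range(0, len(durations), 2):   # every even entry is a pulse
        pulse = durations[i]
        bits.append(0 if abs(pulse - short) <= tol else 1)
    return bits

# Hypothetical stream: 600us pulse = 0, 1200us pulse = 1, 600us spaces.
stream = [600, 600, 1200, 600, 600, 600, 1200, 600]
print(decode_pwm(stream))   # [0, 1, 0, 1]
```

The tolerance window matters in practice, because the measured durations jitter by tens of microseconds depending on interrupt latency.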

Fig. 4: Besides an optical filter (the black housing), commercially available receivers such as the SFH506 or TSOP17xx contain a band-pass filter, gain controller and the output stage.


[Fig. 5, upper labels: gmp, … – /dev/lircm (named pipe (FIFO) ==> one connection) – lircmd (lircmd.conf) – irexe, xawtv … (~/.lircrc)]

Readers who are familiar with the serial interface will already know that a supply voltage can be obtained from the interface itself, which you can use to power the integrated circuit (serial mice do this, for example). This can be achieved by connecting RTS, which provides a voltage level of 8 to 12 volts. This voltage can be reduced to the required 5 volts using resistors or, better, a voltage regulator. You need to be careful to keep the load on the interface low, even though a few milliamps are all that are needed. The serial interfaces of notebooks can cause problems, as the voltages they provide may be too low to use.

Of course, the hardware is only half the story. Software is also required to decode the received signals, and for the Linux user this is where LIRC can help. LIRC stands for Linux Infra-red Remote Control and provides a way to decode infra-red signals received from remote controls and trigger actions as a result. LIRC also provides functionality for the transmission of infra-red signals, though only at the driver level; further software for this can be found in the xrc packages on the LIRC home page. In earlier versions of LIRC only the serial type of receiver was supported; since then, kernel modules have been added for several TV cards as well as the IrDA port. In the archive lirc-0.6.0.tar.gz all these drivers can be found in the directory drivers/. Using ./configure; make; make install, the configuration and installation are carried out in the usual way. After entering ./configure a dialog-based shell script should appear; here you select the infra-red receiver type and, where necessary, adjust a few compile-time settings. The subsequent make should then generate the binaries. By invoking make install as root you can then copy the files to the usual places under /usr/local/. The entry:


alias char-major-61 lirc_driver.o

[Fig. 5, lower labels: /dev/lircm (socket ==> several connections) – lircd (lircd.conf), irrecord, mode2 – user space / kernel space boundary – /dev/lirc (character device driver ==> one connection) – LIRC device driver (including ioctl interface) – serial/parallel port – TV card – Linux serial driver]


LIRC is neatly constructed as a layer model. Only a few programs will need exclusive access to the hardware devices; a number of clients can connect simultaneously to the lircd socket.

in /etc/conf.modules ensures that the LIRC driver is linked into the kernel automatically through kerneld/kmod if necessary ("driver" here represents the actual driver name, for example "serial" for lirc_serial). The majority of infra-red receivers require their own driver in the form of a kernel module (Fig. 5). Here the driver takes over direct communication with the hardware and hands the received data to a daemon, which attempts to map these signals back to a key on the remote control. A further service program from the LIRC package allows other programs to be started, depending on the particular key that was pressed. I will go into that in detail later. Let's take a closer look at a kernel driver for LIRC, taking lirc_serial as an example. Normally this driver is loaded as a kernel module, hence the first function invoked is init_module(). There is, of course, a corresponding function that the kernel runs on removing the module, cleanup_module().


First of all the code checks whether the I/O region of the serial interface is still available at all. If so, it is initialised; the init_port() function implements this for us. In the next step, the lirc_serial module registers itself in the kernel as a character device via the call register_chrdev(). Here the lirc_fops structure contains the pointers to the individual functions that can be carried out on the device (open, close, read, write, ioctl). Once all of this has been successfully completed, the driver can start its real work. The irq_handler() plays a central role in this: it responds to interrupts triggered by a change in level on the DCD pin of the serial interface. Each time the interrupt handler is called it establishes whether it is dealing with a pulse or a space and determines the time in microseconds since the last call. It combines these two values into an integer (a pulse/space-timestamp) and places it in a ring buffer which the function lirc_read() is able to read out. By this means the data passes from the kernel into user space. The TV card drivers use a different format: they are not supplied with time differences by the hardware but instead receive the bit codes ready-decoded, which means that lircd no longer has to deal with this conversion. With an ioctl() call, lircd ascertains the particular format that the driver delivers. For more information about kernel module programming refer to the Linux Kernel Module Programming Guide. As already mentioned, LIRC can also transmit infra-red signals. This also takes place via the serial interface – more precisely, via DTR – with the aid of the lirc_serial driver. For this you have to compile it with #define LIRC_SERIAL_TRANSMITTER; as a rule, though, that is set up via ./configure. For transmission we require three things: a carrier frequency, the serial data stream that contains the encoded bit code, and an infra-red transmitting diode. There are two possibilities here.
First, you could output the modulated carrier frequency via DTR straight away and would need only to pass this signal to an infra-red transmitting diode; however, at 38 kHz it is difficult to get the timing exact. Second, you could output the serial data stream via DTR and have it modulated by hardware connected to it. The first option can be achieved by setting #define LIRC_SERIAL_SOFTCARRIER in lirc_serial (likewise settable via ./configure); the example transceiver circuits on the LIRC home page use this method. If you look more closely at the source code of lirc_serial in the LIRC package (Listing 1), you will notice that only a warning is output if the I/O region is already allocated. This takes into account the fact that many users have compiled the generic serial driver into the kernel. It makes more sense to compile the generic driver as a module too and to load lirc_serial.o before serial.o with modprobe.
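To make the driver's pulse/space-timestamp format concrete: the duration is packed into the lower bits of the integer and a pulse is flagged with a high bit (PULSE_BIT). A user-space reader could unpack such a value as sketched below; the exact bit layout shown (duration in the low 24 bits, the pulse flag just above) follows LIRC's lirc_t convention as I understand it, so check the header files of your LIRC version:

```python
PULSE_BIT = 0x01000000    # set when the sample was a pulse (mark)
PULSE_MASK = 0x00FFFFFF   # low 24 bits: duration in microseconds

def unpack(sample):
    """Split one lirc_t-style integer into (is_pulse, microseconds)."""
    return bool(sample & PULSE_BIT), sample & PULSE_MASK

# A hypothetical 900us pulse followed by a 450us space:
print(unpack(PULSE_BIT | 900))   # (True, 900)
print(unpack(450))               # (False, 450)
```

This is essentially what the mode2 tool does with the integers it reads from /dev/lirc before printing them as pulse and space lines.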


Listing 1: Some snippets of source code

        int init_module(void)
        {
                int result;

                if ((result = init_port()) < 0)
                        return result;
                if (register_chrdev(major, LIRC_DRIVER_NAME, &lirc_fops) < 0) {
                        printk(KERN_ERR LIRC_DRIVER_NAME
                               ": register_chrdev failed\n");
                        release_region(port, 8);
                        return -EIO;
                }
                return 0;
        }

        /* Sections of the IRQ handler for lirc_serial */
        void irq_handler(int i, void *blah, struct pt_regs *regs)
        {
                /* ... */
                do {
                        counter++;
                        status = sinp(UART_MSR);
                        if ((status & UART_MSR_DDCD) && sense != -1) {
                                /* get current time */
                                do_gettimeofday(&tv);
                                dcd = (status & UART_MSR_DCD) ? 1 : 0;
                                deltv = tv.tv_sec - lasttv.tv_sec;
                                /* ... */
                                data = (lirc_t) (deltv*1000000 +
                                        tv.tv_usec - lasttv.tv_usec);
                                frbwrite(dcd^sense ? data : (data|PULSE_BIT));
                                lasttv = tv;
                                wake_up_interruptible(&lirc_wait_in);
                        }
                } while (!(sinp(UART_IIR) & UART_IIR_NO_INT)); /* still pending? */
        }

2 · 2000 LINUX MAGAZINE 71

This inconspicuous while loop was not implemented in older versions of the driver, which had the effect that under unfavourable conditions the driver no longer executed its irq_handler(). This could only be overcome by removing and reloading the kernel module.

As shown in listing 2, the Hauppauge TV card driver reads the received data. Note that the format of the data is different from that supplied by lirc_serial; lircd uses ioctl() to determine the data format in question. The interface between kernel driver and user space applications is the device file /dev/lirc, which make install creates with mknod /dev/lirc c 61 0. The receiver types that are driven via /dev/ttySx are an exception here. The program mode2 from the tools/ directory of the LIRC package is a simple example of how to communicate with a LIRC driver: open /dev/lirc and read out integer values. The values read are interpreted differently depending on the type of driver: they are either pulse/space timestamps (serial drivers) or bit codes (TV cards). mode2 recognises only the first type and continuously outputs all received pulses/spaces.

The LIRC daemon lircd is employed at this point to convert the received signals into an easily usable form: it returns the names of the keys pressed. For this, though, it needs a configuration file containing the parameters of the remote control (this file is called lircd.conf, and it is expected in /usr/local/etc/). If no suitable file can be found in the

Hauppauge TV cards

Some manual intervention is necessary to coax the LIRC driver lirc_haup into working with Hauppauge TV cards (WinTV/Radio). A Red Hat 6.1 installation with kernel 2.2.15 and the package lirc-0.6.0 served as the base system for this test.

To start with, the kernel has to be compiled with support for BT848 (Character Devices->Video For Linux) and MSP3400 (Sound->Additional low level sound drivers) in order to be able to use the TV card at all. This support should be compiled as modules, otherwise the addresses of the I2C functions are not exported and as a result are not available to kernel modules; loading the kernel module lirc_haup would then fail. As the source code excerpt from read_raw_keypress shows, these functions are referenced there.

Before you compile the kernel, however, the file /usr/src/linux/include/linux/i2c.h has to be patched: in line 99 you must replace the #if 0 with #if 1. A patch file for this is included in the LIRC package under drivers/lirc_haup/patches/.

If they are not already present, you must also insert the following lines in /etc/conf.modules to allow the dynamic linking of the modules into the kernel through kmod:

        alias char-major-61 lirc_haup.o
        alias char-major-81-0 bttv.o
        pre-install bttv modprobe msp3400.o

Once the kernel is ready you can get on with the configuration of LIRC. Here you select “Hauppauge TV Card (old I2C Layer)” as the driver. From contrib/ you can copy a start script into /etc/rc.d/init.d/ and set the corresponding S- and K-links so that lircd is started each time the system boots.

So far so good. After this step we found LIRC worked wonderfully, but the TV tuner apparently no longer responded. This problem occurred only if the BT848 support (bttv.o) was installed as a module. As soon as it was compiled permanently into the kernel, the TV tuner worked faultlessly. The disadvantage: LIRC then no longer worked, due to the missing I2C symbols.
However, with a small modification in /usr/src/linux/drivers/char/i2c.c we were able to get this problem under control. Near line 430 is the beginning of the EXPORT_SYMBOL block. This is bounded by an #ifdef MODULE … #endif that also encloses the functions init_module() and cleanup_module(). If, using cut and paste, you move the line containing #ifdef MODULE to just ahead of these two functions, the I2C symbols can then be accessed from other kernel modules even when the bttv driver itself is not built as a module. One further hint: when problems are experienced with the TV card drivers it is generally worth checking out the latest CVS source tree, since more up-to-date versions of the drivers are usually to be found there.
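The rearrangement can be pictured like this. This is a sketch only: the actual EXPORT_SYMBOL block in a 2.2-era i2c.c exports more symbols than shown, and the symbol names here are illustrative.

```cpp
/* Before: the exports are compiled only when i2c itself is a module. */
#ifdef MODULE
EXPORT_SYMBOL(i2c_start);      /* illustrative symbol names */
EXPORT_SYMBOL(i2c_stop);
int  init_module(void)    { /* ... */ return 0; }
void cleanup_module(void) { /* ... */ }
#endif

/* After: #ifdef MODULE moved below the exports, so the symbols are
   available to other kernel modules even with i2c compiled in. */
EXPORT_SYMBOL(i2c_start);
EXPORT_SYMBOL(i2c_stop);
#ifdef MODULE
int  init_module(void)    { /* ... */ return 0; }
void cleanup_module(void) { /* ... */ }
#endif
```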


Listing 2: Reading out the received data

        static __u16 read_raw_keypress(struct lirc_haup_status *remote)
        {
                /* ... */
                /* Starting bus */
                i2c_start(t->bus);
                /* Resetting bus */
                i2c_sendbyte(t->bus, ir_read, 0);
                /* Read 1st byte: Toggle byte (192 or 224) */
                b1 = i2c_readbyte(t->bus, 0);
                /* Read 2nd byte: Key pressed by user */
                b2 = i2c_readbyte(t->bus, 0);
                /* Read 3rd byte: Firmware version */
                b3 = i2c_readbyte(t->bus, 1);
                /* Stopping bus */
                i2c_stop(t->bus);
                /* ... */
        }

directory remotes/, you can create one yourself with irrecord (found in the daemons/ directory of the package and normally installed below /usr/local/bin/). The only parameter you have to supply is the name of the file into which the configuration is to be saved. Since /dev/lirc normally has file mode 644 (which gives ordinary users only read access to the device), you must either start irrecord as root or change the file mode. Should irrecord complain with the message »Something went wrong« and fail to generate a configuration file, you can add the switch --force on the command line to obtain a configuration file in RAW mode; this should work in any event.

It is worth taking a closer look at the generated file. First of all, it can be seen that all the data for the remote control is encapsulated in a »begin remote … end remote« block. This allows several such blocks, to be used when analysing the received signals, to be placed together in one file without trouble. In addition, the details of the remote control's timing parameters are very informative.

Since we now have a file in /usr/local/etc/ containing the relevant timing parameters of the remote control used, there is nothing to prevent us starting lircd (daemons/, /usr/local/sbin/). With the program irw (tools/, /usr/local/bin/) you can now check whether lircd is going about its work correctly. As soon as you press a couple of keys on the remote control, something similar to listing 3 should appear on the terminal (in this case with the previously mentioned Anir Multimedia Magic and the corresponding configuration file). There are always four parameters per line: 1 - bit code, 2 - repetition counter (if the key remains pressed for an extended period), 3 - name of the key, 4 - name of the remote control.
Listing 3: Parameters of the Anir MultiMedia Magic

        00000000005ba4f0 00 CD_UP ANIMAX
        0000000000de21f0 00 RADIO_DOWN ANIMAX
        0000000000de21f0 01 RADIO_DOWN ANIMAX
        00000000005ea1f0 00 RADIO_UP ANIMAX
        0000000000dc23f0 00 TV_DOWN ANIMAX


Communication between the daemon and irw takes place via the socket /dev/lircd. This allows a number of programs to connect to lircd simultaneously. The LIRC mouse daemon lircmd is one of them: as the name suggests, with it you can provide the functions of a mouse via a remote control. It too expects a configuration file, lircmd.conf, which like that for lircd should be placed in /usr/local/etc/. A glance at the existing configuration files in the directory remotes/ should give an idea of what the file should look like. A few points on this:
• PROTOCOL IMPS/2 ensures that the IMPS/2 protocol is spoken on /dev/lircm. If this directive is omitted, the Mouse Systems protocol is used.
• ACCELERATE controls the speed with which the mouse pointer moves.
• ACTIVATE defines the key via which the mouse functionality is activated.
• The MOVE_ instructions assign the directions.
• The BUTTON_ directives assign the mouse buttons.
• ACTIVATE, MOVE_ and BUTTON_ always take two parameters: the first is the name of the remote control (»*« for all remote controls), the second the key abbreviation that lircd supplies.

Listing 4: Simple example for xmms

        # Output text each time that 1 is pressed
        begin
            prog = irexec
            button = 1
            config = echo "You've pressed 1"
        end

        # Start xmms in the background and switch into xmms mode
        begin
            prog = irexec
            button = RADIO
            config = xmms &
            mode = xmms
        end

        # All "begin ... end" blocks within "begin xmms ... end xmms"
        # are taken into consideration only if we are in mode = xmms.
        # This makes sense if you intend switching between a number of
        # keypad layouts.
        # Hint: if the mode is given the same name as the programme then
        # this is selected automatically.
        begin xmms
            begin
                prog = xmms
                button = play
                config = PLAY
            end
        end xmms


An example for the case where the Mouse Systems protocol is used: gpm -m /dev/lircm -t msc.

Once the kernel driver and daemon have processed the received raw data to the point that the key abbreviation can be obtained directly from lircd, it is time to respond to the key presses. irexec does precisely this by starting various programs. As is to be expected, the details of what is to be started by which key abbreviation are determined by a configuration file, which is expected to be in $HOME/.lircrc. A few points on this too:
• The file is split into »begin … end« blocks for each key abbreviation.
• Within these blocks »prog = irexec« must appear first of all. This allows several programs to be configured via .lircrc; an entry with »prog = xawtv«, for example, would be ignored by irexec.
• »button = KEY ABBREVIATION« names the button that is to be responded to.
• »config = program« gives whatever is to be executed.
• »repeat = 0« means no response will be made to button repetition.
• With »mode« a different »begin mode … end mode« block in the configuration file can be activated.

To simplify communication with lircd and to read in and process the file .lircrc easily, since lirc-0.6.0 there has been a library (liblirc_client), which irexec also uses. Further information on the functional scope of liblirc_client and on linking it into standalone programs can be found in doc/ in the LIRC package.

A program that also uses liblirc_client, and which ought to be of particular interest to TV card owners, is xawtv. With xawtv, TV viewing is possible both under X and on the frame-buffer device. Thanks to the liblirc_client support you can sit comfortably in the armchair and zap through the channels with the remote control. For simplicity's sake xawtv includes its own .lircrc file (contrib/dot.lircrc). The key abbreviations found in the configuration file of the Hauppauge remote control are used there.
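To make the »begin … end« block structure concrete, here is a toy parser that collects the key/value pairs of a single block into a map. It is only an illustration of the format; real applications should use liblirc_client rather than parsing .lircrc themselves.

```cpp
#include <map>
#include <sstream>
#include <string>

// Toy parser for one .lircrc-style "begin ... end" block (illustrative
// only; the real work is done by liblirc_client).
std::map<std::string, std::string> parse_lircrc_block(const std::string &text) {
    std::map<std::string, std::string> entries;
    std::istringstream in(text);
    std::string line;
    bool inside = false;
    while (std::getline(in, line)) {
        std::istringstream ls(line);
        std::string key;
        ls >> key;
        if (key == "begin") { inside = true; continue; }
        if (key == "end")   break;
        std::string eq;
        ls >> eq;                       // expect "=" between key and value
        if (!inside || eq != "=") continue;
        std::string value, word;
        while (ls >> word) {            // value may contain spaces
            if (!value.empty()) value += " ";
            value += word;
        }
        entries[key] = value;
    }
    return entries;
}
```

Feeding it the first block of Listing 4 would yield the entries prog, button and config.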
Thus anyone who has assigned other key abbreviations needs to adjust them here. The xmms plug-in will probably also be found useful, since with it xmms can be operated as conveniently as xawtv. LIRC and xmms should already be installed before the plug-in is compiled, otherwise the necessary header files will not be found. An example .lircrc file is also enclosed and can be appended to one that already exists (if need be, adapt the key abbreviations to suit your own remote control). In contrast to xawtv, with the xmms plug-in irexec must run in the background as well: »mode« is used in the configuration file, and xmms itself is started only through irexec. This can be circumvented by commenting out the irexec entry along with the two lines »begin xmms« and »end xmms«. ■

Info
LIRC home page:
Linux Kernel Module Programming Guide:
Remote Control Tutorial:
TSOP17XX data sheets: ets/optoelectronics/photomodules/TSOP17…html
A self-built universal remote control: x.html
Animax:
xawtv home page:
IRman: ■




A C/C++ development environment for UNIX


For some time now KDevelop has been the standard development environment under Linux and Unix for writing portable KDE/Qt programs in C++. This article shows that graphical IDEs can make programming much easier under Unix too – and not only when it comes to programming applications with graphical user interfaces.

KDE has become popular on a great many Linux and Unix systems, not least due to its usability and quality. The number of KDE applications that are available which aim to make the user’s life easier is rising all the time. Most KDE users don’t bother to ask how these applications come about or why KDE is so popular with programmers (if it wasn’t, there wouldn’t be this huge number of programs.) However, there are some very good reasons for it. The easy-to-learn API of the KDE and Qt libraries and the portability of KDE/Qt programs to most Unix platforms are two of the main reasons for the popularity of KDE. Qt even allows developers to compile their programs for MS operating systems. Added to this, KDE provides the relevant development tools that make this fast growth possible in the first place. One of the most important of these tools is KDevelop, an integrated development environment (IDE) which has not only established a name for itself among KDE developers but is

increasingly being used for other programming projects too. KDevelop has provided the developer with a comfortable working environment, from which all KDE users will eventually benefit.

Objectives of KDevelop

The KDevelop team, which, like many other open source projects, consists of individual international programmers who participate in the project on a completely voluntary basis, is currently financed entirely by the developers themselves. This includes the cost of websites and development hardware. Contributions are welcome, so please get in touch! Of course, each member of the team has his own, entirely personal reasons for participating in such a project. An important objective for some is to create applications that encourage a large number of ordinary users to become enthusiastic about Linux/Unix. However, for many the aim is to make life easier for developers in particular. On the one hand


there are the developers who have discovered Linux as a development platform for free projects, not least thanks to KDevelop, and who wish to give something back to the Linux community. On the other hand, there are those who believe a high quality development environment is needed in order to convince programmers from the business world that it isn’t just cool to program for Linux but that it is really simple and user-friendly too. Finally, of course, there is the purely selfish motivation that by creating such a fine tool as KDevelop the team make it easier and quicker to develop other programs themselves.

The more developers who are encouraged to develop a particular tool or start on a major project, the larger the number of available applications becomes over time. The old adage that there is no application software for Unix has finally been laid to rest. With KDE 2.0 approaching, we are currently in one of the most exciting free software development phases for the desktop environment, and everyone can be part of this by making a contribution.

KDevelop aims to provide the broadest possible support for compilers and platforms. This is actually becoming very easy as the use of autoconf/automake-compatible project frameworks and the universally available GNU tools always ensure that, firstly, KDevelop runs on most Unix systems (at least on those on which KDE also runs, i.e. Linux, BSD, SCO UnixWare, HP-UX, AIX, …) and, secondly, that the applications developed using it can be compiled and run on all these systems. With this in mind, the fact that KDE is at the top of our list of priorities should be self-explanatory.

The KDevelop environment provides complete support for the development cycle of KDE applications. From the generation of a basic KDE program through the creation of a graphical interface to the production of documentation, localisation and packaging, the IDE covers almost every need. KDevelop is now developed using KDevelop itself.
With around 80,000 lines of source code this provides a convincing demonstration of the power of KDevelop.


IDE (Integrated Development Environment)

Integrated development environments (IDEs) combine under one interface all the tools required by a programmer, such as compilers, linkers, debuggers and editors. On top of this, they make the developer’s work as easy as possible by automating repetitive stages of the development cycle. They take on tasks such as project management and the generation of makefiles, and provide help in searching for errors. IDEs allow programmers to develop their programs as rapidly as possible, reducing the “time to market” compared with conventional development tools. IDEs really come into their own when you are developing applications that have a graphical user interface. Programming a graphical interface by manual coding is time-consuming and tedious. IDEs allow the interface to be designed visually, and then automatically generate the required code.

If you compile KDevelop yourself you will ensure at the same time that you have as few problems as possible later on when you generate your own applications. Usually, all you need to do if you encounter a problem is to install missing packages such as library and include files.

What can you develop?

You can use KDevelop for any task to do with C and C++. Although KDevelop is specifically designed for KDE programming, there are a number of developers who use KDevelop for other projects. You might be interested to know, for example, that the integrated class parser doesn’t have any problems with the Linux kernel code!

Fig. 1: Selecting the project type using the KAppWizard

Installation

We will now provide a few tips on installation so that you can get started with KDevelop later on in this article. Users running SuSE 6.4 or SCO UnixWare will probably experience the fewest difficulties. These distributions already contain version 1.1, which is similar to the latest version and can be used for development immediately without any problems. If you do not have KDevelop on your distribution or would like to use the latest version, you can download the source code, and in many instances precompiled RPMs too, from the KDevelop home page. The website also contains up-to-date information on the state of KDevelop, PostScript versions of the complete documentation, the web forum and addresses of developers and mailing lists.

Development using KDevelop always begins by generating a project. To make this easy a “Wizard” is available, launched via the Project, New menu. On the first page of the dialog you select the type of framework, or program template, to be generated. Programmers have at their disposal a total of 13 different types which immediately create executable applications and can be edited in KDevelop after they have been generated. The project types are divided into several groups:
• KDE applications: This group contains all the skeleton programs available for development for both KDE 1.x and the forthcoming KDE 2.0. Possibilities include a mini-application that consists only of a main window; an SDI (Single Document Interface) framework for a standard application with menu bar, tool bar and status bar (based on a document-view model, as is common in GUI application development); and, a real gem, an MDI (Multiple Document Interface) framework for KDE 2.0, with which users can create Windows-style programs that manage several documents and their views simultaneously in one main window.

GNU tools, configure and make

In autoconf/automake-compatible software packages that are distributed as source code, a project is usually generated using the following commands:

% make -f Makefile.dist
make carries out the instructions in the file Makefile.dist, which usually invoke the GNU tools automake, autoheader and autoconf. These generate the Makefile.in files from the Makefile.am files, and the configure script from configure.in and the autoconf macros. In the case of KDE/Qt projects, the perl script automoc or am_edit inserts the calls to the Qt MOC (Meta-Object Compiler, for the C++ signal/slot extension) into the Makefile.in files.

% ./configure
This executes the configure script, which checks that the paths for the include files, compilers, linkers and libraries needed are present, so that the build of the project is guaranteed. It then generates the Makefiles from the Makefile.in files.

% make
make executes the instructions in the Makefiles generated by configure. In other words, it invokes the MOC compiler, the C/C++ compiler and the linker in order to compile the source files and link them into a program or library.

% make install
This command executes the instructions under the “install” target of the Makefiles. The effect should be that the programs, header files, libraries, documentation etc. are installed on the system. (For this to work, you should be root.)


• Qt applications: The same skeleton programs are available as for KDE, the difference being that they are based exclusively on the Qt library. The use of the pure Qt API allows developers to implement applications that can also be compiled and used under Microsoft Windows, because Qt, as a cross-platform toolkit, supports not only Unix but (in the “pro” version) the “other” operating system too. An example of this is the circuit board layout program Eagle, the next version of which is currently being developed with Qt in order to guarantee availability on all platforms as efficiently as possible.
• GNOME applications: To the amazement of the GNOME developer community and contrary to all the prophecies of doom, KDE offers its hottest competition the opportunity to profit from the advantages of an IDE too. KDevelop provides a framework that is based on the gtk+ library and can be used as a fully adequate basis for creating a GNOME application. Therefore, anyone who, for whatever reason, cannot acquire a taste for KDE as a Linux desktop does not have to forego the comfort of KDevelop if they wish to develop for GNOME.
• Terminal applications: Even if you only wish to write a command line tool you will be in good hands with KDevelop. For C and C++ there are frameworks with which minimal “Hello World” programs can be generated and then extended as you wish. These frameworks also offer beginners an easy way to take their first steps in programming under Unix.
• Other: With this, KDevelop defines a project of its own kind. For those of you who have already developed an application and would now like to edit it using KDevelop, this option provides a framework that allows you to convert to KDevelop without any major migration problems. A number of projects have already made this change quite successfully.
On the next page of the Wizard we define the project-specific options such as the program name, version, directory, author and so on, and what we want the Wizard to generate, from a complete framework down to the “blank” autoconf/automake layer. On the third page you are offered the opportunity to manage the project using the version control system CVS. However, when generating a new project this is only possible locally. Users who wish to import their source code from a CVS server can do this with the help of the Cervisia program provided by co-developer Bernd Gehrmann, and then activate CVS support in KDevelop. CVS is used as standard on free projects in which more than one developer is involved; otherwise it would not be possible to develop these projects independently at the same time and based on the same source code. However, the use of a CVS system can also be a sensible option for lone developers in certain situations. Anyone who has already experienced the distress of accidentally destroying their project knows how valuable a CVS version can be, as it can restore the program within seconds. The usual CVS functions such as add, delete, update and check are available later on in the file tree of the KDevelop project directory.

The next two Wizard dialog pages define the file headers which insert the date, author and licence information into the source text, so that you can be identified as the author of the source at any time and your rights are covered. When you get to the last page, a click on the “Generate” button is all that is needed: the rest is trivial (or magic, depending on how you view it!). As soon as the generation has been completed, you can exit the KAppWizard using the “Exit” button. The new project is then loaded automatically and is ready for editing in the source code editor. Although it won’t do very much yet, you can compile and run the program to check that everything works.

The Linux Magazine browser

KDevelop has too many features to cover them all in the space of one short article. The best approach is to demonstrate the typical procedure for creating an application using a small example. The example program is rather nice – it is a web browser: one that is limited to viewing one particular web page, without any graphics, but a web browser nevertheless. This example will introduce you to the world of KDE programming without very much prior knowledge. Our program will be developed and extended step by step, with the emphasis on how to use KDevelop while editing source code. We will also take a closer look at the concept of the Qt signal/slot mechanism, which makes GUI programming decidedly easier.

First of all, start KDevelop either via the “K” menu or by entering “kdevelop” in a terminal window. If you are starting KDevelop for the first time you will be guided through the basic configuration of the IDE by a setup assistant. You will immediately see which programs or packages still have to be installed. If you are unsure, KDevelop always offers you context-sensitive help and, in the assistant, a “help” button that takes you straight to the description of the installation procedure in the manual.

To create the example program use the KAppWizard in KDevelop. Select “KDE-Mini” as the application type and enter “LinuxMagazine” as the project name. Fill in the other fields with your own details. After the program framework has been generated, KDevelop presents you with the example program. (Note: if you see any errors during the generation stage, you will need to install the missing packages indicated by the errors, delete the partly-generated project and try again.) You can now compile the program to test whether everything is working as it should. To do

this, use the “execute” button on the tool bar, represented by a small gear-wheel like the one on the “K” menu. KDevelop then retrieves make, which executes the commands in the makefiles. These files contain instructions for controlling the compiler and linker, enabling you to generate the program correctly. Once the compile procedure is finished, the program is started automatically and you have stepped successfully into a new world of program development! (Again, if the program doesn’t compile, the reason is probably because of libraries and include files that are required and which have not been installed.)

[top] Fig. 2: KDevelop showing our example project [above] Fig. 3: The documentation browser: all the information you need at a glance.

Information is all

Now we turn our attention to the finished product. You will soon realise that not only do you need to know what your program is supposed to do, but also how to find the information you need to create



the program as quickly as possible. Therefore, information is everything! Without documentation you will not be able to write a program with KDE/Qt. But KDE would not be KDE if there were not a clever solution to this. With the help of the documentation tool KDoc, a set of HTML documentation (the API documentation) is generated from the header files. KDevelop relieves you of this task too. In the case of SuSE 6.4 and the KDK (KDE Development Kit, provided by the KDevelop team and also available from the KDevelop home page), this is fully precompiled and installed for KDE 1.1.2. What’s more, you can generate a new set of documentation (for the KDE 2.0 API, for example) at any time using KDevelop. The documentation provided by TrollTech with the Qt library is integrated automatically.

The next step is to update the KDevelop search index so that you have access to the documentation and can quickly find the descriptions of particular topics, classes and functions when you need them. You can choose between htdig and glimpse as the search engine. Commercial developers must rely on htdig, as glimpse is freely available only for non-commercial usage and is no longer shipped with most distributions.

KDevelop itself contains another five manuals in the form of online help, which should provide you with further help in almost any situation – at least as far as programming is concerned. User manuals, tutorials, programming manuals and KDE references provide you with a solid basis on which to learn about the IDE and how to use it efficiently to create more complex programs. A copy can be ordered in book form from the KDevelop home page.

Fig. 4: The KDevelop dialog editor


Tree views...

There are a few navigation tips you should learn if you want to become a proper KDevelop power user right away. In the left-hand section of KDevelop you will find tree views placed on tabs with the following captions:
• CV (ClassViewer): This is the class browser for your project’s classes and functions. You can use it to jump to a class declaration or function implementation. Using a context menu you can access additional functions such as jumping to a method declaration and adding files, classes, methods and attributes.
• LFV (Logical file tree): This is where you can sort your project’s files into folders based on their filename extension so that you can access them more quickly. Note: in the KDevelop setup you can set the option Autochange (automatically changes the tree view during programming) either to the class browser or to the logical file tree.
• RFV (Real file tree): This is where you can view the project directory as you can in the file manager. It gives you access to all the files. Using a context menu, the RFV provides you with extended functions such as delete, add to the project and the CVS commands.
• DOC (Documentation tree): The documentation view offers you access to the online help included with KDevelop, the KDE/Qt documentation and your project’s documentation. In the KDE/Qt documentation you can also browse down to the functions of the classes available and find the information you require without a lengthy search.
• VAR (Variable tree): This view is available while applications are being debugged. It indicates the status of the runtime variables you use.


...and output views

Below the tree views are the output views. These are divided into the following windows:
• Messages: In this window KDevelop shows all the output of external tools such as make, the compiler etc. If an error message is shown, a click on it takes you to the error (automatic output localisation).
• StdOut: Displays the program output during a debugging run at command line level.
• StdErr: Standard error output of the project application during debugging runs.
• Breakpoint: Displays the debugger’s breakpoints and the number of hits during a debugging session.
• Frame Stack: Displays the application’s frame stack during debugging.
• Disassemble: Assembler output of the program code.

Work view
The work view is where you find the editor window for header, resource and other files, the source code editor window, the documentation browser and the tool window in which KDevelop starts any external programs that are required (such as KIconEdit for editing toolbar icons). From the tool menu you can add other programs for use within KDevelop.

The dialog editor
You can call up KDevelop's dialog editor from the View menu or using the relevant toolbar button. With it you can design the graphical interfaces of your applications and have them output as C++ source code. You can then edit the classes produced in the source code editor with the help of the class view. Currently the dialog editor in KDevelop only supports the Qt 1.4x and KDE 1.1.2 APIs. If you only use standard components, however, you should not have any problem editing KDE 2.0 projects.

[left] Fig. 5: The Project Options dialog box [right] Fig. 6: Adding a member function

Implementing the browser
So to the creation of our example application. The Linux Magazine browser should be a simple HTML browser which, when started, loads the Linux Magazine home page and displays it. First, the derivation of the main widget must be changed from QWidget to KHTMLView, and the khtmlw and kfm libraries must be linked to the program. Replace the derivation in the declaration of class LinuxMagazine and in the relevant constructor. You must also change the include file from #include <qwidget.h> to #include <htmlview.h>. Next, open the project options via the "Project" menu and switch to the "Linker options" tab. Enable the "khtmlw" and "kfm" checkboxes. After the dialog has been closed by clicking "OK", the configure script is automatically run so that the makefiles can be regenerated. Now recompile the program from scratch.
When everything is running we can start implementing the actual functions. We implement a new function which downloads and displays an HTML page. At the same time we declare this as a "slot" so that we can continue to browse from this page by clicking on hyperlinks. The HTML widget provides a signal which we can link with this slot. The only thing to note is that the parameters of the signal must match those of the slot. Let's take a closer look at the API of the class KHTMLView (you will find it in the documentation browser in the khtmlw library). You'll see that a URLSelected() signal is available there. If the user of the program selects a link with the mouse, this signal supplies us with the URL.
To generate the new function, select the "add method" function via the class LinuxMagazine in the class browser. When the dialog appears, enter void as the return value and showURL(KHTMLView* view, const char* url, int, const char*) as the method name. The class browser adds the final semicolon automatically. A description explaining what the function does won't hurt either. Finally, select "public" and "slot" as modifiers. OK the dialog box and you will be placed in the code implementing the new function.
We must now consider how we wish to load the page. As KFM fortunately already provides this function, we can simply use it. For brevity's sake, only the general procedure is described here. KFM loads the HTML page for us into a temporary file, which we open using QFile and read into a string using QTextStream. We then execute the view functions of the HTML widget using this string and remove the temporary file. Finally, we have to execute this function with the URL of the Linux Magazine home page, which we do in the constructor. We are not bothered by the fact that the method is also a slot, as slots can be used as normal functions; the only difference is that a signal can also be used to execute them. Lastly, we insert a connect in the constructor, which links the URLSelected() signal to our slot showURL().

2 · 2000 LINUX MAGAZINE 79

Info
KDevelop homepage:
KDE homepage:
Homepage for KDE developers:
TrollTech AS (Qt library): ■

Fig. 7: Debugging the Linux Magazine browser with KDevelop


You will find the source code of our example in the listings below. We hope that this brief example has given you a first insight into KDevelop and KDE programming: a taster before your first program of your own. A small tip to finish on: if you are looking for ideas for your own applications, take a look at the source code of other programs, which can be downloaded from the KDE home page. If your program will also be released under the GPL you can, of course, re-use this source code directly: you don't need to re-invent the wheel. You can now start your first KDE program. If you like, you can install it on your system by opening a console, switching into the project directory and entering the command make followed by su -c "make install". After you have restarted the K-Panel you can access the Linux Magazine browser from the K menu.

Debugging included Another technical highlight of KDevelop is the internal debugger. This can be seen in action in figure 7. Tool bar buttons to the top right of the window let you execute the program being tested a line at a time, stepping into or over function calls and so on. These buttons can be used as a floating tool bar, giving the advantage that you don’t have to constantly switch between the IDE and the program window when debugging GUI applications. The program window remains in the foreground even if you debug step by step through the source code.


In the tree view the status of variables and the function stack can be observed in the VAR window. In the output window you can find windows for observing breakpoint hits and stack frames. The program can also be executed at machine instruction level using the disassembly window. Information about the contents of memory and processor registers can be obtained using an additional view in which the status of the libraries linked to the application can also be viewed. For advanced users, KDevelop offers the option to set breakpoints in library calls via what are known as “Pending Breakpoints” even if the libraries are not loaded yet.

KDE 2.0 and KDevelop 2.0
Let's finish with a quick look at the near future. With KDE 2.0 approaching, there is going to be some more action in the Linux/UNIX domain, and not only in terms of innovation and speed. The KDevelop team is currently developing the second version of the IDE and is, of course, making substantial use of the new technical opportunities presented by KDE 2.0. This mainly concerns the user interface, which will support an MDI interface in future. The tree and output views can also be separated from the main view and used as self-contained windows. This will particularly please those developers who use XFree 4.0 in Multi-Monitor mode, as they will be able to distribute KDevelop across all monitors. Work is also being done on the interchangeability of the editor so that vim fans can use their one true love. The fact that KDevelop 2.0 isn't ready yet shouldn't stop you from developing for KDE 2.0: KDevelop 1.2 supports it already. To make getting started easier, the tutorial supplied contains a KDE 2.0 application which you can try out straight away, allowing you to keep your finger on the pulse. We wish you great success, and hope to see your program soon on the list on the KDevelop website where all the programs created using the tool are listed. ■

Listing 1: main.cpp

    #include "linuxmagazine.h"

    int main(int argc, char *argv[])
    {
        KApplication a(argc, argv, "linuxmagazine");
        LinuxMagazine *linuxmagazine = new LinuxMagazine();
        a.setMainWidget(linuxmagazine);
        linuxmagazine->show();
        return a.exec();
    }

Listing 2: linuxmagazine.h

    #ifndef LINUXMAGAZINE_H
    #define LINUXMAGAZINE_H

    #include <kapp.h>
    #include <htmlview.h>

    class LinuxMagazine : public KHTMLView
    {
        Q_OBJECT
    public:
        /** constructor */
        LinuxMagazine(QWidget* parent=0, const char *name=0);
        /** destructor */
        ~LinuxMagazine();

    public slots:
        /** opens the url with KFM and displays it. */
        void showURL(KHTMLView* widget, const char* url, int, const char*);
    };

    #endif

Listing 3: linuxmagazine.cpp

    #include "linuxmagazine.h"
    #include <kfm.h>
    #include <qfile.h>

    LinuxMagazine::LinuxMagazine(QWidget *parent, const char *name)
        : KHTMLView(parent, name)
    {
        showURL(this, "http://www.linux-magazine.co.uk/", 1, "test");
        connect(this,
                SIGNAL(URLSelected(KHTMLView*, const char*, int, const char*)),
                SLOT(showURL(KHTMLView*, const char*, int, const char*)));
    }

    LinuxMagazine::~LinuxMagazine()
    {
    }

    /* opens the url with KFM and displays it. */
    void LinuxMagazine::showURL(KHTMLView*, const char* url, int, const char*)
    {
        QString str, text;

        KFM::download(url, str);
        QFile file(str);
        if (file.exists()) {
            file.open(IO_ReadOnly);  /* not visible in the printed listing,
                                        but QTextStream needs an open file */
            QTextStream t(&file);
            while (!t.eof()) {
                QString s = t.readLine();
                text.append(s);
            }
            begin(str);
            parse();
            write(text);
            end();
            show();
            KFM::removeTempFile(str);
        }
    }



GNOME programming


The GNU Network Object Model Environment (GNOME) is supported by a range of programs. But what could be better than writing your own? In this article we'll show you how.

GNOME is a graphical user environment for Unix systems. Its GNU General Public License status means that it is absolutely free. Perhaps its best attribute is its standard look and feel, which creates a consistent appearance and behaviour (in the case of errors, for example). Furthermore, GNOME programs are intended to interact with one another easily. To edit an HTML file, for instance, all you need do is drag and drop the file from the file manager to the editor. This is a familiar feature of Microsoft Windows. The uniform appearance of applications may seem boring, but it is one of the main reasons an operating system like Windows has become so widely accepted. GNOME's attempt to bring the advantages of standardisation to a free operating system should therefore be supported. The best way to do this is to create your own applications for GNOME. Here we show you the basic procedures.

Where to begin?
You won't have to download any major files from the Internet to begin programming. Development can begin with the GNOME version shipped with your current distribution. They all come with a complete GNOME including the developer files. However, if you really can't resist, you'll find newer packages in various formats on the GNOME website (listed below). Anyone with a distribution based on the package manager rpm can easily install the downloaded packages (don't forget to uninstall the old packages first!). Install them using the command:

rpm -U <packagename.rpm>

Normally, that should be it. Anyone who intends to compile larger projects should prepare themselves for a time-consuming orgy of downloading and compiling that will not be hassle-free. You'll get the software from the sites below, or from mirror servers. If you want an installation that just contains the basics you'll be perfectly happy with the glib, gtk+, imlib, ORBit, gnome-libs, libgtop and gnome-core packages. Installation is described in the README files of each source package. It generally consists of the well-known commands:

./configure
make
make install

Making a start
The unimaginative but customary "Hello World" is the usual first program, and Linux programmers have covered that ground many times already, so instead we'll write a small program that produces just one window. We'll then take this solitary, single window as a cry from GNOME, a call into the big wide world. This approach saves us creating an output function. Listing 1 shows the simplest (and, as we have seen, the most philosophical) program for GNOME (mini.c):


Listing 1: mini.c

    #include <gnome.h>

    int main (int argc, gchar *argv[])
    {
        GtkWidget *my_application;

        gnome_init ("gnomovision", "0.0.1", argc, argv);
        my_application = gnome_app_new ("gnomovision", "gnomovision");
        gtk_widget_show (my_application);
        gtk_main ();
        return (0);
    }
    /* end of mini.c */

As you can see, it is no longer necessary to call gtk_init or include gtk.h, as with plain gtk+. GNOME encapsulates almost all the necessary gtk calls. However, gtk_main still has to be called to start the "main loop" of the program. The small program is compiled using the following compiler call:

gcc mini.c `gnome-config --libs --cflags gnomeui` -o mini

The finished program is, in the best gtk+ style, a 200x200 pixel window without any content, whose only purpose is to be closed with the relevant button on the window.
What happens in these few lines? Firstly, the application itself is declared as a GtkWidget. Then gnome_init initialises the program and reports it to GNOME. This feature is required for session management, which ensures that after the next restart, all the programs that were open when you logged off are available again in the same place on the desktop. The two character strings represent the name and the version number of the program. argc and argv are, as usual, the arguments which are passed from the command line to the program when it is started. These can be processed with popt, the library GNOME uses as standard.
The call to gnome_app_new creates the widget, i.e. the application itself, by carrying out various initialisations and allocating memory. Again the program name and the title are passed as they are to appear in the title bar of the window. The function gtk_widget_show () shows the window on the screen. The program then goes into the main loop and waits for events.
The strange way in which the compiler is called isn't new to established gtk+ programmers. The small utility gnome-config forwards the necessary include parameters (e.g. "-I/opt/gnome/include" or "-L/opt/gnome/lib") to the compiler. This saves the user some typing and is particularly useful if you wish to convert your program to a simple configuration and installation with GNU autoconf. It is now time to close the window again.



Listing 2: first.c

    /*
     * Example program for the gtk+ library
     * from the article for Linux Magazine
     * (c) 2000 Thorsten Fischer
     *
     * first.c, compile with
     * gcc first.c `gnome-config --libs --cflags gnomeui` -o first
     */

    #include <gnome.h>

    gint create_about_box (void)
    {
        GtkWidget *aboutdialog;
        gchar *authors [] = {
            "Thorsten Fischer <>",
            "Your name <your name@your provider >",
            NULL
        };
        gchar *abouttext =
            "gnomovision: The definitive program for GNOME!\n"
            "This program is subject to the GPL. It may be used and "
            "passed on without any restrictions as long as this copyright "
            "notice remains in place. You can find further information "
            "in the file COPYING.";

        aboutdialog = gnome_about_new ("gnomovision", "0.0.2",
                                       "(c) 1999 Free Software Foundation",
                                       (gpointer) authors, abouttext,
                                       "./image.png");
        gtk_widget_show (aboutdialog);
        return 0;
    }

    gint end_program (GtkWidget *widget, gpointer data)
    {
        gtk_main_quit ();
        return 0;
    }

    static GnomeUIInfo menu_file [] = {
        GNOMEUIINFO_ITEM_STOCK ("Exit", "exit gnomovision",
                                end_program, GNOME_STOCK_MENU_EXIT),
        GNOMEUIINFO_SEPARATOR,
        GNOMEUIINFO_END
    };

    static GnomeUIInfo menu_help [] = {
        GNOMEUIINFO_ITEM_STOCK ("About gnomovision", "About gnomovision",
                                create_about_box, GNOME_STOCK_MENU_ABOUT),
        GNOMEUIINFO_END
    };

    static GnomeUIInfo menu_main [] = {
        GNOMEUIINFO_SUBTREE (N_("File"), menu_file),
        GNOMEUIINFO_SUBTREE (N_("Help"), menu_help),
        GNOMEUIINFO_END
    };

    static GnomeUIInfo menu_toolbar [] = {
        GNOMEUIINFO_ITEM_STOCK ("Exit", "exit gnomovision",
                                end_program, GNOME_STOCK_PIXMAP_EXIT),
        GNOMEUIINFO_SEPARATOR,
        GNOMEUIINFO_ITEM_STOCK ("About", "About gnomovision",
                                create_about_box, GNOME_STOCK_PIXMAP_ABOUT),
        GNOMEUIINFO_END
    };

    gint show_popup (GtkWidget *widget, GdkEvent *event)
    {
        GtkWidget *popup;

        popup = gnome_popup_menu_new (menu_help);
        gnome_popup_menu_do_popup_modal (popup, NULL, NULL, NULL, event);
        gtk_widget_destroy (popup);
        return 0;
    }

    int main (int argc, gchar *argv[])
    {
        GtkWidget *my_application;
        GtkWidget *abox;
        GtkWidget *abutton;
        gchar buf [40] = "Left click for popup menu!";

        gnome_init ("gnomovision", "0.0.2", argc, argv);
        my_application = gnome_app_new ("gnomovision", "gnomovision");
        gtk_widget_set_usize (GTK_WIDGET (my_application), 200, 100);
        gtk_signal_connect (GTK_OBJECT (my_application), "delete_event",
                            GTK_SIGNAL_FUNC (end_program), NULL);
        abox = gtk_hbox_new (FALSE, 0);
        gnome_app_set_contents (GNOME_APP (my_application), abox);
        abutton = gtk_button_new_with_label (buf);
        gtk_box_pack_start (GTK_BOX (abox), abutton, TRUE, TRUE, 0);
        gtk_signal_connect (GTK_OBJECT (abutton), "clicked",
                            GTK_SIGNAL_FUNC (show_popup), NULL);
        gnome_app_create_menus (GNOME_APP (my_application), menu_main);
        gnome_app_create_toolbar (GNOME_APP (my_application), menu_toolbar);
        gtk_widget_show (abutton);
        gtk_widget_show (my_application);
        gtk_main ();
        return 0;
    }
    /* end of first.c */



What's going on here?
Anyone who, after closing mini, views the list of current processes with ps will see that the small program still seems to be hanging around in memory. This is unsurprising, as we only closed the window with the help of the window manager. To exit the program we need to call the exit function explicitly. To handle events, GNOME (once again, and not for the last time!) uses gtk+. This toolkit uses what are known as callbacks to call functions at the events defined by the programmer. The individual widgets which make up a window are linked with the functions to be called under certain conditions via the function:

gtk_signal_connect ();

This function ensures that the loop gtk_main responds to the events. If our window had a widget such as an exit button by the name of exitbutton, the program could contain the following line after its definition:

gtk_signal_connect (GTK_OBJECT (exitbutton), "clicked",
                    GTK_SIGNAL_FUNC (gtk_main_quit), NULL);

Now each time the user clicks on the exit button the program's main loop is interrupted, allowing the user to exit the program. When these events occur it makes sense to call a function you have written yourself, showing a dialog box that offers to save changed and still-open files. The last parameter, NULL, can be filled with any pointer, which can be used to pass data to the function. Each widget can respond to standard events (being clicked on, being closed etc.). Most widgets also have their own specific events, which the user can take advantage of. The widget GtkListItem, which describes the individual entries in a list, responds to the selection of individual entries in the list, a property which would serve no purpose for a simple button.


Navigation
Naturally, a graphical user interface program cannot manage without a menu. In most cases, a toolbar with icons which provide shortcuts to the most frequently used functions is also desirable. GNOME has a simple system for creating menus. This system restricts the programmer's work to defining the names of menu entries and icons and their functions. The structure in which the menu entries are defined is called GnomeUIInfo. The following call creates a main menu containing a File and a Help menu:

GnomeUIInfo menu_main [] = {
    GNOMEUIINFO_SUBTREE (N_("File"), menu_file),
    GNOMEUIINFO_SUBTREE (N_("Help"), menu_help),
    GNOMEUIINFO_END
};

Based on the same model, the two structures menu_file and menu_help are defined, and the menu is displayed by calling the function gnome_app_create_menus. You proceed in exactly the same way with a toolbar, which you can define as follows:

GnomeUIInfo toolbar [] = {
    GNOMEUIINFO_ITEM_STOCK ("Exit", "Exit gnomesite",
                            end_application, GNOME_STOCK_PIXMAP_EXIT),
    GNOMEUIINFO_SEPARATOR,
    GNOMEUIINFO_ITEM_STOCK ("Help", "Help! Help!",
                            end_application, GNOME_STOCK_PIXMAP_HELP),
    GNOMEUIINFO_END
};

The call for GNOMEUIINFO_END, which marks the end of the menu or toolbar, is always important. GNOME provides a wealth of pixmaps (small images and icons) which should be used in a consistent way to obtain the standard look and feel. Table 1 lists some of the pixmaps used for toolbars; a fully comprehensive list would be too exhaustive.

Table 1: Pixmaps for toolbars
GNOME_STOCK_PIXMAP_NEW       New (e.g. for a new file)
GNOME_STOCK_PIXMAP_OPEN      Open (e.g. an existing file)
GNOME_STOCK_PIXMAP_SAVE      Save (e.g. a changed file)
GNOME_STOCK_PIXMAP_SAVE_AS   Save as (a new file name)
GNOME_STOCK_PIXMAP_CUT       Cut (e.g. a piece of text)
GNOME_STOCK_PIXMAP_COPY      Copy (e.g. a piece of text)

Pop-up menus are produced in a similar way. They appear at the click of a mouse and only in certain parts of the window, i.e. in particular widgets. The procedure uses the aforementioned method of signals and callback functions. We simply link the widget in which we wish to be able to call up the pop-up menu with the function which creates the menu. Menu entries can easily be added to the menu, removed and switched on or off using the relevant commands. We can also make "intelligent" or context-sensitive menus which display certain functions only in certain situations. A "Save" menu, for instance, only makes sense if some data has changed since the last save.

About us
GNOME applications also display dialog boxes to display information or allow users to set preferences. One of the better known examples is the About box, which offers information about the program and its authors. This standard box is created using a call of the function:

gnome_about_new ();

The function lists the authors, the program name, version number, copyright notice and an explanatory text, and may display a pixmap. Everything is laid out in an attractive dialog box. The exact procedure is shown in Listing 2 (first.c), which also repeats everything covered so far.

All finished
That was a lot to grasp! All the basics discussed here are packed into one example program. It is compiled in exactly the same way as the first example, except of course that you should not enter "mini.c" in this case. As you can see, I have given the gnomovision program the version number 0.0.2. This leap to the next version seems justified in view of the increased functionality. We shouldn't leave this listing as it is without some final comments. The small buffer buf contains the text to be put on the button. Also, I define a box with abox. The reason for this is that gtk+ uses what are known as "container" widgets to pack and organise other widgets. Not all widgets can function as containers, whilst others are designed exclusively as containers. At least one box is needed to pack and display other widgets. Packing (shown here with the button abutton) is executed quite easily with a call to the function gtk_box_pack_start. gnome_app_set_contents informs GNOME that the interesting part of the program is executed within the box. The only new thing is the function create_about_box (described above). In the same directory as first.c there must be an image named image.png, which improves the visuals of the dialog box. Other graphics formats may also be used for the image.
It is good practice in gtk+ or GNOME to define or create a widget first, then define its properties and link it with the relevant signals, and finally show it. When you show the widgets you should do so in the reverse of the order in which you created them: the least important buttons and so on come first and the window widget last. This prevents ugly cross-fading effects while waiting for the window to be displayed.

More documentation
Although developer documentation on the GNOME project is available, it is still extremely disorganised. However, the project organisers have now set up a special website for developers. This will hopefully help to concentrate all the documentation in one place. In this respect, gtk+ is also undergoing improvements. ■

Info
GNOME home page
GNOME Developers web site
GTK Information ■



Tuning for peak performance


So you've installed your Linux distribution from the CD. Maybe you've got your sound card working and some applications too. But the day will come when you find you're running out of disk space or the system doesn't seem to be running as fast as it used to. How do you find out what's causing the problem? And what can you do about it?

Packages: A package is a collection of programs and files which generally forms a complete application. Under Linux a package may contain binary files (which are ready to run) or source code (which must be compiled before you can install and run it).
Partitions: Partitions are areas of your hard disk. Under Windows each partition has a separate drive letter. Under Linux you must have a swap partition, which the operating system uses for memory paging, and a root partition, which holds files. Often you will have a separate small boot partition, and another partition for user files. Because of the way partitions are mounted under Linux, it all looks like one big hard disk.
Filesystems: The term "filesystem" describes the way in which data is stored on an area of disk. Under Linux, swap partitions have their own format which isn't readable in the usual way. Data partitions generally use the "ext2" filesystem but other types are available for use by those in the know. ■

The chances are that, as a new Linux user, you accepted the defaults for many of the options offered by the installation program, such as which packages to install and the way your partitions and filesystems are set up. By default, most installation programs will install a large number of packages that you probably don't need: multiple window managers, utilities, development software and so on. This gives you immediate access to a wealth of different applications which you can try out in order to establish your preferences. But those you won't ever use again are now just wasting memory and disk space. For example, once you've established your favourite desktop environment why keep the others? If you prefer KDE you can safely ditch GNOME, fvwm and the rest. How is your filestore laid out? Did you choose the easiest installation option of one large root filesystem? If so, it's even easier to run out of space. Below, we'll address these issues one at a time. Note that many of the configuration functions and commands described can be carried out with a configuration manager such as Linuxconf. However, having an understanding of the underlying commands and methods is always a good thing.

Disk Space
To find out your disk space usage type:

df -k

at a console prompt. You'll get a display similar to that in Figure 1, which shows the amount of used and free space, along with the percentage utilised, for each filesystem. If your root filesystem is getting full, you'll either need to delete things or create some more filestore. Let's look at making some room on what we've got first.

Removing Packages
To remove unwanted packages you can either use a graphical package manager like gnorpm (see Fig. 2) from your desktop environment to see what's currently installed, or use the rpm command to produce a list like this:

rpm -qa > packagelist

(Use -qai to list descriptions of each package as well.) Once you've run the command, have a look through the list using more:

more packagelist

Once you've decided which packages you want to remove, you can either use your graphical package manager to remove them or the rpm command, like this:

rpm -e packagename

Some packages have dependencies, in which case rpm will tell you so. There is a parameter to force the removal of a package even if other packages are dependent on it. It is not advisable to do this because you may make your system unstable. If you come across a dependency, see if you can delete the dependent package first, then go back and delete the original package. A graphical package manager will also tell you of any dependencies, and in some cases will offer to remove the dependent packages for you if you wish.
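Before deciding what to remove, it can help to see where the space is actually going: df reports per filesystem, while du drills down into directories. The path below is purely illustrative:

```shell
# Summarise the disk usage of each directory under /usr, in kilobytes,
# and show the ten largest (substitute any path for /usr)
du -sk /usr/* 2>/dev/null | sort -n | tail
```

Running this against /, /usr or /home quickly points out the biggest offenders before you reach for rpm -e.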

Making more Space
If you've done this and you've still not got enough space (or if there are no packages you want to delete), and you've got some spare room on your hard disk, you could create some more filestore. To do this you could use fdisk or sfdisk to create another partition and then mkfs to create a filesystem in it, like this:

mkfs -t ext2 /dev/hdxn

where x is the disk id and n is the partition number. Note that if your disk is a SCSI disk it will have a name of the format /dev/sdxn, and also that mkfs will work out the number of blocks required automatically. Once you've done that, you can mount the partition like this:

mount -t ext2 /dev/hdxn /mountpoint (e.g. /home)

Note that the mount point must already exist. If it doesn't, just use mkdir to create it. To get the new filesystem to mount automatically when the system loads, you need to edit the file /etc/fstab and add in the new partition and filesystem. To make it easy, just copy one of the other /dev/hdxn lines and change the device name and mount point appropriately.
Once you've got some extra space you could make some room in your root filesystem by moving some things from it to your new filesystem. A good candidate is all the online documentation held in /usr/doc. The following commands would move all these files to your new /home filesystem and create a link in the original location:

cd /usr
mv doc /home
ln -s /home/doc doc

This will free up a lot of space but still allow programs that expect the documentation to be in /usr/doc to find it.

[top] Fig. 1: Finding out your disk space usage
[middle] Fig. 2: Finding out what packages are installed
[left] Fig. 3: Using top to view your resource hogs
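Copying an existing /etc/fstab line, as suggested above, gives you something like the following entry; the device name /dev/hdb1 and the mount point /home here are purely illustrative:

```
# device     mount point   type   options    dump  fsck-order
/dev/hdb1    /home         ext2   defaults   1     2
```

The last two fields control dump backups and the order in which fsck checks the filesystem at boot time; 2 is usual for non-root filesystems.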

Compression
Another possible alternative is the use of compression. You could easily use gzip to compress documentation, or gzexe to compress executables, which will then decompress themselves automatically on loading. (There will be a performance penalty if you do this, of course.) There is also an experimental system called DouBle being developed which will compress all files in a filesystem transparently (similar to Stacker or DriveSpace under Windows).
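A quick sketch of the gzip approach (the file name is made up for the example; gzexe works analogously on executables):

```shell
# Compress a rarely-read file, peek at it, then restore it
echo "some rarely-read documentation" > /tmp/example.txt
gzip -9 /tmp/example.txt       # replaces it with /tmp/example.txt.gz
zcat /tmp/example.txt.gz       # read it without decompressing on disk
gunzip /tmp/example.txt.gz     # restore the original file
```

zcat (and its cousins zless and zgrep) means compressed documentation remains readable in place, so the only real cost is a little CPU time when you consult it.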

Temporary Files
Finally on the subject of disk space we come to the temporary work files created in /tmp or /var/tmp by many programs, some of which don't clean up after themselves. These should be cleared on a regular basis using a find command such as:

find /tmp -daystart -atime +30 -type f -exec rm {} \;

This will delete all regular files in /tmp last accessed more than 30 days ago. To have this done regularly you could place the commands in a script in a directory such as /usr/local/bin and then change root's crontab file to execute them automatically using the command:

crontab -e

This will allow you to edit any existing crontab using the vi editor. Just add a line such as:

0 1 * * * /usr/local/bin/myhousekeepingscript

This will run your script at 01:00 every morning.

Dependencies: If a package requires other packages to be installed on a system before it will work, it is said to be dependent on these other packages.
Mount point: This is a directory to which the drive or partition will be mounted and via which its contents are accessed. ■

[top left] Monitoring performance using KDE Task Manager
[top right] Using ksysv to manage services and startup scripts
[above] Using linuxconf to do the same job
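A minimal version of the housekeeping script named in the crontab example might look like this; the directory list and the 30-day threshold are just one possible policy:

```shell
#!/bin/sh
# myhousekeepingscript - delete regular files not accessed for more
# than 30 days from the usual temporary directories
cleanup_tmp() {
    for dir in "$@"; do
        # skip directories that don't exist on this system
        [ -d "$dir" ] || continue
        find "$dir" -daystart -atime +30 -type f -exec rm -f {} \; 2>/dev/null
    done
    return 0
}

cleanup_tmp /tmp /var/tmp
```

Installed as /usr/local/bin/myhousekeepingscript and made executable with chmod +x, it matches the crontab entry shown above.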

Memory and Processor Usage If you run the command: ps aux you will see a surprisingly long list of processes that are being run by your system (even if it isn’t doing anything at the moment.) All of these are

Optimising Memory Using a command like this: tail -200 /var/log/messages | grep Memory: If you do, you should see something like: Aug 30 10:48:00 hostname kernel: Memory: 46868k/49152k available (1032k U kernel code, 412k reserved, 792k data, 48k init) The first figure following ‘Memory:’ is approximately the figure you will see in the monitoring programs and is the amount left after the Linux kernel has reserved some for itself (as shown later in the message.) The figure after the ”/” should match your total RAM (e.g.: 48Mb * 1024 = 49152Kb.) If it doesn’t, you can force Linux to use all your system’s memory by getting the boot manager lilo to pass the kernel an additional parameter by means of the ‘append’ command in the file /etc/lilo.conf (see Figure 8.) The value 128M would be used if your system has 128Mb of RAM. The reason you may have to do this is that the default upper limit for most kernels is 64Mb. Supplying this parameter overrides the default value. Note that you can either supply it as a global parameter by specifying it as in Figure 8 or on a kernel by kernel basis by placing it after an ”image=” entry. (Note: Don’t forget to update lilo afterwards by running /sbin/lilo.)


using system resources, especially memory. Some of them may be using CPU (processor) power too, depending upon what they’re doing. However, most of them are just daemons waiting for requests to do something useful. These usually use minimal CPU power. A useful utility is top (see Figure 3) which lists the current top resource hungry processes. It’s part of the procps package which contains a number of other useful programs such as free, vmstat, uptime, w and watch. Make sure you install it if you want to look at your system’s performance from the command line. Top reveals a lot of useful information about your system. In Figure 3 you can see that there are 62 processes but only two are running, one of which is X (the X window system) and the other top itself. In fact this server also had 8 NFS daemons running even though it wasn’t actually using NFS!

There are a number of other programs which give similar information. KDE Process Manager (kpm) and KDE Task Manager (which uses ktop) provide not only the process level information that top itself provides, but also graphical representations of different types of resource usage such as memory usage broken down into program, buffer, cache and swap, and CPU usage broken down into user, nice and system (see figure 4.) It can also display the processes in a tree structure showing which processes spawned (started) which others. (The program pstree gives a similar display from the command line.) The GNOME System Monitor (using gtop) has similar facilities. An important thing to note is the memory and swap usage figures at the bottom of the display. The system shown has 48Mb of RAM and from the numbers in Figure 4 we can see that Linux is using most of it and that the used and free figures added together are close to the total RAM. If they’re not, you’ll need to investigate further and maybe take action (see the box ”Optimising Memory”.)


Swap Space

As far as the swap usage figures are concerned, the ones on this system look very healthy. You'll see that out of approximately 66Mb there is around 57Mb free. If the free figure were getting low, it would be advisable to increase the amount of swap space. You could do this by creating another swap partition using fdisk or sfdisk, then calling mkswap to create a swap area in it and then swapon to bring it into use. The commands would look like this:

mkswap /dev/hdxn
swapon /dev/hdxn

(where x is the disk ID and n is the partition number.) A call of:

swapon -s


Fragmentation

The ext2 filesystem attempts to keep fragmentation to a minimum by keeping all the blocks in a file close together, even if they can't be stored in consecutive sectors. Ext2 effectively always allocates the free block that is nearest to the other blocks in a file, and it is for this reason that it is seldom necessary to worry about fragmentation. If, however, after a while you suspect your filesystems do need defragmenting there are two ways to go about it. First, you can copy the contents of a filesystem somewhere else (either to another filesystem on disk or to tape), empty it and then copy everything back. The second method is to use a defragmentation program such as defrag. The archive is currently called defrag-0.70.tar.gz, although the version number may change over time. WARNING: You must always work on UNMOUNTED filesystems when defragmenting, never on one which is in use. If you need to defrag your root filesystem you'll need to boot up from an emergency boot disk.

should then list all the swap areas being used (including, hopefully, the new one.) To ensure the new swap area gets used automatically when the system is reloaded, add it into the file /etc/fstab, copying the syntax of the swap area that's there already. During reloads the system will call swapon -a, which brings into use all the swap areas listed in /etc/fstab.
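The new /etc/fstab entry might look like this (a sketch; /dev/hda7 is a made-up partition standing in for your real swap device):

```
/dev/hda7   swap   swap   defaults   0 0
```

With a line like this in place, swapon -a at boot time will pick the new area up automatically.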

Typical contents of /etc/rc.d/rc5.d showing the startup and closedown scripts

Disabling Services

If you've removed all your unwanted packages but still have a load of services running that you don't use very much, you can disable them so that they won't start up automatically when you reload the system. This can be done very easily with a program like ksysv (see Figure 5) or linuxconf (see Figure 6.) If you haven't got either of those programs on your system, or if you simply prefer to do it manually, you can also do it from the command line quite easily. Just cd into the appropriate rcn.d directory (usually /etc/rc.d/rc5.d for a system running X) and use the ls command to list the contents (see Figure 7.) Disabling a service is simply a matter of renaming its startup script, usually by changing the first letter from a capital S to a small s. For example, to disable NFS on the system shown use:

A typical lilo.conf showing the append parameter


Enabling a service is simply a matter of reversing this process. Note that these files are just symbolic links to the real scripts in /etc/rc.d/init.d. The files starting with K are shutdown scripts and are used to kill off services upon system shutdown.
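The S-to-s rename can be tried out harmlessly on a scratch directory first (a sketch; the real links live in /etc/rc.d/rc5.d and changing them requires root):

```shell
# Demonstrate the rename trick on a throwaway directory rather than
# the real runlevel directory.
dir=$(mktemp -d)
touch "$dir/S60nfs"              # stand-in for the real NFS startup link
mv "$dir/S60nfs" "$dir/s60nfs"   # lower-case s: init will now skip it
ls "$dir"                        # shows only s60nfs
rm -r "$dir"
```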

Filestore Performance

Anyone who has used Windows for some time knows how system performance gradually deteriorates as applications are installed and deleted. You'll know that the solution to this is to defragment the hard disk at regular intervals. With Linux this is much less of a problem, especially with ext2 type filesystems (see the box "Fragmentation".)

One way in which you can relatively easily improve your filestore performance is to put your swap space on a separate disk from your normal filesystems. If you've got more than one swap area, place them on different disks. Similarly, if you've got a heavily used database, move it to a separate disk. Consider getting a second or third disk for these reasons alone: it can make a big difference.

As we’ve seen, there are many ways of improving or optimising your Linux system, whether it be a single user PC or a large server with many users on it. As we mentioned at the start, many of the tasks can be simplified by using a configuration manager such as Linuxconf. So there’s no reason to settle for less than tip-top performance from your system. ■

mv S60nfs s60nfs


Daemons: Daemons are programs that run in the background and perform various tasks when required. Typically they are server programs (e.g. a web server or file server.)

Nice: a "nice" process is one that runs at a different priority level from normal; the nice command starts a program at an altered priority. ■




Dr. Linux


Your Linux operating system may sometimes be in less than perfect health. Dr. Linux monitors the case load, makes out prescriptions and dispenses expert advice.

Stress busting

Question: When I power up my computer the following message appears repeatedly:

/dev/Drive has reached maximal mount count, check forced.

Can I influence the frequency with which my file system is checked?

Partitions and file systems: Before data can be stored on a hard disk, one or more logical areas or partitions must be defined. Their locations are recorded in a partition table. Within each partition a file system suitable for the operating system used must be set up. This enables the operating system to manage files and folders. The most widespread Linux file system is the "Second Extended File System", ext2fs. A device name is allocated to each partition, enabling the operating system to access the data in its file system. Linux uses letters and numbers to describe hard disks and partitions. The letters indicate which hard disk is used. For IDE hard disks the following applies:
• hda: the master hard disk on the primary controller,
• hdb: the slave hard disk on the primary controller.
In the case of SCSI devices the device names are as follows:
• sda: first SCSI disk,
• sdb: second SCSI disk.
Each hard disk can have a maximum of four primary partitions, which would be named, for example, hdax, where x is the partition number. One partition can be used as an extended partition, which may itself be divided into several logical partitions. Values of x from 1 to 4 are only used for primary partitions; logical partitions are always numbered 5 and higher. ■

Dr. Linux: The checking of a Linux ext2 file system is normally performed every twentieth boot-up. The interval can be changed using /sbin/tune2fs. Before you can use this command line tool you must know which partition contains the file system in question. If the partition is one that is automatically mounted by Linux at start-up you can find this out by typing the command mount without further options. If you only have one Linux partition, the only "genuine" file system is that attached to the root directory "/" in the list:

user$ mount
/dev/hda6 on / type ext2 (rw)
proc on /proc type proc (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=0620)

The file /etc/fstab (the "file system table") lists which partition is automatically mounted where. Here the root directory "/" is listed with its file system:

/dev/hda5   swap     swap      defaults             0 0
/dev/hda6   /        ext2      defaults             1 1
/dev/hdc    /cdrom   iso9660   ro,noauto,user,exec  0 0

In this example the root file system is quite clearly located at /dev/hda6. The command to change the check interval could therefore look as follows:

root# tune2fs -c 5 /dev/hda6

The option -c sets the maximum number of mounts that may occur between two file system checks. The value 5 sets the number of boot processes that may occur until the next test; in this example the system would in future be checked on every fifth boot-up. When you run this command tune2fs confirms its execution like this:

tune2fs 1.17, 26-Oct-1999 for EXT2 FS 0.5b, 95/08/09
Setting maximal mount count to 5

If your system was not properly shut down, an automatic check of the file system occurs as if you have


set the system check interval to 0. In addition to the maximum number of mounts, which is set using tune2fs, the valid bit is stored in the superblock of an ext2 file system. When the system is booted up the file system is checked by running the program e2fsck. If the valid bit is set to 0, e2fsck tries to save what data can be saved by performing an automatic repair of the file system. Before you completely suspend the checks of your system using tune2fs, you should be aware of what fsck and e2fsck do:
• Initially, the information in the superblock is compared with the current state of the system.
• A check is made as to whether every inode entry is valid and can be allocated to a folder entry.
• Then a check is made on whether the pertinent data blocks exist for all files and are unambiguous.
• The link number in all folders is compared with the internal link counter in the inode.
• Finally, a check is made that the total number of blocks is equal to the number of occupied plus free blocks.
You may find all of this terribly theoretical and boring: if so, the best thing would be to just allow Linux to continue to check the file system regularly. Another reason for leaving things as they are is that you cannot just decide to check the file system whenever you like: if a partition is writably mounted, no file system check may take place. Your system therefore checks the root directory while it is still mounted read-only, which you can easily observe on system start-up:

[...]
Partition check:
hda: hda1 hda3 < hda5 hda6 >
VFS: Mounted root (ext2 file system) read only.
Freeing unused kernel memory: 60k freed
INIT: version 2.76 booting
Running /sbin/init.d/boot
Mounting /proc device done
Activating swap-devices in /etc/fstab...
Adding Swap: 128484k swap-space (priority -1) done
Checking file systems...
Parallelizing fsck version 1.17 (26-Oct-1999)
/dev/hda6: clean, 139696/1038336 files, 2225322/4152771 blocks done
Mounting local file systems...
[...]

Only after this has been done are the partitions fully (i.e. both readably and writably) mounted in the system. Note how much more sensible this is than the situation under Microsoft Windows 9x, which cannot prevent writes to disk from occurring whilst a file system check (ScanDisk) is run, and which must therefore restart the entire check each time a disk write occurs. Dr. Linux recommends that you should only try to check a file system "manually" after the man pages for tune2fs and e2fsck have been read and fully understood.


Lost Property Bureau

Question: On my system I have discovered the lost+found folder. But there is nothing in it. What is it for?

Dr. Linux: A folder called lost+found exists on all ext2 file systems. Files which have "failed" in a file system check are moved here. This is to give you the opportunity to try to save something of corrupted files. For example, the folder entry of a file may be missing but its content may still be present. Such faults in the file system are caused when the system has not been shut down correctly, as in the case of a power failure or accidentally pressing the reset button. Because information may be cached and only written back to disk when the system is shut down, your Linux system should never be shut down or restarted except by means of the shutdown command.

Nocturnal activities

Question: I'm finding that at regular intervals my hard disk suddenly bursts into activity. Using top I have ascertained which processes are running. In particular I have found that find and mandb run. What irritates me is that root is obviously carrying out automatic actions while I am not logged in as root at all. It also bothers me that in the middle of the night the hard disk whirrs on account of a find process (which is logical, as something is being searched for). My question is: how can I change or cancel these automatic processes? Do they serve a useful purpose?

Dr. Linux: Such automatic processes are executed by the program crond (often also called just cron). The cron daemon is a program which is started when the system is booted by the cron(d) – sometimes also cron.init – initialisation script in one of the folders /etc/rc.d/init.d, /etc/init.d or /sbin/init.d. If you would like to know the precise location of your daemon enter the command:

Superblock: When the computer is switched on, the first block of a partition or diskette is read and evaluated. This block may contain a program for loading an operating system, a so-called boot loader. For this reason this block is termed the boot block. The real file system begins with the second block; this is the so-called superblock.

Valid bit: In the superblock of an ext2 partition, the valid bit is cleared (set to 0) before the partition is mounted. When the system is shut down, any file data still cached in main memory is written back to disk and the partition is unmounted from the system; the valid bit is then set to 1 again. The next time the system is booted up, its value is checked. If it is still 0, this means that the system was not properly shut down, and a check and repair is initiated by running the program e2fsck.

Inode: An inode is a data structure in which information such as the size, properties, group and access rights is kept for every file, directory and link in the system. In addition, each inode contains pointers (or "references") to the data blocks which belong to the file, folder or link. ■

root# locate cron

The program cron(d) runs – like all daemons – in the background and checks certain files in the system to see if tasks are stored there which are to be executed at set times. Look in your folder /etc. There you will find the file /etc/crontab (also called the system crontab) and usually the following folders:
• cron.daily: Here the executable scripts for daily tasks are stored.
• cron.hourly: In this folder lie scripts for tasks which are to be processed hourly.
• cron.weekly: Scripts for tasks to be executed weekly are stored here.
• cron.monthly: Store for scripts which contain tasks to be executed monthly.
The tasks or cron jobs here will be carried out at the times indicated in /etc/crontab. The /etc/crontab of



an SuSE Linux system differs quite considerably even at first glance from that of a Red Hat Linux distribution. For example, the /etc/crontab of SuSE 6.3 looks, in extract, as follows:

# check scripts in cron.hourly, cron.daily, cron.weekly and cron.monthly
#
-*/15 * * * *   root  test -x /usr/lib/cron/run-crons && /usr/lib/cron/run-crons
0 0 * * *       root  rm -f /var/cron/lastrun/cron.daily
0 0 * * 6       root  rm -f /var/cron/lastrun/cron.weekly
0 0 1 * *       root  rm -f /var/cron/lastrun/cron.monthly

…whereas the /etc/crontab of Red Hat has the following entries:

# run-parts
01 * * * * root run-parts /etc/cron.hourly
02 4 * * * root run-parts /etc/cron.daily
22 4 * * 0 root run-parts /etc/cron.weekly
42 4 1 * * root run-parts /etc/cron.monthly

Figs. 2a and b: The out for cron?

Let's look at the format of an /etc/crontab with all its puzzling asterisks and figures.

Fig. 1: Set up cron jobs under root using linuxconf

Each line is divided into three areas: when, who and what. In the "when" area the time indications are separated by blanks, in the sequence minute(s), hour(s), day, month, weekday. An * stands for every possible value in a field. The following are permitted:
• the minutes 0–59, as well as *. An entry like 0-59/15 * * * * causes the subsequently set command to be executed every 15 minutes.
• the hours 0–23 and *. Here too you can indicate ranges: 0 13-18 * * * results in the following command being executed on each full hour between 13 and 18 hours. 0 10,12,16 * * *, on the other hand, executes the appropriate cron job on the full hour at 10, 12 and 16 hours.
• the days of the month 1–31 and *. 15 12 6,20 * * stands for the 6th and 20th of each month at a quarter past 12.
• the months 1–12 and *, as well as
• the days of the week 0–7 and *, it being possible to identify Sunday either by 0 or by 7.
If you have been paying attention you will have noticed that the "-" at the start of the example SuSE crontab was not explained. If you use a different distribution perhaps you have already searched unsuccessfully for an explanation of it in the man pages. No wonder, for on most Unix and Linux systems this is an invalid entry. Information on its purpose can, however, be found in the SuSE man page for crontab(5). The "who" area of the file is separated by blanks from the "when" area, and contains the name of the user as whom the cron job will be carried out. After more blanks the "what" area begins. If, for example, you wish to download your news to a local news server using the program fetchnews you would enter here the full path to the command, for example /usr/sbin/fetchnews. A completed entry in /etc/crontab which causes the user news to fetch news articles Mondays to Fridays at 20.00 hours would look something like:

0 20 * * 1-5  news  /usr/sbin/fetchnews -vv
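The five time fields, the user and the command can be picked apart with the shell itself (an illustrative sketch; set -f stops the asterisks being expanded as filename wildcards):

```shell
set -f    # disable globbing so the * fields come through literally
line='0 20 * * 1-5 news /usr/sbin/fetchnews -vv'
set -- $line                     # word-split the entry into fields
echo "when: $1 $2 $3 $4 $5"      # minute hour day month weekday
echo "who:  $6"
shift 6
echo "what: $*"                  # everything left is the command
```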

Note that the user specified must have the rights to carry out the command assigned to it. If no network connection can be established at the scheduled time, cron sends you a mail:


Date: Sun, 2 Jul 2000 11:00:01 +0200
From: Cron Daemon <root@max>
To: root@max
Subject: Cron <news@max> /usr/sbin/fetchnews -vv

1.9.4: verbosity level is 2
Trying to connect to ... failed.

But a successful message is also delivered:

Date: Sun, 2 Jul 2000 13:00:03 +0200
From: Cron Daemon <root@max>
To: root@max
Subject: Cron <news@max> /usr/sbin/fetchnews -vv

1.9.4: verbosity level is 2
Trying to connect to ... connected.
Getting new newsgroups from
Read server info from /var/spool/news/leaf.node/
comp.os.unix.linux.newusers: considering articles 128807-128875
comp.os.unix.linux.newusers: 69 articles fetched, 0 killed
[...]
Disconnected from

In each case you can read the executable scripts in the folder /etc/cron.daily. These are the automatic processes that make your hard disk whirr daily. On most systems only root has the right to read these files. By entering:

user$ top

you can follow on a command line what processes are started and finished while the cron jobs are running. Depending on your system and the way it has been configured you may observe some of the following actions:
• mandb renews the manpage database.
• The folder /tmp is emptied.
• updatedb updates the database which the search command locate accesses. locate works similarly to find, only more quickly, because it accesses a database and does not examine the system


directly. In order to obtain a correct result from search queries, this database must be up to date.
• The log files are monitored so that they do not grow endlessly.
Depending on the system and its security settings, the cron jobs (pre)configured for you are executed by the user root or nobody.
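The run-crons/run-parts mechanism that drives these folders boils down to running every executable file found in them. A simplified sketch (not the real script, and using a temporary folder instead of /etc/cron.daily):

```shell
# Run every executable file in a directory, roughly as run-parts does.
dir=$(mktemp -d)
printf '#!/bin/sh\necho job1 ran\n' > "$dir/job1"
chmod +x "$dir/job1"
for f in "$dir"/*; do
    [ -x "$f" ] && "$f"      # skip anything not executable
done
rm -r "$dir"
```

Running this prints "job1 ran", just as a script dropped into cron.daily would be picked up on the next daily pass.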

Fig. 3: Finding the settings for cron is not quite so simple.
Fig. 4: You configure the scripts in cron.daily with Yast like this

Time savers

Question: Is there no simpler way of editing the system crontab?

Dr. Linux: If you use linuxconf while running as root you can select System->Cron jobs and use this to enter new tasks. Under System->Services you can also select crond from the list and completely deactivate the cron daemon. (If you run the SuSE distribution you can also configure cron using Yast.) Deactivating the daemon is advisable if you are running Linux on a laptop in battery power mode, or when burning CDs. Figure 4 shows where you can change the settings under SuSE Linux using Yast. After Administration of the System choose the entry Change configuration file (Figure 3). There you determine what tasks the scripts in the directory /etc/cron.daily should process:
• CRON: Here you specify whether the cron daemon is activated or not.
• MAX_DAYS_FOR_LOG_FILES: How long do you want to archive the automatically compressed log files before they are deleted?
• MAX_DAYS_IN_TMP: Insert here how long unused files may remain in /tmp.
• MAX_DAYS_FOR_CORE: How long may core files occupy space on the hard disk before they are deleted?
• RPMDB_BACKUP_DIR: Tell the system here where the backups of the RPM database should be stored, or whether there should be backups at all. With MAX_RPMDB_BACKUP you specify the maximum number of them.
• RUN_UPDATEDB: If you would like to renew the database for locate daily, enter yes here.
• REINIT_MANDB: A yes at this point causes the manpage database to be updated daily. ■

Core files: If a process terminates unexpectedly a core dump is produced. This is intended to help programmers reproduce the fatal moment when faults are experienced. For non-programmers these files only use up storage space unnecessarily. RPM database: When installing, updating and uninstalling packages distributions like SuSE, Red Hat, Caldera or Mandrake update a database that contains detailed information about the software installed on the system. ■




Setting up kppp


kppp configuration

Step 1

Getting connected to the Internet using your modem is easy under Linux. The graphical application kppp lets you set up connections to multiple ISPs and pick any one of them when you want to go online. This step-by-step guide shows you how to do it.

Under Linux, dial-up Internet connections are usually made using a chat script to dial the ISP and pppd to run the connection. Setting up these command line utilities can be quite tricky, not to say frustrating, particularly given that you can never be quite sure whether a failure to connect is due to a fault in your configuration or a temporary problem at the ISP. The graphical utility kppp does the same job, but it does it without the need to go anywhere near a console window. If you've set up dial-up networking under Windows you'll find kppp surprisingly similar, and just as easy to set up. If you're not running one of the latest Linux distributions the first thing you should do is upgrade kppp to the latest version. There were some bugs in kppp version 1.6.2 that make achieving a connection impossible with many ISPs, so it isn't worth trying to use an old version. You should also make sure that you have a modem that works with Linux. If you have an external modem that connects to the serial port you don't have a problem. An internal modem that plugs into an ISA bus slot is probably OK too. If you have a PCI modem, however, particularly one that was included in the price of the computer, it may be a software modem, also known as a "Winmodem", which requires special drivers that are usually only available for the Windows operating system. For more information on this type of modem and whether it can be made to work under Linux, visit the Linux Winmodem Support site listed in the Info box.


1. Start kppp from your application menu (you’ll probably find it under “Networking”) or by typing the command “kppp” in a console window. When the program window appears, click “Setup”.

Step 2

2. The “kppp Configuration” dialog box should appear. Select the “About” tab to check the version number of your copy of kppp. Now go back to the “Accounts” tab. You need to create an account for each ISP you wish to use. To create a new account click “New…”.


Step 3

3. The “New Account” dialog box will appear, with the “Dial” tab showing. Enter a name by which this ISP connection will be known, enter the phone number to dial, and select the authentication method. This will usually be either “PAP” or “CHAP”. Try to find out from your ISP which one to use. Check “Store password” if you don’t want to be asked for your password each time you connect to your ISP. The three “Execute program…” options allow you to tell kppp to run a program when you get connected, or just before or after you disconnect. This is really a “power user” feature. The “Edit pppd arguments” button lets you specify special options to pppd, the PPP daemon, which might be needed to get kppp to work with some ISPs.

Step 4

4. Select the "IP" tab. Unless you are using one of the (very rare) ISPs that allocates you a fixed IP address, you should select the "Dynamic IP Address" radio button.

Step 5

5. Select the "DNS" tab. Enter a domain name. This will typically be whatever follows the "@" in your email address. You should now add the IP addresses of your ISP's primary and backup domain name servers (DNS). You may not know this information, as Windows allows the addresses to be allocated automatically. Try to find out the addresses from your ISP. If you can't find out, use the addresses "" and "". This is a free public DNS: for more information see the Granite Canyon entry in the Info box. You should also ensure that "Disable existing DNS Servers" is checked, although if this is a standalone PC you probably don't have any existing DNS servers.

Step 6

6. Select the "Gateway" tab. The "Default Gateway" radio button should be checked. You should also check the box against "Assign the Default Route to this Gateway."

Info
Linux Winmodem Support
The kppp Configuration Archive
Granite Canyon Public DNS ■

PPP daemon: This is the program pppd which runs in the background and routes Internet traffic to your ISP via your modem using the Point-to-Point Protocol (PPP). It has many settings, which can be defined using the file /etc/ppp/options. However, using kppp you can specify extra options that apply to a specific connection. In-depth knowledge is needed in order to understand the effect of these options, so only change them if you are advised to do so by someone who knows the requirements of your ISP. ■



Step 7

7. Select the "Login Script" tab. If your ISP uses PAP or CHAP authentication then the lower part of the tab should be blank. If your ISP uses a manual authentication procedure this can be automated using a script, which can be created by selecting script commands from the top part of the tab and clicking the "Add" button. Because so few ISPs still use this outmoded method of authentication we won't go into it here.

Step 8

8. The "Accounting" tab lets you enable the accounting feature, which creates a record of time spent online and can provide an estimate of your online costs. It plays no part in obtaining a working connection and can safely be left disabled. Having checked this, click "OK" to complete the setting-up of this account.

Step 9

9. If this is the first time you have used kppp, certain generic settings, such as those for the modem, need to be set up. This only needs to be done once. Select the "Device" tab on the "kppp Configuration" dialog box. Select the modem device from the drop-down list. If your distribution's setup program detected your modem when you installed Linux it should have created a device called /dev/modem, which you should use. If not, use /dev/ttyS0 for serial port 0 (known as COM 1 under Windows), /dev/ttyS1 for serial port 1 (COM 2) and so on. Flow control should be CRTSCTS, Line Termination CR and Connection Speed 115200. "Use Lock File" should be checked: this will prevent something else from trying to use the modem at the same time as kppp.

Step 10

10. Select the "Modem" tab. Here you can set the length of time that kppp should wait if the line is busy before trying again. You can also set the volume level of the modem speaker so that you can hear what is happening when the modem dials up. Move the slider to the left if you would prefer kppp to connect in complete silence. However, it's best not to do this until everything is working perfectly.

Step 11

11. Select the "PPP" tab. The "pppd Timeout" value should be set to a sufficiently long period, in seconds, to enable the PPP daemon to establish an authenticated connection. With some busy ISPs this can take quite a while. If you try to connect and error messages indicate that pppd timed out, increase this value. (See also the "FAQ" section of the kppp help for more possible solutions.) The remaining options on this tab control the behaviour of kppp itself and should be fairly self-explanatory. Once you have finished, click "OK" to close the configuration dialog.

Step 12

12. You are now ready to connect to the Internet. Type your username and your password into the appropriate fields and click “Connect”. If all is well, you should hear your modem dial your ISP and negotiate a connection, after which the kppp dialog box will minimise or dock to the panel and you will be online and able to surf the Web.




Advanced modem configuration

There are three buttons on the "Modem" tab of "kppp Configuration" that you will hopefully not have to worry about. The "Modem Commands" button lets you see the commands that are sent to the modem and the responses that are expected. You should not normally need to change these, but this option lets you tailor the generic commands to suit specific modems. However, because the exact details depend on your modem, we won't go into this here. The "Query Modem" button sends a series of interrogatory commands to the modem and displays the responses. The "Terminal" button opens a simple terminal window that lets you type commands to the modem and view the responses. These two options are useful for troubleshooting, since they will help you to establish whether kppp is able to communicate with the modem. You can also use the terminal window to change the modem's default settings if required to make it work properly with kppp. This should not be necessary unless you have a strangely configured modem. If you experience problems, try resetting the modem to the factory defaults: your modem's manual should tell you the command to use.

Troubleshooting

If you are using an up-to-date Linux distribution and kppp was installed along with everything else, the program should work right out of the box. However, it's difficult to take into account the different configurations that may be required by some of the thousands of ISPs that exist worldwide. If kppp doesn't work, calling your ISP's helpline may not get you very far, as they may respond with "Sorry, we don't support Linux." Fortunately, there is plenty of help available from the Linux community itself, so you should be able to find out what is needed to get your Internet connection going. Your first stop should be the kppp Configuration Archive (see the Info box). This website holds copies of kppp configuration files for a number of different British ISPs. The configuration file is named kpprc and is kept in the directory ${HOME}/.kde/share/config (where, of course, ${HOME} is your home directory.) Either copy the downloaded file into this directory and edit it to insert your own user name and password, or else compare the downloaded file with the one you already have and make changes as appropriate. If your ISP isn't one of those listed in the Configuration Archive you're going to have to find out the trick that's needed to get it to work with the terminal equipment your ISP is using. If your ISP won't provide the answer, try searching the Web or Dejanews for postings containing the name of your ISP and "kppp". Someone may have been down this road before and come up with the solution. What is probably needed is a switch to be passed to pppd, which you can enter by clicking the "Edit pppd arguments" button on the Account Properties dialog box. One that quite often helps is "noauth", though we've sometimes found ISPs that require "novj" and "novjccomp". For the full list of these options and what they do, type man pppd in a console window. As you can see, there are quite a lot of them, so trying to find the right one by trial and error is a bit of a non-starter.

If you do get kppp working with an ISP that isn’t listed on the kppp Configuration Archive site, and particularly if you needed to change the default settings or add pppd arguments in order to do so, please help other Linux users by contributing a copy of your final kppp configuration file to the archive.

Problems with permissions

If you get errors like “Sorry, can’t open the modem” or “Can’t create a modem lock file”, this means that kppp doesn’t have the necessary permissions. kppp is usually run setuid root, which solves these problems. To ensure that kppp is setuid root, run the following commands from a console window:

Specifying pppd arguments for a dial-up account

su root
chown root:root ${KDEDIR}/bin/kppp
chmod +s ${KDEDIR}/bin/kppp
exit

It’s possible to overcome this problem without making kppp run setuid root, but unless you are obsessive about security it isn’t worth the hassle. However, if you really don’t want to run the very small risk that this could be used to compromise the security of your system, the FAQ section of the kppp help outlines what you need to do.
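You can confirm the result with the shell’s -u file test, which checks the setuid bit. A small sketch (/bin/ls is just a stand-in path for demonstration; substitute wherever kppp is installed on your system):

```shell
# Check whether a binary has the setuid bit set.
f=/bin/ls                      # stand-in file; normally ${KDEDIR}/bin/kppp
if [ -u "$f" ]; then
  echo "$f is setuid"
else
  echo "$f is not setuid"
fi
```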

Other problems

The other error you might see is “The PPP daemon died unexpectedly”. Again, the kppp help lists a number of possible causes for this error. A common cause is that the file /etc/ppp/options contains settings that conflict with kppp. This can occur if your system is set up to make dial-up connections without using kppp. The solution is usually to delete everything in /etc/ppp/options so that the file still exists, but is empty. Many more possible errors, most of them extremely rare, are listed in the kppp FAQ, which is a very good troubleshooting resource. With its help, you should be able to get your Internet connection working no matter what modem and ISP you are using. Happy surfing! ■
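Note that the advice is to empty the file, not delete it. The shell makes this easy to do safely; a sketch using a scratch file in place of the real /etc/ppp/options:

```shell
# Emptying a file while keeping it in place, demonstrated on a scratch copy.
f=/tmp/options.demo
echo "lock" > "$f"      # pretend this line conflicts with kppp
: > "$f"                # truncate to zero bytes; the file itself remains
wc -c < "$f"            # prints 0
ls "$f"                 # the file still exists
```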

setuid root: This means making a program run with the access privileges that would normally only be available to user root. Technically, this creates a weak point in the system’s security that could be exploited by anyone wishing to gain unauthorised access to the system. However, some programs have to be run setuid root, and these programs are carefully coded to make them as safe as possible. For more information see the article on permissions in Linux Magazine issue 1. ■

2 · 2000 LINUX MAGAZINE 99



How To: Create KDE Themes, Part 1


KDE gives you the ability to customise its appearance very quickly using what are known as Themes. In this series we will show you how you can create your own themes.

Theme: A Theme is a collection of multimedia elements which share a common theme in terms of content. For example, if you are a fan of a rock group, you could use a digitised photo of the band as a background image and extracts from their songs as system sounds, creating a Theme.
Desktop Environment: Linux has several graphical desktop environments. The best known are CDE, KDE and GNOME. In contrast to window managers, with which desktop environments are often confused, they usually provide additional functions such as Drag & Drop, Session Management and a panel bar. ■

Along with GNOME, the K Desktop Environment (KDE for short) has now established itself as one of the standard graphical interfaces for Linux. In this series we will take a look at how you can change the appearance of the desktop and applications under KDE using Themes. We will take you through the process step by step using the "eclipse" theme as an example so that you can see what options you have as a creative user to tailor KDE to your own requirements. We will also describe some useful tools which make it easier for you to create your own themes. We don't promise to mention all the programs that could be used, nor can we guarantee to cover all the possible settings. However, the knowledge provided in these articles will provide a base from which you can go on to develop your own ideas. You will find more detailed literature on the KDE Themes Homepage. Note that this article relates to the most recent stable version of KDE – 1.1.2. At present, we cannot tell to what extent themes developed for this KDE version will be compatible with KDE 2. The only way to find out, when KDE 2 arrives, is to try it! Most of the tools you will need are contained in the most common Linux distributions as standard (for this article we used SuSE 6.4). If you find that in your case this is not so, you can download the source code from the Internet at the addresses shown and then


compile it yourself. This will work regardless of the distribution you use. (See the box Installing Tools or the article on software installation in Linux Magazine, October 2000 for more information).

What can we change?

The features provided by KDE can be divided into the following groups:
• Start panel
• Background image
• Icons
• Window keys
• Window title panel
• Window frame
• System sounds
• Colour scheme
• KFM settings
In this first article of the series we will be looking at what we can do with the start panel, the background image and the desktop icons. We will look at the other items in the following two articles.

And we're away First of all, create a sub-directory into which you can collect all the files belonging to your theme. We will do this using the command mkdir eclipse.


Change to this sub-directory (cd eclipse). The next step is to create the central configuration file for the theme (called THEMENAME.themerc) using any text editor of your choice. For example, you could type kwrite eclipse.themerc. When you have finished, the configuration file should consist of several sections. To start, there will be a [General] section where, as the section name suggests, we save general information (see Listing 1, lines 001 to 006). This section contains the information listed in table 1. If you look at the example (Listing 1) you will soon see that not all these details have to be present. In the example, the homepage entry is missing.
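These first few steps can equally be done non-interactively from the shell; a sketch that creates the directory and a minimal [General] section with the example theme’s values (a scratch directory is used here so as not to disturb a real working directory):

```shell
# Create the theme directory and a minimal themerc skeleton.
cd "${TMPDIR:-/tmp}"                 # working in a scratch location
mkdir -p eclipse
cat > eclipse/eclipse.themerc <<'EOF'
[General]
name=eclipse
author=Hagen Hoepfner
description=A dark sun for KDE
version=0.3
EOF
grep '^name=' eclipse/eclipse.themerc    # prints: name=eclipse
```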


Table 1: Header of a theme configuration file
Section name: [General]
Theme name: name=Themename
Theme's author: author=Authorname
Author's email address: email=eMailaddress
Theme's homepage: homepage=Homepage
Theme description: description=Description
Theme version: version=Versionnumber

Table 2: Start panel
Section name: [Panel]
Start panel's background image: background=Filename

Colourful panel, or start panel with a theme

Most users like to apply a graduated fill to the start panel. However, in principle, it is possible to use a graphics file too. It's really a matter of taste. In our example we use just a graduated fill. A good tool for creating attractive graduated fills is the image processing program The Gimp. The best way to proceed is as follows:
• Start "The Gimp" (gimp)
• Create a new file ([CTRL]+N or via the relevant menu)
• Width = 1 pixel, height = 60 pixels (this is actually larger than necessary, but don't worry about it)
• Set colours 1 and 2 (see Figure 1). To set the colours, double-click on the relevant colour to open the colour selection dialog (see Figure 2). To stay consistent with our example, define colour number 1 using the hexadecimal value #b0a48e and colour 2 using #696154.

Installing Tools
Decompressing zipped archives:
• If the filename ends in .tar.gz, unzip using gunzip FILENAME
• If the filename ends in .tar.bz2, unzip using bunzip2 FILENAME
This creates a file with a name ending in .tar.
• Unpack the archive (tar xvf FILENAME)
• Switch to the newly created directory (cd DIRECTORYNAME)
• Configure the compilation process (./configure)
• Start the compilation process (make)
• Install the compiled program (make install)
This last step requires root privileges (type the command su followed by the root password). Alternatively, you can start the program in the relevant subdirectory using ./PROGRAMNAME.
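The two decompression cases in the box can be rolled into a small helper function. A sketch (unpack is our own name chosen for illustration, not a standard command):

```shell
# Pick the right decompressor based on the archive's file name.
unpack() {
  case "$1" in
    *.tar.gz)  gunzip  "$1" ;;   # leaves FILENAME.tar behind
    *.tar.bz2) bunzip2 "$1" ;;   # leaves FILENAME.tar behind
    *)         echo "unknown archive type: $1" ;;
  esac
}
unpack prog.tar.xz    # prints: unknown archive type: prog.tar.xz
```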

• Enlarging our image:
1. Select the enlargement tool by left-clicking on the relevant icon (see Figure 1)
2. Left-click on our new image until it is large enough
• Using the graduated fill tool:
1. Select the graduated fill tool by left-clicking on the relevant icon (see Figure 1)
2. Left-click on the top corner of our new image
3. Hold down the mouse button and move the mouse down
4. Release the mouse button
• Saving the file:
1. Right-click on our image
2. Choose File/Save as
3. Enter the file name panel.xpm
4. Click on OK
As we are still in the eclipse directory, the file panel.xpm should appear in it too. If it doesn't, you will have to find the file and copy it to the directory. Now update the relevant section (see lines 011 and 012 in Listing 1) in the configuration file and the new start panel is ready. For convenience, this section is also shown in Table 2. Figures 3 and 4 show the wondrous change to the start panel when it is included in the theme. Actually, we have cheated a bit here: we have not yet defined any new icons. We will see how to do that shortly.

[right] Figure 1: The Gimp: Foreground and background colour [left] Figure 2: The Gimp: Colour selection dialog

[top] Figure 3: Start panel with standard KDE look [below] Figure 4: Start panel with theme




Background image

Figure 7: Background image tiled

Figure 8: Background image mirrored

There are many different ways of creating a digital image (scanning, rendering, drawing and so on), so how you create the background image for your theme is up to you. We will concentrate on what you need to do to make it suitable for use as a background image. What you must do depends on whether the background consists of a single large image, a colour, a graduated fill or tiled small images. The file wood1.jpg (Figure 5) is an example of a tiled image; it comes as standard with KDE 1.1.2. The result can be seen in Figure 6. Our example theme uses a single background graphic which is stretched to fit the size of the screen. So that this works without distorting the proportions of the picture, the background image should be the same size as the screen resolution, or a multiple of it, so that it matches the screen's proportions. The following dimensions are recommended:

[below] Figure 5: wood1.jpg
[right] Figure 6: wood1.jpg tiled

Figure 9: Background image centre-tiled

Figure 10: Background image centred

Figure 11: Background image centred in front of wall

Figure 12: Background image centred with perspective

Figure 13: Background image centred and scaled symmetrically

Figure 14: Background image tiled

Figure 15: Background image tiled

Figure 16: Background image adjusted to screen

Figure 17: Background image top right-hand corner

Figure 18: Background image top left-hand corner

Figure 19: Background Figure 20: Background image bottom rightimage bottom lefthand corner hand corner

• 640 x 480
• 800 x 600
• 1024 x 768
• 1152 x 864
• etc.
The parameters given in Table 3 are available in the theme configuration file so that the background image can be manipulated (see lines 007 to 010 in Listing 1). The following modes may be used to determine how the background image is displayed:
• Tiled (Figure 7) – The background image is shown tiled, starting in the top left-hand corner.
• Mirrored (Figure 8) – The background image is placed in the top left-hand corner of the screen. If the screen is not filled, the image is mirrored along its edges.
• CenterTiled (Figure 9) – The background image is shown tiled, starting from the centre of the screen.
• Centred (Figure 10) – The background image is shown in the centre of the screen.
• CentredBrick (Figure 11) – The background image is shown in the centre of the screen in front of a "brick wall".
• CentredWarp (Figure 12) – The background image is shown in the centre of the screen and perspective lines are drawn in.
• CentredMaxpect (Figure 13) – The background image is shown adjusted to the size of the screen, starting from the centre of the screen. The height and width are changed in equal proportions. Consequently, the background image may not cover the entire screen.
• SymmetricalTiled (Figure 14) – The background image is tiled symmetrically in the centre of the screen.
• SymmetricalMirrored (Figure 15) – The background image is mirrored symmetrically in the centre of the screen.
• Scaled (Figure 16) – The background image is adjusted to the size of the screen.
• TopRight (Figure 17) – The background image is displayed in the top right-hand corner of the screen.
• TopLeft (Figure 18) – The background image is displayed in the top left-hand corner of the screen.
• BottomRight (Figure 19) – The background image is displayed in the bottom right-hand corner of the screen.
• BottomLeft (Figure 20) – The background image is shown in the bottom left-hand corner of the screen.
Instead of having a common background for all virtual desktops, you can have a different background image for each individual desktop. You can do this by entering CommonDesktop=false. You can then specify the individual images and modes using Wallpaperx=Filename and WallpaperModex=Mode, where x is the number of the relevant desktop, starting from zero. If you don't specify a background graphic, the background colour or background colour fill defined in the colour dialog box is used. We will see how to do this later on.

Table 3: Background
Section name: [Display]
Should the background be the same for all virtual desktops?: CommonDesktop=true / false
Background image of the first virtual desktop: Wallpaper0=Filename
Display method of the first virtual desktop's background image: WallpaperMode0=Mode

Figure 21: kiconedit
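Following the Wallpaperx / WallpaperModex scheme just described, a [Display] section giving each of the first two virtual desktops its own image might look like this (the file names are examples):

```
[Display]
CommonDesktop=false
Wallpaper0=bg.jpg
WallpaperMode0=Scaled
Wallpaper1=wood1.jpg
WallpaperMode1=Tiled
```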

Many, many colourful icons

Just as there are a number of ways of creating background images, so there are a number of ways of creating icons. The most obvious involves the use of an icon editor. A good choice would be kiconedit (see Figure 21).


As it can be a lot of trouble to edit icon pixels, an alternative approach is to use "The Gimp" and its script capabilities. For our example theme we used the Fire logo script (see Figure 22). The procedure is as follows:
• Start "The Gimp" (gimp)
• Start the Fire logo script via Xtns/Script/Logos/Fire logo. (If your copy of The Gimp doesn't have this script installed you can, of course, try a different one.)
• Choose an attractive font and some relevant text (the example uses Helvetica/34/bold and an "e" to stand for "eclipse")
• Press OK – the burning "e" shown in Figure 23 is created
• Enlarge the view for a better overview, using the same method used when creating a graduated fill for the start panel
• Merge the visible layers using the key combination [CTRL]+[M]
• Enclose the burning "e" using the selection tool (sketched rectangle in the Gimp main window)
• Copy the area using the key combination [CTRL]+[C]
• Create a new graphic using the key combination [CTRL]+[N] (the size is automatically set to that of the copied image). Important: set the "transparent background" option in the "New Image" dialog box.
• Add the burning "e" to the new graphic using the key combination [CTRL]+[V]
• Right-click the new graphic and use Image/Scale to resize it to 34 x 34 pixels. Make sure that "Constrain Ratio" (see Figure 24) is not selected, otherwise it is only possible to make symmetrical size changes.
• Save the new picture as e.xpm

[left] Figure 22: The Gimp fire logo script [middle] Figure 23: The burning "e" [right] Figure 24: Scaling with The Gimp

Scan: This is the process of transferring photos or images on paper to a digital image file using a scanner connected to the computer. Render: This is the process of creating a realistic image from a model designed using a drawing program. Tiled images: For tiled backgrounds we make the most of the symmetrical characteristics of images. This ensures that the images merge seamlessly at the borders when one is placed next to another. ■

Table 4: Icons
Section name: [Icons]
The K on the start panel: PanelGo=Filename
Exit key on the start panel: PanelExit=Filename
Key to lock the screen on the start panel: PanelKey=Filename
Home directory: Home=Filename
Trash can: Trash=Filename
Full trash can: TrashFull=Filename



We now have our first icon. Obviously, we cannot describe how to create all of the icons contained in "eclipse" – there are just too many. Three weeks later… Thousands of icons have been drawn or "gimped", and they must now be entered in our configuration file. The sections provided for this are [Icons] (see lines 013 to 019 in Listing 1) and [Extra Icons] (see lines 020 to 040 in Listing 1). The parameters of the first section can be found in Table 4. The [Extra Icons] section is, as Listing 1 suggests, simply a list of various icons. The icon names are identical to those of the icons which are to be replaced. The original icons can be found in the KDE directory (for SuSE 6.4 they are under /opt/kde/share/icons).

That's that

We are now coming to the end of this first part of our trilogy. What remains to be done so that you can admire the results of your work so far? There are two ways of changing the various system parameters so that your theme can be used. Firstly, you can make all the changes manually. However, this means:
• Editing various configuration files
• Copying various graphics and sound files
• Adjusting a range of colours manually
During the development phase, when you may want to try out different variations, these steps are annoying. The second, easier approach is to use the KDE theme manager kthememgr (see Figure 25). This copies the necessary files and automatically undertakes the editing required. To use it you must pack the relevant files, including the configuration file THEME_NAME.themerc, into a tar archive compressed using gzip. To do this, proceed as follows:
• Switch to the directory where the eclipse theme has been created
• Archive the entire directory in a file (tar cvf eclipse.tar eclipse/)
• Zip the archive (gzip eclipse.tar)
Now you can start the KDE theme manager from the console or the program starter ([ALT]+[F2]) using the command kthememgr. Of course, the start menu can be used too. Figure 25 shows this program's main window. Using Add…, import our theme, which will then appear in the list. With a brave OK we now apply our one-third completed theme.

Figure 25: The KDE theme manager

tar archive: Although the tar program was originally designed to back up data to a tape drive, it can also be used to merge several files into one single archive file. However, the data is not compressed during this process. Tar archives are usually zipped using gzip or bzip2 in order to save space. ■

Removing themes

Unfortunately, kthememgr doesn't delete themes cleanly. The individual images must be deleted by hand before the theme is changed. You can do this using three commands:
rm -rf ~/.kde/share/icons/*
rm ~/.kde/share/apps/kwm/pics/*
rm -rf ~/.kde/share/apps/kpanel/pics/*
Don't worry, the theme manager keeps a copy of the files so that they are available if the theme is used again. ■

Info
KDE Homepage:
Example of a theme "eclipse": phtml?cattype=inc=trad=0=1= eclipse
KDE Themes Homepage:
The Gimp Homepage:
kiconedit: 800018/prog.html#KICONEDIT
KDE Theme manager: able/apps/themes/kthememanager-1.0.0-src.tar.gz ■

Listing 1: eclipse.themerc
001 [General]
002 name=eclipse
003 author=Hagen Hoepfner
004 email=Hagen.Hoepfner<\@>
005 description=A dark sun for KDE (made using gimp and its Firetext-plugin)
006 version=0.3
007 [Display]
008 CommonDesktop=true
009 Wallpaper0=bg.jpg
010 WallpaperMode0=Scaled
011 [Panel]
012 background=panel.xpm
013 [Icons]
014 PanelGo=go.xpm:mini-go.xpm
015 PanelExit=exit.xpm
016 PanelKey=key.xpm
017 Home=kfm_home.xpm
018 Trash=kfm_trash.xpm
019 TrashFull=kfm_fulltrash.xpm
020 [Extra Icons]
021 Extra1=kfind.xpm
022 Extra2=image.xpm
023 Extra3=sound.xpm
024 Extra4=aktion.xpm
025 Extra5=kwrite.xpm
026 Extra6=folder.xpm
027 Extra7=kcontrol.xpm
028 Extra8=kdehelp.xpm
029 Extra9=kmail.xpm
030 Extra10=kfm_refresh.xpm
031 Extra11=folder_open.xpm
032 Extra12=3floppy_mount.xpm
033 Extra13=3floppy_unmount.xpm
034 Extra14=5floppy_mount.xpm
035 Extra15=5floppy_unmount.xpm
036 Extra16=core.xpm
037 Extra17=document.xpm
038 Extra18=input_devices_settings.xpm
039 Extra19=kab.xpm
040 Extra20=kvt.xpm
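The packaging steps described in the article (tar first, then gzip) can be sketched as a short shell session; a scratch directory in /tmp stands in for a finished theme here:

```shell
# Package a theme directory the way the article describes.
cd "${TMPDIR:-/tmp}"                # working in a scratch location
mkdir -p eclipse
touch eclipse/eclipse.themerc       # stand-in for the real theme files
tar cvf eclipse.tar eclipse/        # archive the whole directory
gzip -f eclipse.tar                 # compress; leaves eclipse.tar.gz
ls eclipse.tar.gz
```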






File management


You use programs like less or more when you want to view long files such as logs. But these aren't always the best choice. head and tail are two simple tools whose names describe just what they do, and they make the life of every Linux system administrator easier.

Although many things can be readily accomplished using graphical interfaces like KDE or GNOME, anyone wishing to exploit their Linux system properly cannot dispense with the command line. There are also many other situations where it is good to know a little about the command line jungle.

The end is near — tail

This neat little program writes, if it is not told to do anything else, the last ten lines of each file to the screen. If it is displaying the tails of more than one file, tail notes for the user which file it is busy with at the time: ==> file2 <==. If you want tail to display more or fewer than the last ten lines, you can specify the number of lines using the option -n number. The parameter -q (for quiet) is also useful. It comes into play when the ends of several files are output and you don't want to be told which file end is which. (The parameter -v, for verbose, does precisely the opposite!) Probably the most frequently used option of tail is -f (which stands for follow). To keep the display of files which change constantly (like the log files /var/log/messages or /var/log/maillog) up to date, you would normally have to repeat the program call continuously. This option does precisely that: it constantly examines whether the file has grown and displays the latest entries. It is useful, for example, when root would like to view the connection messages of a modem:

root@blue ~ > tail -f /var/log/messages
Jul 27 21:02:22 blue chat[568]: expect (ssword:) ....
Jul 27 21:02:22 blue chat[568]: Password: ....
Jul 27 21:02:22 blue chat[568]: -- got it
Jul 27 21:02:23 blue pppd[567]: Connect: ppp0 /dev/modem
Jul 27 21:02:24 blue pppd[567]: local IP address

Follow the lesser tail

The command less also has an operating mode that permits the following of changes in a file. If you type a capital "F" in a running less ("F" here again stands for "follow"), less waits for new lines to be added to the file being viewed and displays them immediately. You can leave this follow mode at any time using [Ctrl]+[C] and once more scroll through the file. While running in follow mode, less notes this with the text:

Waiting for data... (interrupt to abort)

in the status line.

Head to head!

This program outputs – as you might have guessed – the top ten lines of a file on the monitor. The options available are similar to those of tail: if there are several files to be displayed, the output is visually separated. The option -n number is also available in order to display, say, just the first five lines in each case. head and tail are twins dedicated to the same task: one simply starts from the top and the other from the bottom. If you have a whole series of configuration files in the directory /etc and are not sure which one you have to make a change in, a rapid

head /etc/*.conf

will list the start of all the configuration files. As these files usually start with comments that explain the purpose of the file, the correct one is quickly found.

root@blue ~ > head -n 3 /etc/w*.conf
==> /etc/webalizer.conf <==
#
# Sample Webalizer configuration file
# Copyright 1997-1999 by Bradford L. Barrett (brad<\@>

==> /etc/whoson.conf <==
# whosun server and client sample configuration file
# Configuration entry is: "client" or "server" starting from position 1,

==> /etc/wine.conf <==
;;
;; MS-DOS drives configuration
;;

head has, incidentally – unlike tail – no follow option, as it is rather unusual to allow files to grow "upwards"… ■
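A quick way to try all of this out is on a small generated file; a sketch:

```shell
# head and tail side by side on a file of twenty numbered lines.
seq 1 20 > /tmp/nums.txt
head -n 3 /tmp/nums.txt                    # first three lines: 1 2 3
tail -n 3 /tmp/nums.txt                    # last three lines: 18 19 20
tail -q -n 1 /tmp/nums.txt /tmp/nums.txt   # -q suppresses the ==> file <== headers
```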





There are thousands of tools and utilities for Linux. ”Out of the box” takes the pick of the bunch and each month presents a program which we consider to be indispensable or, perhaps, little known. Here we examine the cache browser nscache.

After an extensive surfing session using Netscape, probably the most widely used web browser under Linux, there is a wealth of files in your personal disk cache. Netscape itself offers only very limited capabilities for viewing these files. NScache, a browser for Netscape’s disk cache, makes considerably more use of the filed data.

Old acquaintance Stefan Ondrejicka, the author of the program, is already known to many people through pavuk, a download tool for web pages. NScache has its homepage at Sourceforge, which is also home to a

Disk Cache: This is temporary storage space which is created by Netscape on the hard disk and reduces the need to constantly re-request web pages and images. This cache is usually located in the directory ~/.netscape/cache. ~ (Tilde): The tilde is shorthand for the user’s home directory. GTK+: A library, originally written for the graphics program Gimp, which is used to program menus, windows and dialog boxes under the X Window system. ■

number of other open source projects. As the program is only available in compiled form for Red Hat Linux 6.1, we can obtain the source code and compile it ourselves.

What do we need?

To be able to install NScache we need GTK+ (version 1.2.0 or later) and the Berkeley Database Library. With the file nscache-0.3.tgz on board, we can move on to the actual installation procedure:

tar xzf nscache-0.3.tgz
cd nscache-0.3
./configure
make
su - (enter the root password)
make install
exit

If an error occurs at configure or make, this is often because although the necessary libraries may be present the corresponding developer packages are not. Distributions separate these from the actual libraries. The packages can be identified by the dev


or devel in their name. You need to install these before you can move on to compiling NScache. Once the program has been compiled successfully using make, obtain root rights using the command ”su -” so that we can install the program below the /usr/local directory using make install. We relinquish the root rights again with the exit that follows.

What is in it? In order to test NScache open an X terminal and enter: nscache & The ampersand sign (&) causes the program specified before it to run in the background. Without it the shell in the X terminal would not be able to process entries again until NScache had finished. NScache will now output a report, showing a tree view of the content of the disk cache (Figure 1). This view comprises three levels: • Protocol (ftp or http), • Servername and • File name on the server.
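If you are unfamiliar with background jobs, this tiny sketch shows what the ampersand buys you ($! is the shell variable holding the most recent background job's process ID):

```shell
# The ampersand runs a command in the background, so the shell
# returns immediately instead of waiting for it to finish.
sleep 2 &
bgpid=$!
echo "started background job $bgpid"   # the shell is free for other work now
wait "$bgpid"                          # block until the background job ends
echo "job finished"
```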


Berkeley Database Library: This library provides functions that can be used to access files organised in the University of Berkeley’s database format. This format is widely used under Unix - the library therefore comes as standard in Linux distributions. X terminal: Programs under the X Window system which provide a text terminal (similar to a command prompt window under Microsoft Windows.) Widely used X terminals include xterm, kvt and gnome-terminal. Shell: One of the most important parts of any Unix system - it provides a command line-controlled user interface for the system. Protocol: A standardised procedure which allows programs in a network to communicate with one another. Well-known protocols include ftp (”file transfer protocol”) and http (”hypertext transfer protocol”). Server: A program that provides a service for use by one or several clients. Examples of servers include X servers, FTP servers and WWW servers. The term server is also used to describe the computer on which the program is running. URL: ”Uniform Resource Locator”, the unique address of a resource on the Net. The URL also indicates the transfer protocol. MIME: Multipurpose Internet Mail Extensions, a method of indicating standardised file types. Examples of MIME types include text/plain (plain text file without formatting) and video/mpeg (mpeg-compressed video stream). MIME is used primarily in mail programs and web browsers. ■

Sections of the tree can be expanded or collapsed at a click of the mouse on the plus or minus signs. Individual file entries are shown in the disk cache with their original URL and their local file name. It is also possible to display a view which is sorted on the basis of different criteria, as shown in Figure 2. The view is switched using the two tabs Tree view and Sorted view. The URL, the size of the file or the date when it was last accessed can be selected as the criteria by which to sort the view.

Action If you have selected a file using the left mouse button, a context menu can be retrieved using the right mouse button. Probably the most useful actions are viewing the file using View file and deleting it using Delete file. We can, therefore, go through the cache and, if need be, remove particular files. MIME-type

specific viewer programs can be assigned using the menu item Options/Viewers setup… (Figure 3) under File viewer. In the example configuration the image viewing program xv was assigned to the type image/gif, while the X terminal xterm, running the text browser w3m inside it, was assigned to the type text/html. In addition, an external browser (e.g. netscape, lynx or w3m) can be given as a URL viewer. If need be, this retrieves the file concerned directly from the Net using View URL. In the entry fields of the configuration dialog %f stands for the name of the local file and %u for the corresponding URL. Armed with this handy tool, no-one has any excuse for trying to tidy up the Netscape disk cache using the rm -rf ~/.netscape/cache/ sledgehammer method. Now you can see what’s useful and what isn’t before you delete it. ■
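The %f and %u placeholders are simple template substitution. This sketch mimics the idea with a small shell helper (run_viewer is our own hypothetical function for illustration, not part of NScache itself):

```shell
# Substitute %f (local file) and %u (URL) into a viewer command template.
run_viewer() {
  # $1 = command template, $2 = local file name, $3 = URL
  printf '%s\n' "$1" | sed "s|%f|$2|g; s|%u|$3|g"
}
run_viewer 'xv %f' /tmp/cache/pic.gif http://example.com/pic.gif
# prints: xv /tmp/cache/pic.gif
```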

Figure 3: Setting the external viewer

Info
NScache home page:

Figure 1: A tree view of the Netscape cache

Figure 2: Sorted view






Each month in KDE Korner we present KDE tools which you’ll wonder how you ever managed without before.

FTP: abbreviation for ”File Transfer Protocol”. FTP is a method of transferring files from one computer to another. This involves an FTP server, i.e. the program or computer providing the data, and an FTP client, the computer and software receiving the data.

The great classics of world literature are never out of fashion for very long even in the computer age. Thanks to the efforts of bookworms around the globe more than 2,000 works by Shakespeare, Goethe and Poe are available for download on various FTP servers in the form of what are known as Etexts.

Unfortunately, surfers seek more recent works in vain because, as things stand, books can only be made available for downloading if their copyright has expired. Below we wish to present Kgutenbook, a program which makes downloading and reading ”Romeo and Juliet” easy and enjoyable.

Figure 1: Which browser?

Figure 4: Heaps of mirror sites

Figure 2: It is better to select the FTP server directly!

Figure 3: Let’s have it!

Figure 5: A clean sheet



Figure 7: Make your selection!

Figure 6: Which juicy read would you prefer?

Against the emptiness on the virtual bookshelf Kgutenbook makes it easy for you to fill up your virtual bookshelf. Once installed, the latest version can always be found at unstable/apps/network/ – you just need to follow Kgutenbook’s commands as we work our way through this article. After starting the program via K-menu–>applications–>Kgutenbook you must first decide which browser it is to use (Figure 1). Choose kfm, as KDE programs usually interact very well with one another. In order to finally configure the program you should now connect to the Internet, as Kgutenbook needs to know which FTP server you wish to use to retrieve your favourite classics. Click on the Now button in the popup window shown in Figure 2 and download a list of available servers (Figure 4) by clicking on the Download button in the window which then appears (Figure 3). A pretty long list, isn’t it? As soon as you have decided on a server, confirm your selection by clicking on the OK button. Kgutenbook now greets you with its still very empty main window (Figure 5). In order to fill this you need to stay online for a little longer and download the index of available books to your hard disc. To do this, click on the arrow icon in the first row of the menu panel. As soon as the download has finished, Kgutenbook automatically shows you what reading material awaits you on the FTP server (Figure 6). If you would like to study this list at your leisure, you can now go offline. If, on the other hand, you have already made your decision, download the text directly by clicking on the relevant title.

Read me!

Figure 8: A different kind of reading

As soon as the books are safely on your hard disc you can view the contents of your virtual bookshelf at any time by clicking on the book icon in the main window's menu panel (Figure 7). Select the required reading matter from the list with a click of the mouse. Open the book concerned by clicking on the book icon (Figure 8). All you have to do now is read! You can browse through it using the arrow keys in the second row of the menu panel. You can even create virtual dog-ears: if you wish to add a bookmark somewhere, just click on the tick icon. Click on the red banner to return to this position at any time. If, when browsing the classics, you really develop a taste for books and would prefer to have the black and white print in your hands, that's no problem: click on the printer icon and Kgutenbook gets started. ■



Jo's Alternative Desktop


You alone determine the look of your Linux desktop. In this series we look at some alternative window managers and desktop environments. This month, WindowMaker is to be the object of our desire – a decidedly well-equipped window manager which does not suffer markedly in comparison with the likes of KDE or GNOME.

X11: a system with networking capability that allows graphical output. (Virtually) all UNIX programs use X11, as do the window managers that run on it. GNUstep: an environment derived from NeXT (now Apple), which endeavours to provide programs with a uniform interface. The GNUstep project endeavours to remain compatible with Apple applications. Further information can be found on the Web (see the Info panel). Compiling: programs are written using programming languages. They consist of files (source files) that contain pure text which cannot be run by the computer directly. Compiling is the process of converting the source files into machine code that can be executed. Libraries: programs or routines that can be used by a number of programs. They do not come with every program; instead a single copy is installed in some shared location. ■

WindowMaker is undoubtedly one of the great classics among the X11 window managers. Window managers provide applications with window frames and manage them on the desktop. Many are satisfied with just drawing the frames and leave everything else to additional (optional) tools. Not so WindowMaker, which almost merits the description "environment". Strictly speaking it is one, but that is a matter for developers and enthusiasts of GNUstep programs. To some extent, though, it is true for the desktop user too.

A small environment

Exactly what difference does an environment make? By general consensus it should offer drag and drop and support desktop icons, and programs should have a standard look and feel. Since WindowMaker does not interface directly to programs, there is no uniform "feel" to the programs. Drag and drop, too, works only within a particular frame: it is not possible to drag a text file to an editor icon in order to open it in the editor. Nevertheless, WindowMaker is very convenient to work with, and solutions other than Microsoft Windows, KDE or GNOME are more likely to be seen as an opportunity than a limitation. WindowMaker is mature and complete – it lacks nothing. Since WindowMaker is really good – and because of this many old hands greatly enjoy using it – so far no distribution has fallen into my clutches that didn't also include it. Very often, WindowMaker merely has to be installed from the CDs. It has matured over a long period and further development is undertaken very carefully.


Users of older versions should avoid the dangers of update mania: when you do update, please use the packages of your own distribution. Whilst compiling the source code may not be difficult, the result is often a little unstable. From experience this only works well if you know for certain that none of the newly installed libraries were supplied from anywhere other than the distribution's own package management. It is a task for experienced users only. Those who nevertheless cannot stop themselves will find the current version – as well as further information – on WindowMaker's Web page or on the accompanying CD. To install the software, refer to the file INSTALL contained in the package, which explains in detail what is to be done. There are options which can be selected during compilation to add support in WindowMaker for GNOME and KDE programs. You can manage without this, but those who need it simply add the option "--enable-gnome" or "--enable-kde" after configure. Those not doing the compilation themselves are certain to find a special WindowMaker package in their distribution.
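For those compiling anyway, the build follows the standard GNU steps described in the package's INSTALL file. The archive name and version number below are only examples; the "--enable-gnome" and "--enable-kde" options are the ones mentioned above:

```shell
# Unpack the source archive (file name is an example)
tar xzf WindowMaker-0.62.1.tar.gz
cd WindowMaker-0.62.1

# Configure with optional GNOME and KDE support, then build
./configure --enable-gnome --enable-kde
make

# Install system-wide as root
su -c "make install"
```

These commands obviously need the source archive and a compiler to hand, which is exactly why the distribution's ready-made package is the easier route.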

After the installation

WindowMaker will not start immediately after it has been installed, because every user who wants to use WindowMaker must first carry out a user installation. This creates the essential configuration files in the user's home directory. The easiest way for this to be done is for the user to run wmaker.inst. If the distribution in use doesn't have its own ideas concerning startup and configuration, a ~/.xinitrc is created too (an already existing one is saved first if necessary), which ensures that after entering startx on the text console our WindowMaker carries out its tasks reliably (see Figure 1). On starting X via startx, the file ~/.xinitrc is invoked, and it contains the command wmaker. If unforeseen difficulties arise here you should consult your friendly distributor about the problem, or at least consult the manual with regard to the preferred method for starting the graphical interface – unfortunately not everybody adheres to the usual rules of the game.
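If you ever need to write a ~/.xinitrc by hand, very little is required. This sketch assumes only what is stated above – startx reads ~/.xinitrc, which must end up running wmaker; the xsetroot line is just an optional extra:

```shell
#!/bin/sh
# Minimal ~/.xinitrc for a WindowMaker session.
# Anything started before the window manager must go into
# the background with "&"; the final line must not.
xsetroot -solid darkblue &   # optional: plain background colour
exec wmaker                  # WindowMaker becomes the session process
```

When wmaker exits, the X session started by startx ends with it, which is why exec on the last line is the usual idiom.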

Automatic "startx" on login

Graphical logins look good. Whether they make sense, though, is a debatable point. If the graphical interface (X) suddenly stops working, the attraction of a graphical login vanishes just as suddenly. A graphical login is also scarcely to be recommended on computers somewhat advanced in years, as in this case X is started twice in succession: once to log in, and once after you have logged in – which takes rather a long time.


A nice solution to this dual start-up is offered by a small file called ~/.bash_profile. This file is optional, but if present it is run by bash (the standard shell used under Linux) on logging in. "Optional" means that if the file is not present it can simply be created; and whoever creates it can cheerfully do whatever they like with it. Thus it is legitimate to start X from here, together with WindowMaker. Our script in this file should first test whether we are logging in at this computer (since we could also, in theory, log in from somewhere else). If it is certain beyond doubt that we are located directly at the box in question, the next test should be to see whether X is already running, since we could have started a login X that has since finished. The script shown below is an example which works well and avoids a lot of chaos:

case `/usr/bin/tty` in
    /dev/tty[0-9]*)
        echo " "
        echo "local login - find X-lockfile."
        echo " "
        if [ -f /tmp/.X0-lock ]; then
            echo "X-lockfile present - X will not be started."
            echo " "
        else
            echo "X not active - start X in 3 seconds (cancel with Ctrl-C) ..."
            sleep 1s
            echo "2 seconds ..."
            sleep 1s
            echo "1 second ..."
            sleep 1s
            if [ $? -eq 0 ]; then
                startx
            fi
        fi
        ;;
esac

It is important for this script code to be located at the end of the file, otherwise other settings made within it will not be visible to WindowMaker. Since typing it out is not much fun, it is also enclosed on the CD.

Fig. 1: WindowMaker as factory preset
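The case pattern at the heart of the script can be tried out in isolation. This snippet only reports which branch you would land in: on a virtual console it prints the first message; in an xterm, an ssh session or a pipe it prints the second:

```shell
# Same test as in ~/.bash_profile, reduced to its core:
# tty prints the current terminal device, and the pattern
# /dev/tty[0-9]* matches only the local virtual consoles.
case `tty` in
    /dev/tty[0-9]*) echo "local console login" ;;
    *)              echo "not a local console" ;;
esac
```

Only once this prints "local console login" does it make sense for the full script to go on and check for the X lock file.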

WindowMaker 0.62.1, WindowMaker Extras LinuxUser/desktopia/

Info
WindowMaker home page
WindowMaker tutorial orges.t/
GNUstep ■

Autostart – the second

One of the marvellous things about WindowMaker is that it even offers us our own autostart – the file for this being ~/GNUstep/Library/WindowMaker/autostart. If, for example, you want to coax the wheel on your mouse into working using the utility imwheel, you can simply enter it here. It is important, though, to close every program invocation with a "&" unless the program terminates again immediately – this sends the program into the background. If you don't do this, subsequent lines won't be run until the program on the preceding line has terminated. Thus you can activate a screensaver, for example, using the line:

xscreensaver -no-splash &
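Put together, a small autostart file might look like the sketch below. imwheel and xscreensaver are the programs named above; anything else you add follows the same pattern:

```shell
# ~/GNUstep/Library/WindowMaker/autostart
# Each long-running program needs a trailing "&" so that
# the lines after it are reached at all.
imwheel &                  # mouse wheel support
xscreensaver -no-splash &  # screensaver daemon
```

A program that exits straight away (setting a background colour, say) may safely run in the foreground; everything else gets the ampersand.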

Basics

In the top left corner of Figure 1 you can see the so-called "Clip". At the corners of the paper clip you can switch between a number of desktops. Various icons can also be attached to the Clip for each desktop. In the illustration you can see how the Netscape icon (taken from the bottom icon bar) is "docked" (inserted) at the top left. The "Dock" itself is located at the right-hand border of the screen. It is virtually identical to the Clip, but is present on all desktops and always the same. The lowest icon of the default Dock invokes WPrefs, WindowMaker's graphical configuration tool (which is currently well able to compete with the separate GTK configuration tool wmakerconf). You will find it easy to discover the basics of WindowMaker use after playing around for half an hour, so we will concentrate here on the more important features. A double-click on the title bar creates an exceptional amount of space on the desktop, which is why many people no longer need any other desktops. WindowMaker is a real feast for those who like keyboard operation: it is fully adaptable, though adaptation is scarcely necessary thanks to intelligently chosen default settings. Even the movement of windows is possible via the keyboard. And anyone who finds it too much trouble to fetch the start menu onto the desktop each time can simply pin it to the title bar with a mouse click.

Start menu

There are two ways that the menu can be configured. By default the older method is used, which is intended for editing with a text editor. The file can be found at ~/GNUstep/Library/WindowMaker/menu – and since it comes with an introductory guide and examples, it is easy to modify. For those who don't want to use their editor for this, the tool wm-oldmenu2new comes in useful; it converts the old file into the new format and thus enables it to be edited using WPrefs. This menu is saved in the file ~/GNUstep/Defaults/WMRootMenu. If a usable menu is already defined there it is used; otherwise the one defined using the old method is pressed into service.
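As an illustration only – the authoritative reference is the commented menu file itself – an entry in the old-style text menu follows a MENU/EXEC/END pattern roughly like this (program names here are just examples):

```
"Applications" MENU
    "XTerm" EXEC xterm
    "Netscape" EXEC netscape
"Applications" END
```

Submenus nest the same way, which is why the file remains readable even for quite large menus.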

Additional information

Those who are interested can take a complete tour through the world of WindowMaker: a tutorial is available (see Info panel) which scarcely leaves any questions unanswered. This, incidentally, is also available for downloading and can thus be browsed offline at leisure. Should any questions remain, they are bound to be covered in the FAQ (Frequently Asked Questions) file that accompanies every WindowMaker package. Since WindowMaker alone does not amount to a complete desktop, the way to achieve an individual desktop with WindowMaker will be described in the next article. ■



Crossing the reality gap


You'd think that Linux would have the shoot-em-up genre pretty well sewn up by now. However, realistic scenarios along the lines of Opposing Force have been thin on the ground. With Soldier Of Fortune from Raven Software (Heretic II) and Lokigames, this gap is (at least partly) bridged.

Soldier Of Fortune transports the player into the secret and deadly world of the modern soldier. Using innovative multiplayer modes, your man has to battle his way through dozens of realistically designed missions on five continents. The game (around 740MB) is easy to install thanks to the Loki setup. Those using a Banshee graphics card should add the following lines to the "sof" start script to avoid spoiling their fun:


Fig. 1: Main menu, from nice little details to the status indicators.

Graphics and control errors may arise if this is omitted. The 3D shooter genre is not well known for its innovative gameplay, and that reputation isn't threatened by Soldier of Fortune. But if you prefer games that are thin on elaborate plot, then the all-action Soldier Of Fortune is the one for you.




The soldier of the title is one John Mullins. He's paid for carrying out the most diverse of secret missions, and is assigned his tasks at the start of every mission. The gameplay now and then reduces such tasks to the familiar "wipe out everything that stands in your way" routine, but there are hostages to be rescued and civilians who have to be spared occasionally, so it's not fair to generalise too much. The game runs on a highly modified and tuned Quake2 engine; all the levers and adjustment wheels, for example, are simply borrowed from Q2. Otherwise the engine turns in a good performance when it comes to presenting scenarios realistically. SoF is especially outstanding in the richness of its detail. Your opponents don't all look as if they have been cloned: they wear glasses and hats, which can even be shot off (unfortunately not collectively). The sound too is nice – running water, speech, shots and other sounds come across as very realistic – although the death cry is somehow always the same.

That brings us to the most outstanding feature of this game: its brutality. The engine distinguishes between twenty-six different types of shooting injury – each of which is shown in detail and animated! You can put opponents temporarily out of action with well-aimed shots to the knee or arm, but they'll get really angry with you. Shot-away body parts and other gory details are reproduced by the game with macabre precision. Consequently, the game is unsuitable for children. In some places it feels like a Western, as you shoot your opponent's gun clean out of their hand, or let an unfortunate sniper have it in a particularly vulnerable area. Ouch! It's rather courageous to launch SoF at a time when such violent games are increasingly controversial. Mind you, the depictions of violence can be greatly reduced via a password-protected menu option. It may be that a version with "moderated violence" is the one that reaches European shelves.

But realism gives rise to another effect too. From time to time, it seems as though you're the star of your own action film. And this is fun. The effect of the various weapons is very realistic: if you're hit by a rocket then you'll find yourself (at best) in a lot of tiny pieces. There is still some way to go to reach the cinema-like standard of Half-Life, but everything is basically fine – from the thunderous soundtrack to the faultless effects. The developers have used the tempo of the game to intensify its "cinematic" qualities. It has more or less the best timing of any game of this type that I have come across. Soldier Of Fortune combines a very well balanced assortment of film sequences and action. You won't be exhausted from sheer action, nor sustain a significant fall in adrenaline.

Your arsenal is oriented towards current state-of-the-art weaponry. You have a small- and a large-calibre pistol, a shotgun, an automatic submachine gun (with and without silencer), a firearm with (zoom) telescopic sight, an extremely useful and a heavy machine gun and so on, all at your disposal. All weapons have absolutely realistic operation. The number of weapons that you can carry around with you at any given time varies according to the degree of difficulty or level of realism that you have selected. The multiplayer mode uses the games network. This is integrated into the game transparently, so that it is hardly noticeable. For multiplayer there are fifty different characters to choose from and seven different game variations. Among others there is a "realistic" death-match, in which things like tiredness (if you run around too much) and manual reloading of the weapon play a role, or "Assassin", in which you have to catch a certain opponent whilst being hunted by another. There's an indicator built into the screen which registers the amount of noise you make (as in "Thief", for example). The more noise you make, the more your adversaries will notice you. Mind you, your electronic opponents' intelligence can be extremely "unintelligent": on one occasion a soldier standing next to his mortally wounded comrade was still unaware of your hostile presence.

Although the "Ghoul" engine is based on a long-established design, Soldier Of Fortune is in no way an outmoded game. The soundtrack and the film sequences are extremely well devised. The effects exhibit a hitherto unknown realism. Opinions will differ over the presentation, to put it diplomatically. All in all the game is soundly built and good value for money. ■


Rating:
Long-term game fun: 75%
Graphics: 85%
Sound: 75%
Control: 85%
Multiplayer: 90%
Overall rating: 80%




Linux game on!


Why do Descent clones exist at all? The only one worth mentioning has been forgotten and the rest are unmentionably bad. This is due to the high quality of the first two Descent games. Now Outrage/Interplay and Loki introduce the third edition of the classic – which also runs under Linux.

Fig. 1: The start-up screen.

Who remembers Descent 1 and 2 – games that inspired the phrase "video game-sick"? The pilot of a small one-man spaceship had to fight his way through mines filled with rebelling robots and blow up reactors. The first part of the game was a milestone in 3D technology. Indeed, Descent was around for a long time before Quake and its kind brought genuine 3D rendering. After the smooth installation of Descent 3 you'll see a furious, computer-generated introductory film. The hero in his damaged spaceship drifts unconscious towards the sun. He is saved from the smoking wreck at the last minute by a fascinating machine.

So – what's different about Descent now? The answer: a lot. First of all, everything is 3D, including all power-ups and every trivial detail. The lighting and the ambient effects make Descent one of the most graphically impressive members of the 3D family. Weapon fire glows, mist and haze look wonderful and opponents are highly detailed. The game engine is really two engines combined. I don't know how they did it, but the results are quite convincing. For the first time there are "outside scenes" in Descent. You'll fly in the open sky and through large landscapes. Although there is not too much life outside, the display is convincing and an agreeable change from the claustrophobic atmosphere that would otherwise build up. Missions have become more sophisticated. The player no longer simply charges from room to room looking for coloured keys. Instead, pressure is built up with the introduction of time limits, or you'll have to search through a large complex for files. Some missions can only be solved by stealth. The designers obviously read through their "How to write games" manual for this one and added a few original extras of their own!

Fig. 2: Many options can be changed.

Lots of varied opponents cross the player's path. The range extends from the lowly vacuum-cleaning robot to monsters who'll send you into the hereafter in a New York minute. The AI delivers the quality expected after Descent 2, plus improvements. Instead of simply coming at you en masse, opponents take cover, pull back and even set ambushes. The arsenal of weapons has also grown: aside from the familiar quad laser or the Vulcan, there's also the new railgun-like Mass Driver. In addition, three ships are available for selection – a light, nippy one, a heavy, strongly armoured one and one that's in between. In contrast to the wildly angled designs of older generations, the new levels are gigantic and have a logical structure. The cartography function is built in and better than ever. The instrument displays can be changed in size, appearance and position, so that every player can probably find his optimal gaming set-up. The sound is atmospheric and dynamic, droning at times, but bearable. Explosions are loud and crisp. Debris flies through the area and contorted lighting streaks across the room. All in all, the effects section is absolutely satisfying. The multiplayer mode offers an astonishing plethora of variants. You'll be confronted with no fewer than nine different types of game, from the simple death-match to cooperative campaign games. In one variant the participants even play football (seriously!). Some of the modes, however, are quite sparsely documented.

Fig. 3: Dark caves…


The game is strong on plot, which is sustained by excellently crafted intermediate scenes and a theme beautifully drawn through the game. For new players there is a practice level in which the principles of control and the game are explained audibly and in text. Descent 3 beginners need more time to learn the controls than with other 3D games, but after learning Descent 3, other games will be a piece of cake. One of the few things that grates is the inclusion of the robot thief from D2 – together with his new big brother, the Superthief. The old one got on my nerves quite a lot; the new one is a real nightmare. It's faster and more cunning than ever before and capable of stealing your weapon mid-fight. Although a thief in some missions is quite funny, he's used to the point of overkill in this game. This game has fantastic graphics, good sound, detailed mission structure, a real storyline, interesting weapons and opponents as well as a successful multiplayer division. Nowadays it is becoming increasingly difficult to find real weak points in the new editions of tried and tested games. ■

Fig. 5: ... and impressive outside scenarios.

Fig. 4: ... fantastic explosions ...

Rating:
Long-term game fun: 70%
Graphics: 90%
Sound: 75%
Control: 85%
Multiplayer: 90%
Overall rating: 85%

AI: Artificial intelligence, the attempt to program machines so that their reactions mimic those of humans. ■




Unreal life


Perhaps the best-known simulation game of all time is now available on Linux. But with expectations for the new edition of this classic game so high, is the third generation bound to disappoint?

The pre-release version at our disposal needed 440 MB of hard disk space to install. After startup, a window opens that can be up to 1152 x 1024 in size. Alternatively, you can switch to full-screen display, which speeds up the graphics slightly. All in all, SimCity 3000 would appear to be astonishingly resource-hungry for a simulation game. It needs a 300 MHz Pentium-class processor and at least 32 MB RAM (though I would definitely recommend 64). You are then given the possibility of simulating cities of enormous size with a level of detail never seen before. Your attention is grabbed as soon as you see the film sequence in the main menu.

Fig. 1: Intro video (left), main menu (right).

[left] Fig. 2: Many details at the best possible enlargement. [right] Fig. 3: Famous structures, such as the Eiffel Tower, can be built.

Cute little animated buttons switch to the main program functions. Accompanied by jolly background MP3 music (from a selection of over 10 tracks), you can begin creating your first big city. Tried and tested procedures and operating concepts have been retained. Superfluous frills have been mostly avoided and many entertaining details added. So, instead of the "shifting fly droppings" of days gone by, you get real street activity with animated people, bicycles, cars and so on. Building details are neatly animated and the attention to detail is pleasing. When looking at a police station's building data, for example, you stumble across the line "Monthly doughnut consumption".


Even though it is still unmistakably "Sim City" and little has changed in the basic design, the game has become much more flexible, with many improved features. The music changes as your city grows, echoing the changing scenarios. Car horns and irate motorists shouting can be heard if there is a traffic jam. Famous buildings from the real world, such as the pyramids or the Statue of Liberty, can be included in the city. The selection of buildings is bigger, and new modes of transport, such as underground railways and bypass roads, have been added.

When making decisions, the player has the support of a team of electronic consultants. They have their say automatically or on request, depending on the situation in the game. You should definitely read what they have to say – they have many useful hints to give. You also have detailed diagrams at your disposal and overviews of events in the game. These make it possible to assess any situation accurately.

Operation is, as usual, mostly intuitive – but unfortunately you can get stuck in a few spots. Also, the problems encountered in the Windows version of the game have been imported into this platform. "True to the original", it gets stuck when building bridges or slip roads onto motorways, libraries have a negative number of books and the export of the water supply constantly fails for reasons that are hard to understand. These are small details well known from the Windows version, correction of which is a job for the programmers of the original; it would be too much to expect the Loki programmers to rectify them.

As was the case in the older versions, with time the game will come up with one or two surprises. UFOs can suddenly descend on the well-ordered streets of the city and, if you're unlucky, reduce some part or other of your city to rubble. Or maybe a tornado or earthquake will play "landscape designer" in the inner city, meaning you soon have your hands full with things to do.
Unfortunately, the same applies to computers: if you have loaded a big city with lots of activity, then even the latest computers can crash. I think there is still room for improvement here for Maxis and Loki. I mean, whoever heard of a Sim City bringing a Pentium 450 to its knees? Perhaps increased use of accelerated graphics would be helpful. It is some consolation that the Windows version of the game suffers from the same problem.

The logic and intrinsic intelligence in the game are OK, but in the final analysis a bit so-so. So, the inhabitants of a city do not want to live right next door to a high-security prison – that's logical enough. And they complain if neighbouring cities have advantages that are not to be found in their own. The fact that they still complain about their taxes being too high, even when these have dropped to 2% – OK, even that could be realistic. What I missed here was a multiplayer option.

On the plus side, you do have the option of designing your own buildings and integrating them into the game. Also, the possibility of transferring old games from SimCity 2000 and expanding them with the new options offered by SimCity 3000 will certainly be enormously motivating for some fans. Humorous details and the funny news tickertape at the edge of the screen mean you always have something to smile at. If a city is going too well, you can always let loose one of the available catastrophes and then measure the city's powers of regeneration.

In spite of all the new features and research that have gone into the new version (the developers even investigated how water towers actually work), it is still the same old Sim City. If you cannot stand SimCity, SimCity 2000 or similar games, this won't convert you. But for all the Sim-addicts out there, it's clearly a must. For casual players the impulse to buy may not be quite as strong; it is very much a question of which way your inclinations lie. ■

Fig. 5: Various consultants are on hand to help

Rating:
Long-term enjoyment factor: 70%
Graphics: 55%
Sound: 85%
Control: 70%
Multiplayer: N/A
Overall rating: 65%

Fig. 4: Tooltips help the beginner (left). Preferences (right).




New civilisations


Another classic from Loki arrives fresh on the Linux desktop. This time it's Sid Meier's latest work, Alpha Centauri, that is joining the already wide range of high-quality strategy simulations for Linux. Firaxis, a software house with strategy experience, provided the necessary background.

Figure 1: A number of video sequences, some of which are rather long but nice to watch

A user-friendly setup program makes installation child's play. The installation size can be selected in five stages between 26 and 601 MB; if completely installed on the hard disk there's no need to insert the CD later on. Like most games from Loki, Alpha Centauri has ESD support – you don't miss any warnings or system sounds while the game has the sound card allocated. The sounds heard during the game are pleasant and unobtrusive. Useful additions in the Linux version, such as the ability to iconify using [Ctrl+Z] and to edit string gadgets using the shortcuts from vi or emacs, round off the technical side of the implementation nicely.

Those of you who have already played the classic Civilization 2 will remember that there were two possible ways to win a game: by totally wiping out all your opponents, or by becoming the first civilisation to send a manned space mission to Alpha Centauri. The latter was undoubtedly the most honourable, but also the most difficult way. This is where Alpha Centauri starts. There is a malfunction on board the expedition's spaceship and the population travelling on the ship is awoken from deep sleep. In the chaos that ensues, the future inhabitants of space are divided into seven groups, each with their own ideology. They decide to divide the ship before it crashes, powerless, into the destination planet, in order to increase their chances of survival and to be able to build up civilisations in various places, each based on its respective ideology. The "Spartans", for example,


favour war as a solution to conflicts, whilst the "Morgan conglomerate" endeavours to maximise personal wealth. So this is where we are: the ship has landed, or crashed; the people have built the first town using the materials available; and now there is a difference of opinion.

Many will soon realise with a yawn that Alpha Centauri is not much more than a third rehash of Civilization. In addition, the terrain can seem strange and boring, and many of the terms and routines with which Civilization players are familiar no longer apply. In Civilization we had sports complexes, libraries, mathematics and the alphabet: things with which everyone is familiar and which we could classify and use in the context of the game without any problems. But now the future has arrived: there are network nodes, gene factories, retrovirus research projects and ethical mathematics. Many of the familiar, easily understandable elements from the old days now have strange names – the spies, for example. Furthermore, Alpha Centauri cannot be easily extended with modules, as Civ could in its time.

Enough about the negative points. Others will be excited by the new possibilities Alpha Centauri offers the ambitious simulation freak. Many games are like cheap beer: you drink some, have some fun, but soon forget what you drank and what exactly you did. Alpha Centauri, however, is like the work of a master wine-grower: designed in detail, each ingredient closely inspected, each process carefully considered. The design has been distilled with careful attention to detail, and the result is pure enjoyment.

Firaxis invested heavily in a detailed background story for the game. This creates a credible situation which grips the player. Co-author Brian Reynolds has done his homework: Sid Meier's Alpha Centauri has the degree of scientific plausibility we expect from a game from Firaxis, a software house that also developed titles such as Gettysburg and Railroad Tycoon. On the back page of the manual, for example, there is a short essay on the dynamics of the formation of planets and solar systems, which explains why the "Planet" in Alpha Centauri is the way it is. This kind of detail is the difference between a good and a really good design.

Also worth noting is the amazingly high level of atmosphere and personality for a strategy game. The main reason for this is the high quality of the game texts. These texts, in conversation and negotiations, are what bring the leaders of the various groups and all the other characters to life.

[above] Figure 2: The first game. Thank goodness the help is excellent

[below] Figure 3: This is how it looks if you are to configure a whole world

2 · 2000 LINUX MAGAZINE 121



[above] Figure 4: A confusing variety of research goals and discoveries

[right] Figure 5: Designing new units is fun. [ left] Heads of state à la Alpha Centauri

game’s diplomatic model is certainly one of the best there is in a computer game. The players seem to be dealing with real people when they try to keep relations with up to six other civilisations in check. The option to configure units individually down to the last detail is a lot of fun and allows for an almost infinite number of variations. Of course, each “tribe” has its own advantages and disadvantages, which take effect in the course of the game and which players can use to their benefit where necessary. The artificial intelligence is cleverer than it is in other games of this kind and cannot be so easily outwitted with diplomatic tricks. The help system integrated into the game is context-sensitive and activates itself automatically. It makes life particularly easy for beginners as they try to find their way through the jungle of options. Again, there are several ways to win the game, including a diplomatic and an economic victory. While in Civilization the mission to Alpha Centauri was the most desirable victory, this time achieving “Transcendence” is the highest goal. In addition to the diplomacy aspect, players must ensure that their own citizens are always in a good mood and productive. Players spend a lot of time exploring the terrain, discovering new technologies, building new bases and conquering enemies. The bases play a key role as they serve several purposes, such as the construction of new units and research. It is also possible to invent and build new things to increase the productivity and well-being of the inhabitants. When new discoveries are made, players can use them to “upgrade” all the old units. The technology aspect of the game is so extensive that it is a little hard to grasp. You may catch yourself making a gut decision, as it is so easy to lose an overview of all the technologies. The secret projects the game offers are something else – in this respect the game is a lot less restricted than its predecessors. A host of “special” projects, some of which trigger very important events for the outcome of the game, are waiting to be completed. The game’s multi-player capabilities are up to date and even include the option to play moves by email. A wide range of functions can be switched on or off, providing extensive scenario settings. Several players can even play on one computer (taking turns). On balance, although Alpha Centauri isn’t completely different from its turn-based strategy predecessors, what it does, it does well. It feels as if the game designers have done practically everything right. So it is more of an evolution than a revolution. There is likely to be a difference of opinion on the originality of this evolution. But for simulation fans, there’s no getting away from it: buy this and you’ll be spending long nights playing “just one more move”. ■

Evaluation:
Long-term enjoyment: 95%
Graphics: 55%
Sound: 75%
Control: 90%
Multiplayer: 85%
Overall score: 85%



Installing and using joysticks


USB joysticks can be used without problems (provided the distribution is prepared for them, as SuSE and Mandrake are). The range of supported devices leaves many commercial competitors standing. However, power gamers must do without force feedback entirely.

Joystick module pre-compiled
Setting up a joystick under Linux is easy, provided you understand the logic behind it. Normally, game programming libraries (such as SDL) use the connected joystick with the aid of the device files /dev/js0 to /dev/js3. If these interfaces haven't already been set up by the Linux installation, the command mknod /dev/jsX c 15 X provides a remedy (where X stands for the numbers 0 to 3). The wide choice of supported devices turns into a slight disadvantage when it comes to selecting the correct kernel driver module for your input device. In many distributions all joystick modules come pre-compiled and ready for deployment in the corresponding driver folder (e.g. /lib/modules/2.2.16/misc).
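The mknod step can be wrapped in a short loop. Since creating device nodes requires root, the sketch below only prints the commands it would run; pipe its output through sh as root to apply them.

```shell
#!/bin/sh
# Print the mknod commands for the four joystick device files
# (character devices, major number 15, minor numbers 0 to 3).
# Run the output as root to actually create /dev/js0../dev/js3.
for X in 0 1 2 3; do
    echo "mknod /dev/js$X c 15 $X"
done
```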

The agony of choice
If you don't have one of the gamer tools listed in Table 1 you must check compatibility before setting it up. Joysticks for the analogue game port (Classic PC analogue) are the most common. The appropriate kernel module should be loaded first with modprobe joy-analog. To do this, you must be logged in as the root user. You can check that the joystick has been successfully initialised using the command dmesg. The last two lines of the resulting text should look as follows:

js: Joystick driver v1.2.15 (c) 1999 Vojtech Pavlik <>
js0: Analogue 3-axis 4-button 1-hat joystick with CHF extensions at 0x201 [TSC timer, 448 MHz clock, 1234 ns res]

In this example, a three-axis joystick (0-2: X/Y/thrust) with four fire buttons and a "hat stick" was recognised. The all-round view lever can be used in games in the form of two further axes (3 and 4).
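The axis and button counts in that dmesg line can be pulled out with standard tools. The sketch below uses the sample line quoted above and assumes the driver's "N-axis M-button" wording stays fixed.

```shell
#!/bin/sh
# Extract the axis and button counts from the joystick driver's
# dmesg line (wording as printed by joystick driver v1.2.x).
LINE='js0: Analogue 3-axis 4-button 1-hat joystick with CHF extensions at 0x201'
AXES=$(echo "$LINE" | sed -n 's/.* \([0-9][0-9]*\)-axis.*/\1/p')
BUTTONS=$(echo "$LINE" | sed -n 's/.* \([0-9][0-9]*\)-button.*/\1/p')
echo "axes=$AXES buttons=$BUTTONS"   # prints: axes=3 buttons=4
```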

Linux supports a dozen different joystick systems. In addition to the usual analogue game port, it even offers drivers for exotic C64 and Amiga joysticks.

Anything else?
This additional function is not automatically recognised. It requires an additional module option which (along with the device file to joystick module allocation) can be entered in /etc/conf.modules. The driver is then loaded automatically as soon as a game accesses a device file (/dev/js*):

alias char-major-15 joy-analog
options joy-analog js_an=0x201,0x2fb


The parameters following js_an specify the base address of the game port and the special settings for the hat stick. The exact allocation of the special functions (pad buttons, CHF and FCS hats) can be found in /usr/src/linux/Documentation/joystick.txt.

Troubleshooting
The analogue game port is normally integrated on the sound card. In very old models the port's location is fixed at the usual base address 0x201. Older PnP ISA bus cards must be given a helping hand using isapnp. In the case of PCI sound cards a simple kernel module parameter in /etc/conf.modules is usually all that is required:

alias sound char-major-14
alias char-major-14 es1371
options es1371 joystick=0x200

Table 1: Supported joystick systems

In this example the sound driver for the SoundBlaster PCI64/128 (es1371) is informed that the joystick port is located at address 0x200. In such cases the joystick module cannot work correctly without the sound driver, but the sound driver is unloaded automatically when idle. If sound output then fails, the line /sbin/modprobe -r sound; /sbin/modprobe sound should be appended to the boot script (/etc/rc.d/rc.local for Red Hat and Mandrake) in order to make the driver resident.
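That append can be guarded so repeated runs don't stack up duplicate lines. A sketch, with the boot script path (which varies by distribution) passed as an argument:

```shell
#!/bin/sh
# Append the sound-driver reload line to a boot script, but only
# if it isn't there yet. Usage: addreload.sh [boot-script]
RCFILE="${1:-/etc/rc.d/rc.local}"
LINE='/sbin/modprobe -r sound; /sbin/modprobe sound'
if ! grep -qF "$LINE" "$RCFILE" 2>/dev/null; then
    echo "$LINE" >> "$RCFILE"
fi
```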

Gravis GrIP                        joy-gravis.o
Logitech ADI                       joy-logitech.o
Classic PC analogue                joy-analog.o
FPGaming and MadCatz A3D           joy-assassin.o
Microsoft SideWinder               joy-sidewinder.o
ThrustMaster DirectConnect         joy-thrustmaster.o
Creative Labs Blaster              joy-creative.o
PDPI Lightning 4 card              joy-lightning.o
Trident 4DWave                     joy-pci.o
Aureal Vortex gameport             joy-pci.o
Magellan Space Mouse               joy-magellan.o
SpaceTec SpaceOrb 360              joy-spaceorb.o
SpaceBall Avenger                  joy-spaceorb.o
SpaceTec SpaceBall 4000 FLX        joy-spaceball.o
Logitech WingMan Warrior           joy-warrior.o
NES, SNES, PSX, N64, Multi         joy-console.o
Sega, Multi                        joy-db9.o
TurboGraFX interface               joy-turbografx.o

More information on the subject of joysticks and how to set them up under Linux can be found on the developer pages of the Czech programmer Vojtech Pavlik, who (sponsored by SuSE) tirelessly develops Linux drivers for input devices. ■






Welcome once again to Georg's Brave GNU World. I'd like to begin with a small but rather useful project.

Sel
Sel [5] by Thomas Kluge is a nice, easy-to-use file manager for shell scripts. With the help of Sel the user can select file or directory names and execute commands on them. In the case of a simple echo command the selection can be taken from stdout to be used in any way the shell script's author desires. This kind of functionality was impossible to get in shell scripts without rather complicated and error-prone dialog scripts. Thanks to Sel there is now a small – the C source code archive containing manpage and help is 20kB – curses-based tool available. Using the curses library is the only real weakness, as Sel is restricted by the curses limitations. Sel is released under the GNU General Public License, which makes it Free Software, and the author explicitly encourages others to modify it according to their needs. But I'd like to seize this opportunity for a personal remark. Quite often in the past, authors of such small projects didn't approach me because they didn't consider them "worthy." I completely disagree. In my eyes these small projects make up a very significant part of the movement – even if they sometimes fail to get a lot of recognition in the flood of new developments. That's why I think this column is a good place to offer these small projects a forum. But I can't be

everywhere and see everything – I depend on your help. Should you have written such a small project, or stumble upon one on the net, please tell me about it. My email address [1] should be sufficiently widespread by now, I guess.
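The Sel pattern described above – a script capturing the selection from stdout – can be sketched as follows. Sel's actual command-line interface isn't documented here, so a stub function stands in for the real sel binary; replace it with the real invocation.

```shell
#!/bin/sh
# Sketch of driving a script from Sel's output. The sel() function
# below is a stand-in for the real Sel binary (its actual interface
# may differ); it pretends the user picked two files.
sel() {
    printf '%s\n' "notes.txt" "todo.txt"
}

# Capture the selection from stdout, then act on each entry.
sel | while read -r f; do
    echo "selected: $f"
done
```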

LinCVS
LinCVS [6,7] is already significantly bigger. It is a front end for the client of the Concurrent Versions System (CVS). The program has its roots in early 1999, when Tilo Riemer discovered there was no CVS front end under GNU/Linux that satisfied his demands as far as functionality and stability were concerned. So, as with many projects, this one was started to scratch a personal itch. His reference was WinCVS, although it asks the user far too often for information that is already present in the CVS administrative files. Additionally, many front ends are not capable of administering more than a single project. Tilo Riemer was a CVS newbie when starting the project, so his demand was that LinCVS should be extremely easy to use and very stable. The usage of LinCVS is in fact extraordinarily simple. One of its best traits is that the user only has to give information for the import or checkout of projects. What will also benefit many users is that LinCVS is capable of searching entire directory structures for CVS-managed projects, adding them to the workspace automatically – there is no upper limit on the number of projects managed. The separation into CVS and non-CVS managed files is very intuitive, as are the context menus that can be brought up with the right mouse button. Authors of this GPL-licensed project are Tilo Riemer, Wim Delvaux, Helmut Koll and Sven Trogisch. Their biggest problem is currently optimising the auto-detection of changes made outside of LinCVS, because it takes up too much processing power. On their to-do list are also full support for ssh, simplification of the checkout of older versions, easier merging of branches and a WinXX port. Since none of the authors work on LinCVS full-time they are still looking for developers, but mainly for people who'd like to do some documentation. All things considered, I personally like LinCVS a lot. Only the choice of toolkit is unfortunate, as the authors decided to use Qt (which was non-GPL until recently). This and the missing autoconf/automake configurability were the only drawbacks I could find.

Dia
If you have an interest in creating structured diagrams you will probably find Dia [8] useful. The project was originally started by Alexander Larsson but since the end of last year, James Henstridge has been its maintainer. There have of course been many more people involved in the development of Dia, but the list is too long to be given here. The functionality of Dia is comparable to that of the proprietary program "Visio." Its job is to create structured diagrams out of standard objects. It has modes for flowcharts, network and UML diagrams, for instance. But there are also some generic graphical functions to create simple drawings.

The program supports a wide range of output formats such as EPS, SVG, CGM and PNG, and thanks to the win32 port of the GTK+ toolkit, on which it is based, it is also usable under win32. This makes communication problems unlikely. Its extensibility is one of its most positive features. Should a certain type of diagram not be supported yet, new objects can be added via simple XML files, which in most cases doesn't require programming skills. Creating import filters for alien formats is a little more difficult, but the developers are working on making this easier. An import filter for "Visio" diagrams is particularly missed by many users, but since it is a proprietary program with proprietary formats, creating an

The file select dialog for shell scripts.

Colour also dominates the diagrams.

All projects under control with LinCVS.




Object oriented design with UML and Dia.

import filter isn't as trivial as it might seem. This is a perfect illustration of the case in which a user's data is locked up in a data format that he has no control over or access to. The authors don't have the capacity right now to reverse-engineer the Visio format, but on the Free Software Bazaar under the ID 990524A you will find a reward for the person that succeeds in writing a library that can read and write Visio files [9]. As I write this, the reward stands at $3000 US – so if you happen to have a little spare time and want some spare money you might give it a look. Other plans are implementing more types of diagrams and completing the Python binding. Dia also needs more documentation – volunteers are very welcome. If you want more information, the Dia mailing list is probably the right place to look [10]. Since the next two projects are closely related, I will do a joint feature. Although this will be more or less an article about upcoming technologies I still think it should be interesting for most people.

libCIM & PaulA
LibCIM & PaulA [11] are the current prestige projects of the German company ID-PRO – and both of them are distributed in the spirit of Free Software under the GNU General Public License. To make this feature widely understandable, I should probably give a short introduction to the background. CIM stands for Common Information Model and is an object-oriented concept for modelling system information. Its origins are Network

Management Systems (NMS) like Tivoli, OpenView and so on. Their job is in the area of exchanging information within big networks and administering these systems based on the information exchanged. In order to have some way to communicate between different NMSs, a shared protocol was needed. It was out of this demand that the "Web-Based Enterprise Management" (WBEM) initiative was born, which defined the CIM standards as well as their encapsulation in XML and transport via HTTP. The publication of these standards is done by the "Distributed Management Task Force" (DMTF) [12]. So much for the background. In the case of communication between two NMSs, one of them becomes the client and the other the server, the "CIM Object Manager" (CIMOM). Assuming that the client has access to the server, it can read and change system information on it. Since CIM has been designed to allow monitoring of any type or structure of network, it is a little too complicated to be described in detail here. But to understand the principle it is only necessary to know that CIM defines certain base and derived classes. It is the objects instantiated from these classes that are read and manipulated through the server. If this was over your technical head, just trust me on the following: this principle allows you to administer networks without knowing the specifics (hardware/operating system). LibCIM is a Perl library that allows programming with CIM objects. It also has functionality to serialise them to XML and transport them via HTTP. So libCIM can be used for the creation of CIM clients and servers (CIMOMs).



Free projects on commercial web sites: curse or blessing?


PaulA is a server/CIMOM which uses libCIM to implement the CIM objects. One possible use for PaulA would be to administer GNU/Linux systems in a standardised way without having to know about the distribution the different systems run on. This is definitely something that'll make life easier for administrators. The current status is "pre-alpha", so it'll be a while until you can use it on a daily basis. The developers of libCIM & PaulA – Axel Miesen, Volker Moell, Eva Bolten, Friedrich Fox and Marcel Hild – do hope to have a stable production system within a few months, though. This system could then be used to check interoperability with other CIM systems. But at the moment, a lot of providers (the code that does the actual work) still have to be written. This is definitely a very hot area: licences for software packages like Tivoli are extremely expensive and experts on these technologies are rare. Developers interested in participating in this project are very much encouraged to do so. I'd like to use the remaining lines to answer a question that Hans Fuchs has asked, because it does get asked rather regularly.

What to think of Sourceforge?
In an email he sent, Hans Fuchs expressed his concerns about the success of the Sourceforge machines [13], which provide developers of Free Software with cost-free infrastructure like webspace, FTP server, mailing lists or a "" domain. His concerns were based on the fact that the servers are the property of, and run by, VA Linux.

Personally I am someone who encourages a healthy amount of scepticism when it comes to altruism in companies – although companies in the GNU/Linux area do often have the motivation to "give something back." But I don't think we have to rely on altruism. Like other companies in this area, VA Linux lives through and with a lively and healthy development of Free Software. So the Sourceforge service helps the company by strengthening its own foundation. Additionally, it is rather positive for its public image and gives it the chance to keep an eye on many interesting projects. But does this perhaps contain an underlying threat to the community, one that might be countered by mirroring the servers elsewhere? Personally I regard this risk as rather small. First of all, VA Linux has nothing to gain by trying to exploit its access to the projects. The resulting loss of image would be so severe that it would vastly outweigh the gain. And all authors keep local copies of their source code, too (not even mentioning the releases spread all over the net) – which could be developed anywhere. That's why I believe that Sourceforge has its uses and does help the development of Free Software. But it wouldn't hurt to implement this concept in other places and have a less centralised situation.

[1] Send ideas, comments and questions to Brave GNU World <>
[2] Homepage of the GNU Project
[3] Homepage of Georg's Brave GNU World
[4] "We run GNU" initiative
[5] Sel home page
[6] LinCVS home page
[7] LinCVS on Sourceforge
[8] Dia home page http://www.
[9] Free Software Bazaar http: // ar_catoffers.html#graphics
[10] Dia mailing list – subscribe by mail to <>
[11] PaulA home page
[12] Distributed Management Task Force
[13] Sourceforge ■

...and onto the plane
Okay, that's it for this month. In just a few hours I will be sitting on the plane to California. Emails [1] containing possible topics, ideas, questions and project feature suggestions will reach me nonetheless. So don't hesitate. ■

Linux Magazine UK 002  
