


Digital Spy

Dear Linux Magazine Reader,

Christmas will soon be upon us and, with it, the chance to over indulge. Have we all been good girls and boys? Just how does Santa know these days if you are good, or that you’ve been caught looking at a Windows XP site and so will be missed this year? Will Santa bring me a new Sun Blade server? Can someone, anyone please stop all the spam?

I live my life by email. I have a continuous webcam pointed at me. I drive an automatic car because even that is digital – stopped or foot full to the floor (30 mph max because it is a small engined untuned Mini). I listen to music digitally, watch digital news feeds and play digital games. Slowly my life is becoming digital. This is fun at times, but I know many find the amount of information held about someone intrusive and, if correctly datamined and sifted, whole patterns of your life are open to the world.

Buy something with a credit card and what you’ve bought, when and where are all recorded. With enough records we could build up a typical weekly picture of your habits and movements. Deviate from your normal routine and it could be highlighted. Not that it is infallible, otherwise every car would have tax and insurance. It is all a case of who is doing the data mining and for what purpose.

Santa will no doubt be using technology this year to keep checks on people. A quick browse by name brings up many associations, email archives show just what you said to whom and when, and, if cross referenced, just how long it took to send the apology. Mailing lists will not just be harvested by spammers to steal your valid email address but for your comments and views. It is no use trying to correct a webpage when the spiders have already taken a cache copy.

Should this be a problem? After all, we are all honest and, in a cowboy film, would wear white. We use Linux so we must be good. The problem is that things get taken out of context. Misquoting is easy to do and leads to the wrong opinions being formed. Mr W Gates is usually quoted as saying “640K should be enough for anyone”. The quote appears in so many web sites and books that if he did not say it, as he now points out to people, then he should be a little pleased because of the publicity. With Human Resources departments now doing a scan on the name for job applicants, will they have the time to read around any misquote to understand the context?

With the changing digital age will this mean we become more sensitive to what we say online? Even IRC channels now have logging bots, so what was once a free flowing form of communication is occasionally becoming stilted and people think about future consequences before typing.

Will I be misquoted in the future – Yes.
Will anyone take the time to check the context – No.
“I am sorry, I apologise, I meant to say: My mistake.”
There, that should get me out of a few future problems.

Now Santa, about that new Ferrari…

Happy Hacking,

John Southern, Editor

P.S. You will have noticed the cover date of this issue is December 2002 / January 2003. We are not skipping a month but falling into line with the rest of the publishing industry. The date is used in the UK as the date to remove from the newsagents’ shelves and overseas as the sale date.

We pride ourselves on the origins of our publication, which come from the early days of the Linux revolution. Our sister publication in Germany, founded in 1994, was the first Linux magazine in Europe. Since then, our network and expertise has grown and expanded with the Linux community around the world. As a reader of Linux Magazine, you are joining an information network that is dedicated to distributing knowledge and technical expertise. We’re not simply reporting on the Linux and Open Source movement, we’re part of it.

Online Archive
Linux Magazine is proud to announce the availability of our online archive. Under http://www.linux-magazine.com/Readers/Archive you will find a database of Linux Magazine articles in PDF format. Note that in addition to the current articles available online, we will be adding the complete content of back issues as well. New articles are added every week, so check frequently!

Dec 02 / Jan 03



December 2002 / January 2003


20 Software
Windows on a Linux host system normally means VMware – a software product that emulates a complete PC, providing a useful solution with performance to spare. Read on to discover how and where both of the members of this partnership work together, which issues affect the new 3.2 version, and how to deal with them.

Business . . . . . . . . . . . . . . . . . . . . . . . . . 10

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Is your system up to date? Fix the latest problems.

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Zack’s news roundup from the kernel developers.

Letters . . . . . . . . . . . . . . . . . . . . . . . . . . 18

Linux & Windows Intro . . . . . . . . . . . . . 19
Integrate two operating systems and make life simpler.

VMware 3.2 . . . . . . . . . . . . . . . . . . . . . . 20

Win4Lin 4.0 . . . . . . . . . . . . . . . . . . . . . . 24
Win4Lin provides Windows emulation for Linux. We take a look at the new hardware support and features.

32 Windows XP tested
Microsoft, who manufacture and distribute their own unique OS, have been around for quite some time. Their latest version, Windows XP, comes in two flavours, the “Home Edition” and the “Professional Edition”. We felt it was time to find out whether these can work with Linux systems.

United Linux
We take a look at United Linux Release Candidate 2. With specific focus on installation, security features and the components that make up the distributions, we highlight problems encountered and show you what to expect in the future.

Serial Connections . . . . . . . . . . . . . . . . 26
A comprehensive guide to networking with a serial cable between two computers.

Cygwin . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Access your favorite tools while working on Windows.

Windows XP tested . . . . . . . . . . . . . . . . 32

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

Free Standards Group . . . . . . . . . . . . . . 37
A look at the standards everyone wants to be part of.

sed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
The stream editor solves problems in next to no time.

Sylpheed . . . . . . . . . . . . . . . . . . . . . . . . 42
A quick and flexible mail client that helps you manage even the largest amounts of email.

LaTeX Workshop . . . . . . . . . . . . . . . . . . 46
In our LaTeX workshop we take a look at error messages and modifying fonts and images.
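The stream editor listed in the contents is easily tasted; a hedged one-liner of the kind the sed feature typically covers (the sample text here is invented for illustration):

```shell
# Swap one word for another in a text stream -- the classic sed job.
OUT=$(printf 'Windows on every desktop\n' | sed 's/Windows/Linux/')
echo "$OUT"
# -> Linux on every desktop
```

The same s/// idiom scales from one-off shell pipelines to full editing scripts run with sed -f.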






Wouldn’t it be nice to have an answer to spamming? At least it is no big deal to configure a mail server to prevent your system from being misused as a relay station by spammers. Restrict the Mail Transfer Agent’s capabilities and stop being an accomplice to the crime.

Charly’s column . . . . . . . . . . . . . . . . . . . 49
Real System Admin tips and tricks to help you.

OpenSSH: Part III . . . . . . . . . . . . . . . . . . 50
Make cronjobs more secure by adding commands to your OpenSSH keychain.
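The idea behind securing cron jobs this way can be sketched as a forced-command entry in authorized_keys; the key material and script path below are hypothetical stand-ins, and the article’s own recipe may differ:

```shell
# Hypothetical public key and backup job -- substitute your own values.
PUBKEY="ssh-rsa AAAAB3NzaExampleKey cron@backuphost"
JOB="/usr/local/bin/nightly-backup"

# Build the authorized_keys entry: whatever command the client requests,
# sshd runs only $JOB, and the key gets no terminal or forwarding.
ENTRY="command=\"$JOB\",no-pty,no-port-forwarding,no-X11-forwarding $PUBKEY"
echo "$ENTRY"
```

Appended to ~/.ssh/authorized_keys on the server, such an entry lets a passphrase-free cron key do exactly one thing and nothing else.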


. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
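The relay lockdown promised in the sysadmin teaser above boils down, in most MTAs, to a few lines of configuration; a hedged sketch for Postfix (an assumption – the column may well use a different MTA), with hypothetical local networks, in main.cf:

```
# Accept mail coming from our own networks, and mail destined for the
# domains we host; refuse to relay anything else.
mynetworks = 127.0.0.0/8, 192.168.1.0/24
smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination
```

The key line is reject_unauth_destination: mail for domains you are not responsible for is refused unless it came from your own networks.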


Perl Tutorial: Part 7

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

To fit in with the festive season we are going to look at presents, well packages, but still bundles of coding joy.

C Tutorial: Part 13

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

The final part. This time what not to do.


LINUX USER

Kmp3Indexer
Do you use your computer as a jukebox? Are you slowly losing track of your musical treasures as your MP3 collection expands? No matter how thin the line between chaos and genius may be, a well organized index of your favorite MP3s can help you avoid wasting time looking for your music tracks. Kmp3Indexer takes the pain and misery out of creating the lists.

Ktools: Kmp3Indexer . . . . . . . . . . . . . . . 74

DeskTOPia: xmtoolbar . . . . . . . . . . . . . . 76
Add a menu to your desktop or just improve what you have with the Xmtoolbar.

Out of the Box: LinkLint . . . . . . . . . . . . 78
Forget testing website links manually with LinkLint.

Console RPM . . . . . . . . . . . . . . . . . . . . . 81
Using the command line tools from RPM is quicker and gives you more control than the graphical front-ends.

Gambas Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84 A Visual Basic clone for Linux.

dig and DNS . . . . . . . . . . . . . . . . . . . . . . 86
The DNS server translates between user-friendly host names and the more computer-friendly IP addresses. We take a peek behind the scenes and let the dig tool do some spying for us.

Brave GNU World . . . . . . . . . . . . . . . . . 90

The User Group Pages . . . . . . . . . . . . . . 94

Events / Advertiser Index / Call for Papers . . . 96

Subscription CD . . . . . . . . . . . . . . . . . . . 97

Next Month / Contact Info . . . . . . . . . . 98
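A hedged taste of the spying involved – the dig invocations are typical examples rather than the article’s own, and the runnable line falls back on the system resolver so no outside network is assumed:

```shell
# Typical dig queries of the kind the article implies:
#   dig www.linux-magazine.com A   (name to IPv4 address)
#   dig -x 192.0.2.1               (reverse: address back to name)
# The same forward mapping via the system resolver, using the one
# name every machine can resolve:
ADDR=$(getent hosts localhost | awk 'NR==1 {print $1}')
echo "localhost -> $ADDR"
```

dig shows the full DNS conversation (flags, TTLs, authority records), which is exactly the behind-the-scenes detail the stub resolver hides.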

Next Month / Contact Info . . . . . . . . . . . . . . . . . . . . . . . . . 98





Software News

■ Enterprise workflow from Plexus
Recently released, Plexus FloWare claims to be the first industrial-strength workflow engine available for Linux. With the enterprise user in mind, the main design criteria for FloWare – scalability and robustness – are of paramount importance. Initially, Plexus FloWare is to be released on the SuSE version of Linux, with other versions to follow. The current release is designed for the Oracle 9i database, but there are plans

to support MySQL by the first quarter of 2003. FloWare fills the need for an enterprise-strength workflow product, something that is required by the complex businesses running today. Plexus is well placed to fill this need, with their close involvement with industrial associations such as the Workflow Management Coalition and AIIM. ■

■ Balancing act of the gods
For mission-critical web server infrastructure and solutions, load balancing is a key component. Zeus Technologies has a successful track record with such products and has now released Zeus Load Balancer 1.6.

With their minds set on producing ‘deploy-and-forget internet traffic management’, the focus is firmly on

easing the installation and maintaining the simplicity of traffic management. New features for this version include Automatic Session Persistence, Port Mapping and Connection Draining. The inclusion of FTP Load Balancing will greatly help those running file servers. This release also sees updates to the user interface, improvements to the diagnostic tools and a revision of the documentation. “Load Balancing has traditionally been thought of as a hardware application,” said John Paterson, Zeus CEO, “but with the increase in performance and lower price of standard server hardware, load balancing can now be delivered by software, allowing far greater functionality and more flexible deployment options than from a hard-wired, dedicated box.” ■

■ Delegating power with PowerBroker
The recently released PowerBroker v3.0 is now available for download from the Symark website. The product provides the ability to delegate administrative privileges without having to disclose that all-important root password. This means that administrative tasks can be passed to directors and managers without introducing any new security risks. PowerBroker also manages privileges



and access to third-party applications and accounts (e.g. database, CRM, and ERP), including generic accounts. In addition, PowerBroker extends the limited logging capabilities of traditional UNIX systems by providing an indelible audit trail of all accepted and rejected user requests and session I/O to ensure a secure environment as well as user accountability. ■

■ Finding RPMs made easy
A new Linux portal is looking to become the de facto standard for people searching for that elusive RPM. It offers users a convenient search engine geared just

towards locating those convenient package applications and libraries. Packages can be searched for in a variety of ways, including package name, by file name in an individual package, by keyword or dependency, etc. The bugbear of dependencies is resolved by cross references. ■

■ Exchanging MS Exchange
The SCO Volution Messaging Server is a popular e-mail replacement for Microsoft Exchange. Known to be a reliable, easy to use and low cost solution, it provides small-to-medium businesses and replicated sites with secure and robust e-mail and collaboration tools. A significant feature of the SCO Volution Messaging Server is its compatibility with popular mail clients, including full support for Microsoft Outlook products, other Internet mail clients, and Netscape Navigator and Microsoft Internet Explorer browsers. To make the transition from legacy Exchange servers as simple as possible, SCO now offers their Insight Connector product, which runs on those machines still required to use Microsoft Windows. When used in conjunction with SCO’s Volution Messaging Server, the user will find they have access to all of the popular Outlook collaboration functions such as calendaring, shared folders, global address books and meeting room scheduling. ■



■ Trolltech releases Qt 3.1
The new release of the single-source, multiplatform C++ software development toolset sees the introduction of the QMotif integration module. Developers are now able to insert Qt-based code into existing Motif applications in a simple, step by step manner. This allows a gradual transition path for single-platform applications to reach much wider markets by using a variety of different platforms. The multiplatform approach enables IT professionals to spread investments and resources over time. The benefits of this gradual process are reduced down time, ease of development and a lower barrier of the kind that stalls many projects when trying to move to alternative platforms. This, in turn, translates into time and cost savings.

Once your applications are fully migrated to Qt, the flexibility and freedom of platform choice is unrivalled. “One of the biggest selling points to management was the ability to use the same code base across Linux, Windows and Mac,” says Software Engineer Phil Brooks of Olympic Medical. “We ported our basic application to MS Windows in 15 minutes.” “With the QMotif module we are able to migrate the ASTRIX radio planning tool to Qt, without rewriting the entire application,” said Senior Software Engineer Trond Hageseter at Teleplan. “Qt has been instrumental in the improvement of ASTRIX.”

In addition to this, there are also many improvements made to the integration of development tools for Mac OS X, simplifying the multiplatform development process even further. Improvements include integration with the Mac Appearance Manager, anti-aliased text drawing, user settings and greatly improved OpenGL support. With Qt 3.1, Qt now supports Mac OS X v10.2 – code named Jaguar – the newest version of the Mac OS. Qt 3.1 also enables users to implement their own Help Browser based upon Qt’s own Documentation Help Engine, giving Qt users an up-and-running framework into which they can insert their own documentation libraries. Other Qt 3.1 benefits include easier graphical user interface (GUI) design, enhanced documentation assistance, improved multi-threading and hundreds of improvements to the existing class library. ■

■ SQL power from Pervasive
Pervasive has launched their latest product, P.SQL V8 for Linux. This is a completely reconstructed version of its database management system. The new version offers users a completely re-engineered database architecture, with speed improvements of up to 200% in transactional and relational terms. In addition, Pervasive’s other unique selling points are little to no maintenance requirements, ease of installation and an affordable price. Linux continues to be a key platform for Pervasive users because they know that applications running under Linux offer users inherent stability, higher security, very high up-time and availability plus excellent built-in logging capabilities. In some cases, existing Linux users have implemented Pervasive database engines instead of Oracle, because in addition to the huge cost savings, the Pervasive database also offers greater stability. Two of Pervasive’s most well known Linux users are US real estate company Century 21, and the Dutch Ministry of Traffic/Siemens, who are using Pervasive to manage an advanced motorway traffic management system around the city of Amsterdam. ■

■ SuSE Linux Enterprise Server and IBM DB2
IBM DB2 Version 8 database software is used all over the world, with a proven track record in the enterprise market. SuSE is the first Linux distribution to have been validated for use on all of the hardware platforms that DB2 supports, including IBM zSeries mainframe computers. The SuSE Linux Enterprise Server (SLES) has proven itself as a powerful Linux platform for IBM’s DB2 Version 8 database software, as shown by the latest certification of SLES for DB2. The success story of SuSE and its enterprise products builds on the work done validating other distributions, including SLES 7, SLES 8 on Intel 32 and 64 bit, and SLES 7 on zSeries (and SLES 8 on zSeries when available). SuSE Linux assures enterprise customers that they are running database workloads in a tested Linux environment. SLES is a proven enterprise solution and one of the most secure, affordable and reliable operating systems in the world. “IBM’s DB2 database software is a leader in database scalability, reliability, multimedia extensibility, and Web provisions which are needed for the most demanding e-business applications,” said Boris Nalbach, CTO SuSE Linux AG. “This latest validation is proof of the successful co-operation between IBM and SuSE and our determination to deliver Linux-based Enterprise Solutions for the data center.” “IBM is committed to support the efforts of SuSE and UnitedLinux partners to enhance the scalability and reliability of Linux for the enterprise,” said Lauren Flaherty, Vice President of Marketing, IBM Data Management Solutions. “The combination of DB2 and SuSE Linux enables enterprise customers easily to deploy both products right out of the box.” ■





Business News

■ Trustix to burn rubber with Ferrari
Trustix has been chosen by the car manufacturer Ferrari in the UK to integrate additional Linux based solutions into its existing IT infrastructure. Ferrari UK is installing five IBM eServer xSeries systems with two Trustix LAN servers, Trustix Proxy server, Trustix Web server and Trustix Firewall. Ferrari’s initial deployment is replacing an existing Novell network, and with Trustix Firewall they are supplementing an existing Checkpoint firewall installation. This decision builds on their deployment of a Linux based mail server five years ago. The move towards an Open Source solution came at the suggestion of LinuxIT, who were aware of Ferrari’s keen interest in continuing to lower IT costs and increase manageability through Linux based solutions. Chris Rooke, IS Manager at Ferrari UK, commented: “We have been watching developments in Linux with great interest since our initial implementation five years ago. The Trustix and IBM server solution will eliminate the upgrade costs associated with the Novell approach without any loss of functionality. The Trustix Firewall will supplement an existing firewall strategy and give us the ability to cost effectively and easily manage that firewall locally and remotely, which we could not previously do”. Rooke continued: “The graphical interface with the Trustix products is very easy to use and therefore you do not need in-depth Linux experience to use the products on a day to day basis. This is particularly important for us as only one of the team is fully Linux literate”. “Ferrari UK’s migration to the Trustix Linux Solutions is a true reflection of the standard of quality of the products,” noted James Dean, UK Sales Director of Trustix. “We are delighted by Ferrari UK’s investment, and we anticipate that their total cost of ownership savings will be considerable”. ■

■ Professional Open Desktops from LinuxIT
Corporate Linux desktop solutions are now seeing the light of day, offering a viable alternative to Windows fare. LinuxIT has started its Professional Open Desktop initiative to help provide just such an alternative. Designed for organizations with non-technical staff, the Professional Open Desktop looks to provide full interoperability with Microsoft and Unix systems, but at a much lower license and maintenance cost. This program offers all the functionality required for business needs without the vulnerability, exposure and costs associated with the usual desktop alternatives that other companies seek out. Peter Dawes, Sales Director of LinuxIT, commented, “Often, moving to Linux desktops is cheaper than staying with the alternatives. Many organizations have woken up to the true cost of desktop systems, both in licence costs and maintenance.” Coming in two forms, the Professional Open Desktop Standard Edition is a Linux desktop based on LindowsOS, which is optimized to run Microsoft Office applications including Word, Excel and PowerPoint. An Enterprise Edition is more focused on Ximian technology. These result in a package that offers Linux desktop solutions, delivering the best of both worlds without needing a Microsoft Operating System license. ■

■ Strategy & Technology moves to Linux Strategy & Technology Limited, a consultancy for the digital television industry, and responsible for many of the applications used to make such interactive programs as Big Brother and Banzai, is moving to Linux to run their integrated financial, job costing and business management software. They have decided to run with Hansa Financials applications, operating on a Linux server platform, which will also support the company’s mixed clients – Apple Mac, Windows and Unix PCs – across a wide area network. This will enable the staff at Strategy & Technology to access and share real-time information across each of its nine locations.



Explaining the reasons for choosing Hansa ahead of its competitors, Nick Birch, Director of Strategy & Technology, comments, “We decided to host the software on Linux as it is a reliable, cost effective and scalable platform. Not being Microsoft or Windows enthusiasts, we welcomed how efficiently the Linux server operates with little need for attention and how it safeguards our business against virus attacks that are commonly associated with alternative operating system platforms.” “We were looking for an internationally proven product that would allow us to operate a Linux-based server and mixed clients. Hansa Financials is

one of the few business management software products available on the market that can deliver true multiplatform capabilities.” Strategy & Technology is taking advantage of the remote access capabilities of the Hansa software to provide nonaccounting staff with the ability to log on to the system off site in order to input information required for project costing such as time recording and order processing. By allowing every member of staff access to the system, regardless of their location, this will help streamline administration and improve the overall efficiency of the company. ■



■ Linux online recruitment site
Yellowbeak is the new recruitment site dedicated to filling the needs of Linux professionals world-wide, a job market that will surely lose its niche status any day now. Visitors to the site will get the chance to register their details online as part of the free service. From there, jobseekers can be notified of suitable vacancies by email or text message. There is also the chance to browse through the list of positions to see if there is anything the prospective job hunter might need to revise on their CV to be in with a better chance of success. As you would expect, it is the

employers who get to pay for the services, so they can score that most worthwhile goal and find a suitable candidate for the job. “Despite the global recession in IT, finding trained professionals with Linux skills is still difficult” commented Peter Wilson, the Technical Director of Yellowbeak. “The first commercially packaged versions of Linux came to market in 1994. It now accounts for nearly 30% of the market for server operating systems and more than half of all Web servers run Linux. The advantage of the site to advertisers is that it also attracts

■ Interface Solutions to distribute Trustix in UK

■ Union set to see IT done right
The Public and Commercial Services union has launched a unique partnership with an approach to government IT outsourcing that will ensure the latest government tender, the £7.6 billion contract to provide the Inland Revenue’s information technology services, does not end like so many others in failure. The multi-million pound, ten-year contract to provide IT services to the Inland Revenue is the largest government contract in Europe, employing 3,000 workers. The contract has direct implications for millions of people across the UK. Over 70,000 employees in the Inland Revenue service rely on the IT services the new contractor will provide and a further 29 million members of the

public rely on the Inland Revenue’s services. PCS represents members working for both the Inland Revenue and IT service contractor. As key stakeholders in the project, it is vital that the members’ views are considered during the tendering process. In a unique union initiative, PCS has commissioned an independent survey. Today PCS launches the manifesto “Quality, Respect and Partnership” based on the survey’s findings. For the first time those delivering the service will have their say on how it should be run. Surely with such open minded thinking, Linux and Open Source initiatives must have a role to play? ■

■ Sendmail eclipses Sun Recent benchmark tests show that HP ProLiant servers running Sendmail are outperforming Sun platforms. Performance testing indicated that Sendmail software running on two- and four-way HP ProLiant servers with SuSE Linux Enterprise 7 and the Reiser Filesystem was significantly better than Sun servers running Solaris in message throughput. It goes without saying that it beat it on price too, so we won’t tell you about that. “With this proven solution from HP and Sendmail, running Linux on ProLiant servers, MobileCom has now in its hand a highly performing and reliable



candidates with C++ and Java skills – other disciplines where there are skill shortages. On day one of launch we are already advertising positions in the United States, UK and Europe.” ■

email service,” said Mickael Ghossein, Chief Executive Officer of MobileCom, a leading wireless provider in the Middle East. “This solution will allow us to serve our rapidly growing customer base with the innovative services needed to stay ahead of our competition.” ■

Interface Solutions has been chosen to provide Trustix plug and serve Linux based products with IBM eServer xSeries servers to IBM resellers. Trustix, the Norwegian independent software vendor of security and network management solutions for Linux, is a provider of standalone solutions which include a firewall, mail server, web server, LAN server and proxy server. Peter Hammett, Managing Director of Interface Solutions, explained: “The Trustix Linux Solutions make Linux based systems accessible and manageable for all types of organisations, requiring little or no in-house Linux expertise. This, combined with the first rate IBM eServer hardware, is a very attractive proposition for our resellers as it makes an easy to install and maintain solution for their increasingly price sensitive customers.” “We are delighted to work with Interface Solutions as a leading supplier of IBM eServer xSeries solutions in the UK”, said James Dean, UK Sales Director of Trustix. “The new integrated solution from Trustix and IBM, coupled with all the services from the Interface Solutions’ team, will help further reduce the total costs for any of the organizations who want to take advantage of all the benefits of using and implementing the Linux operating system platform”. ■



Insecurity News

■ Bind 8
ISS X-Force has discovered several serious vulnerabilities in the Berkeley Internet Name Domain Server (BIND). BIND is the most common implementation of the DNS (Domain Name Service) protocol, which is used on the vast majority of DNS servers on the Internet. DNS is a vital Internet protocol that maintains a database of easy-to-remember domain names (host names) and their corresponding numerical IP addresses. Circumstantial evidence suggests that the Internet Software Consortium (ISC), maintainers of BIND, were made aware of these issues in mid-October. Distributors of Open Source operating systems, including Debian, were notified of these vulnerabilities via CERT about 12 hours before the release of the advisories on November 12th. This notification did not include any details that allowed us to identify the vulnerable code, much less prepare timely fixes. Unfortunately ISS and the ISC released their security advisories with only descriptions of the vulnerabilities, without any patches. Even though there were no signs that these exploits were known to the black-hat community, and there were no reports of active attacks, such attacks could have been developed in the meantime – with no fixes available. We can all express our regret at the inability of the ironically named Internet Software Consortium to work with the Internet community in handling this problem. Hopefully this will not become a model for dealing with security issues in the future.
■ Debian reference DSA-196-1 bind

■ html2ps
The SuSE Security Team found a vulnerability in html2ps, an HTML to PostScript converter, that insecurely opened files based on unsanitized input. This problem can be exploited when html2ps is installed as a filter within lprng and the attacker has previously gained access to the lp account. These problems have now been fixed in version 1.0b3-1.1 for the current stable distribution (woody), in version 1.0b1-8.1 for the old stable distribution (potato) and in version 1.0b3-2 for the unstable distribution (sid).
■ Debian reference DSA-192-1 html2ps

■ luxman

Security Sources

Debian – List: debian-security-announce; Reference: DSA-… 1)
Debian have integrated current security advisories on their web site. The advisories take the form of HTML pages with links to patches. The security page also contains a note on the mailing list.

Mandrake – List: security-announce; Reference: MDKSA-… 1)
MandrakeSoft run a web site dedicated to security topics. Amongst other things the site contains security advisories and references to mailing lists. The advisories are HTML pages, but there are no links to the patches.

Red Hat – Lists: linux-security and redhat-announce-list; Reference: RHSA-… 1)
Red Hat categorizes security advisories as Errata: under the Errata headline any and all issues for individual Red Hat Linux versions are grouped and discussed. The security advisories take the form of HTML pages with links to patches.

SCO – announce.html; Reference: CSSA-… 1)
You can access the SCO security page via the support area. The advisories are provided in clear text format.

Slackware – List: slackware-security; Reference: slackware-security … 1)
Slackware do not have their own security page, but do offer an archive of the security mailing list.

SuSE – security/, download/updates/; List: suse-security-announce; Reference: suse-security-announce … 1)
There is a link to the security page on the homepage. The security page contains information on the mailing list and advisories in text format. Security patches for individual SuSE Linux versions are marked red on the general update page and comprise a short description of the patched vulnerability.

1) Security mails are available from all the above-mentioned distributions via the reference provided.

■ Kernel


Dec 02 / Jan 03

■ Kerberos A remotely exploitable stack buffer overflow has been found in the Kerberos v4 compatibility administration daemon distributed with the Red Hat Linux krb5 packages. Kerberos is a network authentication system. A stack buffer overflow has been found in the implementation of the Kerberos v4 compatibility administration daemon (kadmind4), which is part of the the MIT krb5 distribution. This vulnerability is present in version 1.2.6 and earlier of the MIT krb5 distribution and can be exploited to gain unauthorized root access to a KDC host. The attacker does not need to authenticate to the daemon to successfully perform this attack. kadmind4 is included in the Kerberos packages in Red Hat Linux 6.2, 7, 7.1, 7.2, 7.3, and 8.0, but by default is not enabled or used. ■ Red Hat reference RHSA-2002:242-06

Security Posture of Major Distributions Distributor Debian

iDEFENSE have reported a vulnerability in LuxMan, a maze game for GNU/Linux, similar to the PacMan arcade game. When successfully exploited a local attacker gains readwrite access to the memory, leading to a local root compromise in many ways, examples of which include scanning the file for fragments of the master password file and modifying kernel memory to remap system calls. This problem has been fixed in version 0.41-17.1 for the current stable distribution (woody) and in version 0.41-19 for the unstable distribution (sid). ■ Debian reference DSA-189-1 luxman

The kernel in Red Hat Linux 7.1, 7.1K, 7.2, 7.3, and 8.0 are vulnerable to a local denial of service attack. Updated packages are available which address this vulnerability, as well as bugs in several drivers.



The Linux kernel handles the basic functions of the operating system. A vulnerability in the Linux kernel has been discovered in which a non-root user can cause the machine to freeze. This kernel addresses the vulnerability. Note: This bug is specific to the x86 architecture kernels only, and does not affect ia64 or other architectures. ■ Red Hat reference RHSA-2002:262-07

■ nss_ldap A buffer overflow vulnerability exists in nss_ldap versions prior to 198. When nss_ldap is configured without a value for the “host” keyword, it attempts to configure itself using SRV records stored in DNS. nss_ldap does not check that the data returned by the DNS query will fit into an internal buffer, thus exposing it to an overflow. A similar issue exists in versions of nss_ldap prior to 199, where nss_ldap does not check that the data returned by the DNS query has not been truncated by the resolver libraries to avoid a buffer overflow. This can make nss_ldap attempt to parse more data than is actually available, making it vulnerable to a read buffer overflow. Finally, a format string bug exists in the logging function of pam_ldap prior to version 144. ■ Mandrake reference MDKSA-2002:075
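A hedged illustration: the “host” keyword in question lives in nss_ldap’s configuration file (commonly /etc/ldap.conf). The values below are hypothetical; pinning a server explicitly avoids the SRV-lookup code path entirely:

```
# /etc/ldap.conf – hypothetical values
host 10.0.0.5
base dc=example,dc=com
```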

■ perl-MailTools A vulnerability has been discovered in the Mail::Mailer Perl module by the SuSE security team during an audit. The vulnerability allows remote attackers to execute arbitrary commands in certain circumstances, due to the use of mailx as the default mailer, a program that allows commands to be embedded in the mail body. This module is used by some autoresponse programs and spam filters which make use of Mail::Mailer. ■ Mandrake reference MDKSA-2002:076

■ Kdenetwork During a security review, the SuSE security team has found two vulnerabilities in the KDE lanbrowsing service.

Figure: The CERT/CC Vulnerability Notes Database

LISa is used to identify CIFS and other servers on the local network, and consists of two main modules: “lisa”, a network daemon, and “reslisa”, a restricted version of the lisa daemon. LISa can be accessed in KDE using the URL type “lan://”, and resLISa using the URL type “rlan://”. LISa will obtain information on the local network by looking for an existing LISa server on other local hosts; if there is one, it retrieves the list of servers from it. If there is no other LISa server, it will scan the network itself. SuSE Linux can be configured to run the lisa daemon at system boot time. The daemon is not started by default, however. The first vulnerability found is a buffer overflow in the lisa daemon, and can be exploited by an attacker on the local network to obtain root privileges on a machine running the lisa daemon. It is not exploitable on a default installation of SuSE Linux, because the lisa daemon is not started by default. The second vulnerability is a buffer overflow in the lan:// URL handler. It can possibly be exploited by remote attackers to gain access to the victim user’s account, for instance by causing the user to follow a bad lan:// link in an HTML document. ■ SuSE reference SuSE-SA:2002:042

■ Traceroute-nanog/nkitb Traceroute is a tool that can be used to track packets in a TCP/IP network, to determine their route or to find broken routers. Traceroute-nanog requires root privileges to open a raw socket. It does not relinquish these privileges after doing so. This allows a malicious user to gain root access by exploiting a buffer overflow at a later point. For all SuSE products prior to 8.1, the traceroute package contains the NANOG implementation. This package is installed by default. Starting with 8.1, SuSE Linux contains a traceroute program rewritten by Olaf Kirch that no longer requires root privileges. This version of traceroute is not vulnerable. As a workaround you can remove the setuid bit, or allow only trusted users to execute traceroute-nanog. Become root and add the following line to /etc/permissions.local: “/usr/sbin/traceroute root.trusted 4750”. This line keeps the setuid root bit for /usr/sbin/traceroute and allows only users in the group trusted to execute the binary. To make the permission change permanent you have to run chkstat(8): “chkstat -set /etc/permissions.local”. ■ SuSE reference SuSE-SA:2002:043
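The mode in that permissions.local line can be tried out safely on a scratch file first; 4750 keeps the setuid bit while denying execution to anyone outside the owning group (the real target being /usr/sbin/traceroute, and chkstat(8) being SuSE-specific):

```shell
# Demonstrate mode 4750 (setuid + rwxr-x---) on a throwaway file:
f=$(mktemp)
chmod 4750 "$f"
stat -c '%a' "$f"    # GNU stat prints the octal mode, special bits included
rm -f "$f"
```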

■ Apache Apache version 2.0.42 allows remote attackers to obtain the source of CGI scripts that are stored in locations for which both CGI and WebDAV are enabled. When a POST request is sent to a CGI script on an affected server, the source code of the script is returned to the attacker instead of being executed. ■ CERT reference VU#910713
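Until patched packages arrive, one hedged mitigation is to make sure WebDAV is simply never enabled where CGI execution is; with mod_dav this is a one-line directive (the path here is illustrative):

```
# httpd.conf fragment – hypothetical path
<Directory "/srv/www/cgi-bin">
    Options +ExecCGI
    Dav Off
</Directory>
```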





Zack’s Kernel News

Chilly rivals The procps system monitoring project has been forked. Rik van Riel and Albert D. Cahalan have been maintaining separate versions for a while now. This recently came up on the Linux kernel mailing list, when someone noticed that two completely different versions, 3.0.1 and 2.0.10, had been announced, with sources available on two different web sites. This is not the first time a free software project has forked, but usually, as in the case of libc (glibc), gcc (egcs), and emacs (xemacs etc.), the forked project chooses a new name for its new path. According to Albert, he, Craig Small and Jim Warner have been maintaining procps for several years, while Rik only recently picked it up; Rik, for his part, appears to have felt that the project was unmaintained. In particular, Albert’s version of procps still does not support the newer interfaces in 2.5 kernels, while Rik’s version addresses those new interfaces and is intended to provide compatibility with newer kernels. One might argue that Rik’s work is a hostile takeover of already maintained code, for reasons that have not come to light; except that Rik himself acknowledges the merit of Albert’s code as a general rewrite and improvement of the previous code-base, while his own version is merely intended to support the latest kernels. Al Viro is less diplomatic, saying that Rik’s code has taste, and that he won’t work on anything having to do with Albert’s version. The final result of this conflict is still unknown. One possibility is that we will all get to see just how long two competing projects can keep the same project name before one of them finally wins out. ■

Changing license The BitKeeper free license continues to change. Larry McVoy has now decreed that no one working on competing projects may use BitKeeper’s free license. Ben Collins, of the Subversion version control system project, found his license retroactively terminated as a result of these changes. There is some suggestion that, as virtually all Linux distributions support BitKeeper alternatives, no one working on those distributions has the right to use BitKeeper for free. It could also turn out that kernel developers using BitKeeper would forfeit their licenses if they participated in mailing list discussions aimed at helping design a free alternative. Since such discussions do occasionally come up, this is likely to be a recurring issue. In spite of the license changes, and (in the opinion of some developers) Larry’s increasingly arbitrary way of interpreting the text of those changes,



Linus Torvalds and others do continue to use BitKeeper as a central tool in kernel development. Larry and others, including developers who are quite critical of BitKeeper, estimate that a proper BitKeeper replacement will take years to develop. One of the reasons for this is that BitKeeper was designed for massive scalability, using the concept of distributed repositories, from which developers may easily push and pull from and to other repositories. Subversion, the nearest free competitor, still relies on the CVS-like concept of a central server housing the main repository, with each developer using a client to check changes in and out of that repository. While Subversion is a marked improvement over CVS, it would need significant changes in design, and further feature development, before it could be considered as an actual replacement for BitKeeper. ■

INFO The kernel mailing list comprises the core of Linux development activities. Traffic volumes are immense, and keeping up to date with the entire scope of development is a virtually impossible task for one person. One of the few brave souls who take on this impossible task is Zack Brown. Our regular monthly column keeps you up to date on the latest discussions and decisions, selected and summarized by Zack. Zack has been publishing a weekly digest, Kernel Traffic, for several years now; even reading just the digest is a time-consuming task. Linux Magazine now provides you with the quintessence of Linux kernel activities straight from the horse’s mouth.

Freezing Kernels A kernel feature freeze has been promised for months, with October 31 as the promised deadline. That deadline has passed, and while there is no objective measure for such things, it does seem as though this freeze has been taken seriously, and development will shift gears toward stabilizing for the 2.6 stable series. In the mad crush of developers wanting to get their features included before the cut-off date, it has become clear that even after the freeze, new features may be allowed in under certain conditions. For instance, features that are very modular and not likely to interfere with anything else may be considered for inclusion for a while. Whatever its form, the process of moving from a development mode to a stability and production mode is very important. One of the great unsolved problems of free software is how to make this migration in a timely fashion. In the past, several years have gone by between stable kernel series; and the kernel has not been the only project to manifest this condition. Virtually all large free software projects, and many smaller ones, exhibit the same timing behavior. This has been recognized as a problem for a long time, but the solution has proven elusive. If kernel development


does manage to shift gears, and bring out a new stable tree of the kernel in a reasonable amount of time, it will be a major achievement in the evolution of the development process itself. The great discovery made by Linus in 1991 was that a software project encouraging contributions from all willing participants, regardless of technical skill, could thrive and in fact surpass the work of small expert teams working in seclusion. While this mode of development resulted in tremendous feature-sets, such a large group of developers proved difficult to coordinate into producing a final product. When one feature was stabilizing, another was just getting started. Waiting for the new one to stabilize would give the older one a chance to start exploring new, less stable territory. Accepting code from one group while insisting on freezes for others caused anger and bitterness. This problem still confronts most free software projects; and the progress of the 2.5 kernel toward a 2.6.0 release will reveal just how far these questions have been answered. ■

Calming IDE The state of IDE and IDE development continues to change. Recently the maintainer, Andre Hedrick, decided to share the IDE maintainership with Bartlomiej Zolnierkiewicz. Back in the days when Martin Dalecki had taken over IDE and implemented massive and, in some cases, long overdue cleanups, Bartlomiej had been of invaluable assistance. Toward the end they had a slight falling out, and Bartlomiej decided to maintain his own version of the IDE layer. Shortly thereafter, Martin gave up the fight, and Andre was reinstated as the primary IDE maintainer. Andre has always been a bit of a loose cannon, demanding absolute confidence from everyone around him, and insulting all those who dared suggest he might be wrong on any technical issue. Since his return as IDE maintainer, other developers (notably Alexander Viro) have been coaching him on

etiquette, and he himself insists that he’s changed, and that he is no longer the bombastic flamethrower he had been in the past. So far this does appear to be true. Andre is not the only kernel developer with a temper, and indeed, being easy to work with is not a stated requirement of any developer. Some people have blamed the poor state of IDE on Andre’s former irritability; but Linus Torvalds has pointed out that other developers, notably Al Viro, are notorious firebreathers, without any negative impact on their subsystems. Rather, Linus has suggested that the IDE house of cards was enough to drive any developer insane. Whatever the case, and in spite of the earlier flamewars surrounding maintainership, it does seem as though future IDE development will be relatively peaceful. Here’s hoping. ■

Sunny scripts Andrew Morton decided one day to publish the scripts which he uses to produce and manage all of his kernel patches. As he describes them, “These scripts are designed for managing a ‘stack’ of patches against a rapidly changing base tree”. This can be said to be the holy grail of free software development, as any well-maintained project will involve developers trying to manage a stack of their changes, against a constantly changing code-base. This is sometimes done with a version control system like Subversion, but in the case of the kernel, which uses a version control system many developers disapprove of and refuse to use, doing one’s own patch management becomes the only option. And with such a rapidly changing codebase, a set of scripts like those developed by Andrew become even more useful. Of course, most kernel contributors do not actually need such sophisticated tools. For small fixes and new features likely to be accepted right away into the tree, there may not be much patch management required. Developers like Andrew and others often maintain large sets of very invasive patches, some of


which they have written themselves, while some they have adopted from other developers. Keeping these larger patches organized and up to date is essential if they are to have any hope of making it into the main tree before going out of date. Best of all, Andrew’s scripts are generic, and can be used by developers working on any project. ■
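The underlying idea can be sketched in a few lines of shell. This is a toy reconstruction of the patch-stack workflow, not Andrew’s actual scripts: hold each change as a patch against the base tree, and reapply the series onto any fresh copy of that tree.

```shell
# Toy patch stack: one patch held against a base tree, reapplied onto a fresh copy.
set -e
work=$(mktemp -d); cd "$work"
mkdir base
echo "line 1" > base/file.txt
cp -r base patched
echo "line 2" >> patched/file.txt
diff -ru base patched > 01-add-line2.patch || true   # diff exits 1 when trees differ
cp -r base rebuilt
patch -d rebuilt -p1 < 01-add-line2.patch            # strip 'base/' prefix, apply inside rebuilt
grep "line 2" rebuilt/file.txt && echo "stack applied"
```

When the base tree changes, each patch in the series either still applies cleanly or fails loudly at its own step, which is exactly the property that makes a stack easier to maintain than one monolithic diff.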

Rising systems The ioctl interface may be going the way of Didus ineptus (the dodo), in favor of a new filesystem interface that seems to be much cleaner. IO control functions have been part of UNIX since the early days, and allow drivers and modules to define new functions that may be called by other parts of the system. Since there may be an arbitrary number of drivers and modules, the number of ioctls has increased without bound. Furthermore, because each is defined in the particular driver that implements it, any kind of comprehensive documentation has proved virtually impossible to write. Until recently the interface had to be tolerated by all, because there was no better way for other parts of the system to communicate with any particular driver. With the acceptance of libfs into the 2.5 kernel sources, however, that may all be a thing of the past. By creating filesystems on the fly in RAM, drivers may respond to changes to those filesystems in much the same way as they responded to ioctl calls in the past. It is similar to adjusting system parameters by modifying values in the /proc directory; except this interface is entirely generic and does not require building endless waste into the core kernel itself. Designing these RAM-based filesystems is apparently quite easy; and any developers submitting patches implementing new ioctl calls are now being told to resubmit their patches using the RAMFS approach. It appears that one of the oldest and most hated legacy UNIX features may finally be relegated to the status of a deprecated interface. ■
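The flavor of this file-as-interface idea is easy to see from userspace, where /proc already works this way today:

```shell
# Reading a kernel value is just reading a file:
cat /proc/sys/kernel/ostype
# Writing works symmetrically (as root), e.g.:
#   echo 16384 > /proc/sys/kernel/threads-max
```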





Letters to the editor

Write Access Audio Ordeal Q I have lecture notes on audio cassette which I want to put on CD, but I cannot figure out how to record from the ‘line-in’ input on my sound card, though I have managed to make a recording from the ‘mic’ input. What am I doing wrong? This is with SuSE 8.1 and a Guillemot MaxiSound Fortissimo soundcard. Danial Hand, by email A While there is lots of development of sound applications for Linux, much of the basic support does seem, at first glance, to be missing if you want to do anything other than play audio. Assuming that you have the soundcard configured with the correct drivers, something SuSE is quite good at of late, you probably just haven’t set the correct mixer controls, both for levels and for the recording source. If you start alsamixer in a terminal, you should find controls for ‘Line’, but you may have to scroll off to the right of that screen to see them all. You will need to select this as a recording source by pressing the space bar. Depending on your card, you may also have a ‘Capture’ control, which might also need to be set as a source. There is more information available online, with the best place to start being ■
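The same mixer settings can be scripted with amixer from the alsa-utils package. Control names vary from card to card, so treat this as a hypothetical sketch (run amixer scontrols to see what your card actually offers):

```shell
# Raise the Line level and flag it as the capture source; the control
# names 'Line' and 'Capture' are card-specific assumptions here:
amixer sset 'Line' 80% cap
amixer sset 'Capture' 80% cap
```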

Library Card Q I often have problems with library dependencies when trying to compile a more recent version of an application. How can I see what libraries are supposed to be associated with an executable file? I have found that, by creating symbolic links for some libraries I can get around compilation errors. This is probably a very dirty hack, which is why I only subject my test/tinkering system to it. Jonathon Hoskins, Devon



A The command ldd might be what you are looking for. This will give you details of all the shared library dependencies for a piece of code, be it an executable or another library. With the verbose switch you will also get details of which version of those libraries you have.

colin@work:~> ldd -v /opt/kde3/bin/kmail
        … => /opt/kde3/lib/… (0x40014000)
        ...
Version information:
/opt/kde3/bin/kmail:
        … (GLIBC_2.0) => /lib/…
        … (GLIBCPP_3.2) => /usr/lib/…
        … (CXXABI_1.2) => /usr/lib/…
        … (GLIBC_2.1) => /lib/…
        … (GLIBC_2.1.3) => /lib/…
        … (GLIBC_2.0) => /lib/…
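As a hedged aside, the same dependency list can also be pulled straight from the ELF headers; objdump (from binutils, if installed) shows just the DT_NEEDED entries without resolving them:

```shell
# Shared-library dependencies of a binary, two ways (using ls as a stand-in):
bin=$(command -v ls)
ldd "$bin"
objdump -p "$bin" 2>/dev/null | grep NEEDED || true   # binutils may not be installed
```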

XP Format Q I have tried to install Linux on my laptop and dual boot it with XP, which is currently installed. It says that it will need to format the entire hard disk. Is there any way to avoid this, as backing up my XP installation and reinstalling will just be too much trouble? John Monaghan, Ardee A If it is the factory installation of XP, then it is almost certain that the hard disk will have been partitioned with NTFS, the file system used by XP by default. Unfortunately, there are no Linux distributions that currently allow you to resize this type of file system in order to make space for your new Linux installation. There is third party software which will allow you to resize NTFS partitions; Partition Magic 8.0 is the most popular. Should you decide to go for the reinstall, then you have more options. You would want to reinstall XP on the machine first, but make sure it uses the FAT32 file system. This file system is resizable, so, when you come to do your Linux installation, the partition can be resized automatically. If you decide that the hassle element is just too much, you still have the option of running something like Knoppix, a Linux distribution that runs from a CD, so you don’t need to create that dedicated partition. As an alternative you could even consider a UMSDOS Linux, a Linux distribution that you install almost like a Windows application. Neither is as good as a full Linux install though, so I hope you do make that extra effort. ■

ERRATA Yes, there was an error in last month’s article on M4. In the second Listing 2 on page 40 the line _link(</p> really should read _elink (</p> Thanks to all those who pointed this out, even those who pointed it out more than once.

Linux & Windows Intro


Interaction between Linux and Windows Computers

Connecting Two Worlds Linux is extremely communicative by nature – after all it is part of the Unix family, which expects no less. But Windows networks have their own rules. Fortunately, there are some software applications that can help integrate the two platforms. BY HANS-GEORG ESSER


This issue’s cover story is not about Samba for a change, although Samba normally predominates in articles on this topic. Instead, we will be looking at several different aspects of being able to use both Linux and Windows at the same time. Having more than one computer is not always an option on the desktop. Having to reboot the computer takes too long just for some extra functionality or task. Even when you do have more than one machine, you may find yourself needing to connect to and use other operating systems, which are available on the network, from your favorite machine. It might even be as simple as the need to connect two computers and transfer data and files without the usual Ethernet network being available. We explain all this and show you how your collection of computer operating systems can live in harmony. Having all the functionality of

Linux while working on a Windows machine may just save you time and effort in that busy schedule. Being able to run other operating systems while still in Linux will save you from a full systems crash: • The idea of having two operating systems on one computer without having to reboot sounds very appealing to most users – that is why we have regularly looked at emulators for Windows in the past. In the current issue we will be looking at the new versions of VMware Workstation 3.2 (page 20) and Win4Lin 4.0 (page 24). These have both advanced in features and ease of use since we last considered them. We let you make an informed choice. • If a second computer is available at home or in the office, and one of these machines

still runs on Windows while the other has now successfully moved to Linux, there still may be some data transfer issues to resolve. You do not automatically need an Ethernet or a Wireless LAN for this task – a simple null modem cable will also do the trick, provided both computers have serial ports (page 26). • Finally, we will discuss an X server for Windows that will allow you to run XEmacs, xterm, or your favorite X application just as easily on Windows as you do on Linux. This involves running the actual program on a Linux computer, but allowing your computer to make the most of the office Linux server (page 30). ■

Cover Story VMware 3.2 ...............................20 With the release of the new version 3.2, we explore what issues are affected and how to deal with them.

Win4Lin 4.0 ..............................24 Version 4.0 of Win4Lin provides Windows emulation for Linux. We take a look at the hardware support.

Serial Connections...............26 Networking is one of Linux’s stronger points. Even without a network, a serial link can solve problems.

Cygwin .........................................30 Being condemned to work with Windows does not mean you have to do without your favorite tools. XFree86 is possible.




VMware 3.2


What do you do if you’ve become so attached to your Linux system that you can no longer live without it, but still need to run applications like Microsoft Office or Adobe Photoshop? In this case VMware, an application which emulates a PC, provides a mostly useful solution with performance to spare. The performance of virtual machines normally scales well with the hardware of the host computer. The only issue with hardware emulation is the fact that there is no support whatsoever for any kind of direct hardware access – the guest operating system sees only the PC emulation. For graphics fans this means doing without maximum application performance, specifically in areas such as CAD, rendering, and of course games. And VMware’s known difficulties with sound and CD writers unfortunately live on in the new 3.2 version, meaning that access to both device types by the guest system is extremely restricted – for example, microphone or CD recording still will not work.

VMware Workstation 3.2

The Tenant Windows on a Linux host normally means VMware. Read on to discover how and where the members of this more or less involuntary partnership meet, what issues affect the new 3.2 version, and how to deal with them. BY FREDERIK BIJLSMA

Installation in Several Steps You can download the VMware archive from the Internet [1] in tar or rpm format, and try it out for 30 days (it is actually included on the subscription CD). If you intend to carry on using the software, a registration fee is applicable (see Box “Keys and Licences”). For RPM based distributions simply use the following command:

rpm -ivh VMware-workstation-3.2.0-2230.i386.rpm


to install the software, and then run the script to complete the basic configuration. If the supplied VMware kernel modules do not match the installed Linux kernel, you will need


Frederik Bijlsma is a student of economics. He lives near Berlin and has been running his own Linux company as a consultant since 2001. You can contact him at


the GCC compiler, and the header files appropriate to the kernel on your computer. This will allow VMware to build suitable modules (Figure 1), ensuring that VMware is network aware, and will run a start script of its own whenever you start your system. However, the quality of the code for the new kernel modules is debatable – numerous warnings are issued when you attempt to compile them. Within the bounds of our test this did not noticeably affect any major distributions, but on a pre-release version of Red Hat Linux 8.0, VMware 3.2 crashed on “Power Off” and “Suspend”.

Figure 1: Building kernel modules with VMware 3.2

Welcome, Guests!

Figure 2: Starting the Configuration Wizard

Figure 3: Each virtual machine has its own directory

The configuration script then asks you to decide whether your virtual machines should access the network “like real PCs”. You can use the default settings for prompts of this type, as well as when asked whether the virtual machine should be allowed access to the host file system, and what kind of access this should be. At last you can launch the software by typing the vmware & command. But this is not really worthwhile until you have obtained a registration or evaluation key (see Box “Keys and Licences”).

GLOSSARY Virtual Machine: A virtual machine fools the guest operating system into believing that it is running on a physical computer. In VMware’s case, this means providing an environment composed of emulated PC standard components, allowing you to assign attributes such as memory size, peripheral devices, and hard disks within the bounds of the physical hardware specification. Host Computer: The VMware host computer can host multiple virtual machines. In other words, the VMware application runs on this computer system. RPM based distribution: A Linux distribution that uses rpm (the “Red Hat Package Manager”) to manage installed software, and for installation tasks. Mandrake, Red Hat, and SuSE Linux are all RPM based. Header file: A file containing an interface description for a function (mainly in C or C++), which does not contain the applicable code (the implementation). If you intend to link a program against external libraries, you will need the corresponding header files.

You can now use the Configuration Wizard (Figures 2 to 4) to configure the environment for the guest operating system. This practical tool is launched whenever you want to create a new virtual PC, and allows you to define basic characteristics, such as:
• the guest operating system type,
• the size of the virtual hard disk(s),
• access to CD ROM and floppy drives, or
• the network interface type.
Older versions of the Wizard also allowed you to define the memory size available to the guest – of course this cannot be larger than the amount of physical memory on the host. VMware configures this setting automatically in version 3.2, but you can still change the preset by accessing Settings / Configuration Editor… / Memory. After the Wizard has finished, you can insert a bootable CD ROM into your CD drive and click the “Power On” button to start the virtual machine. Windows (or a second Linux version) will find an environment that emulates a stand-alone PC. The guest operating system will thus not be aware of the pre-installed Linux version (except on the network, and only if so desired). You can launch the Wizard at any time later, via File / Wizard…, to install a new virtual machine or overwrite an existing configuration. If you then restart VMware, the emulator prompts you to specify the configuration to be loaded (Figure 2). As this selection occurs on the basis of file names, you should choose easily recognizable names – VMware sensibly suggests the name of the guest operating system.


to maximize VMware’s graphical potential is included. • you can synchronize the clock on the guest operating system with that of the host clock. To profit from these features, you need to have administrative rights for the Windows operating system. Clicking on the Settings / VMware Tools Install… in the VMware menu bar automatically launches the Windows installation. You can later customize the screen resolution in Windows and select a color palette with more than 256 colors.

Virtual Hard Disks… VMware basically supports two options for storing the guest operating system’s data. You can use an existing partition on your hard disk. This is referred to as Raw Disk mode. The advantages are self-evident: the guest OS can use a Windows

Keys and Licences The VMware sources are not available and the program is commercially licensed. You can apply for a 30 day evaluation key free of charge from vmwarestore/newstore/wkst_eval_login.jsp (you will be asked to supply a lot of personal details and the key will be mailed to you). An unlimited key costs US $299. Owners of VMware 2.x can upgrade to the current version for a mere US $149. The boxed product costs US $30 extra in both cases. Updates are often available at reduced rates shortly after a product is introduced, so it might be worthwhile visiting the web site. The licence key is queried by VMware when you launch the program and saved in the hidden .vmware directory below your home directory. Other users will thus require their own keys to access the VMware installation.

More Drivers Even though the Windows guest operating system may correctly identify the devices on your system, you still need the VMware toolbox to provide drivers for the guest system. This collection, known as VMware Tools, impresses for the following reasons:
• it is possible to exchange data between the host and guest operating system via a common clipboard. This allows you to use Cut&Paste to copy text from a Microsoft Office application to a Linux application (Figure 5).
• an SVGA driver that allows you

Dec 02 / Jan 03



VMware 3.2

partition mounted on Linux. If you have a dual-boot installation and select Windows 2000 or Linux when you boot your system, you can allow VMware to launch your Windows version on Linux, if required. You will need to create different hardware profiles to do so: one for the hardware of your host computer, and another for the standardized VMware hardware. The main disadvantage of raw disks is the need to partition your hard disk, and the accompanying loss of flexibility. Additionally, each Linux user wanting to use the raw disks needs to be added to the Linux disk group. If the advantages do not outweigh these disadvantages, you might prefer a different method of accessing your hard disks: virtual disks. This actually means “hard disks in file format”, or hard disks including virtual hardware information stored in files. VMware can create files of this type on any operating system. Drive image files can also be accessed via the network, and the user even has the option of transferring them between different VMware installations. This also works if the host operating system happens to be Microsoft based. However, performance suffers due to the double management load (the Linux file system plus the hard disk emulation). Applications with heavy hard disk access are better off on raw disks.
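The group requirement mentioned above can be checked from a shell before firing up VMware. This is a generic sketch of our own, not from the article; the usermod invocation in the comment is the standard tool for granting the membership (run as root):

```shell
# Is the current user already in the "disk" group that raw disk mode
# requires? (To add a user, as root: usermod -a -G disk USERNAME)
if id -nG | tr ' ' '\n' | grep -qx disk; then
    echo "raw disk access should work"
else
    echo "add the user to the disk group first"
fi
```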

…and Fake CD ROM Drives For test purposes, and to appease applications that insist on you inserting a CD ROM, VMware offers you the option of fooling the guest OS by simulating a CD ROM drive. To do so, you first create an ISO image on Linux:

dd if=/dev/cdrom of=cdrom-image1.iso

This command reads the CD in your drive and creates the image cdrom-image1.iso in the current directory. In VMware you first load the configuration file for the guest system and then select Settings / Configuration Editor in the menu. To access an image as if it were a CD ROM, select IDE Drives and choose a device you have not yet installed. Assign the Device Type CD-ROM Image and specify a file name for the image file. Click on Install to make the drive available the next time you start the guest system. You can assign a new image to the virtual drive at runtime – very much like changing a CD. To do so, click on Devices / Name_of_CD-ROM-drive / Disconnect and Edit, which will take you to the dialog box used for setting up a new drive.

Figure 4: Choosing a Guest Operating System

Windows Network
With the exception of the clipboard, the network is the only way to exchange data with the host operating system, and thence with the outside world, on VMware. You might like to use a Samba server for this purpose, as automatically created by the tool in the configuration phase. VMware additionally offers a DHCP server in its private network, and this server is used to supply any information the virtual machines need (such as IP addresses). The configuration files for the server can be found under /etc/vmware/vmnet*/dhcpd/dhcpd.conf. A DHCP server is added for each network connection you configure.

The emulator does have some difficulty with increasing network loads in NAT mode. If a large number of connections are active, the CPU may become so overloaded that the whole PC, including the host operating system, grinds to a halt. This is something the manufacturer will have to resolve. However, the NAT daemon in version 3.2 does create a much better impression than its predecessor. In fact, in our test environment we did not manage to open enough NAT connections to force VMware to its knees, in contrast to our VMware 3.1 test, where we had to reset the computer.



Dual boot installation: Dual boot refers to the case where two operating systems are installed in two separate partitions on one computer. The operating systems can be booted alternately, and both access their own hard disk space. A bootloader, such as GRUB or LILO, takes care of the OS selection process when you start your system.
ISO image: You can use

mount -o ro,loop cdrom-image1.iso /mnt/cdrom

to mount an image file that mirrors the contents of a CD, exactly as if you had the external medium, placing it in the Linux file system. In our example the data from cdrom-image1.iso is available in /mnt/cdrom.
NAT daemon: A VMware tool that redirects network traffic generated by the virtual machines using masquerading (a special type of NAT, “Network Address Translation”). That means adding the IP address of the host operating system to the network packets without exposing the guest operating system’s IP externally.
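The create-and-inspect workflow – dd to pull the image, then a loop mount to read it – can be exercised without a CD drive or root privileges by substituting an ordinary file for the block device; fake-cdrom.bin here is a made-up stand-in for /dev/cdrom:

```shell
# Hypothetical stand-in for /dev/cdrom so the sketch runs anywhere:
printf 'pretend ISO9660 data' > fake-cdrom.bin

# Same dd invocation as in the article, reading the stand-in device:
dd if=fake-cdrom.bin of=cdrom-image1.iso bs=512 2>/dev/null

# The image is a byte-for-byte copy of the source medium:
cmp -s fake-cdrom.bin cdrom-image1.iso && echo "faithful copy"

# With a real image you would now loop-mount it (root required):
#   mount -o ro,loop cdrom-image1.iso /mnt/cdrom
```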


Figure 5: Drag&Drop Between Two Platforms

On Windows 2000 and ME you might occasionally notice that network access is no longer available after a while. The network drivers are at fault here, as they attempt to discover the link status – that is, the connection status of the network cable – a feature Microsoft refers to as “Media Sensing”. Since a virtual network like the one created by VMware does not actually use cables, the driver will tend to crash at this point. The solution to this problem is described on the VMware Inc. support pages [2]: “Disable Media Sensing!”. To do so on Windows ME, open the network configuration tool in the system tools area, select Properties in the drop-down menu for the “TCP/IP -> AMD PCNET adapter” icon, and uncheck the checkbox Sense network media. For Windows 2000 you will need to edit the registry as described in Microsoft Knowledge Base article Q239924.
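For reference, the Media Sensing fix in that Knowledge Base article boils down to a single registry value. The fragment below is reconstructed from memory of Q239924 rather than quoted from it, so double-check the article before importing it:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"DisableDHCPMediaSense"=dword:00000001
```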

Fit for Daily Use Besides pre-compiled modules for some newer kernels, and bugfixes, the new 3.2 version of VMware Workstation does not have a lot of new features to offer your Linux host computer. In our test environment, we had three guest operating systems running side-by-side without bringing the 1200 MHz Athlon processor to its knees. We had no problems working with standard Windows applications at the same time. VMware’s strength is mainly derived from the Linux host environment. After a while you will really appreciate being able to carry on editing your email while resetting a machine with a Windows bluescreen. ■

INFO
[1] VMware, Inc. web site
[2] VMware support: ws32ts_networking2.html#1015316


Win4Lin 4.0

It doesn’t always have to be VMware

VMware Alternative Win4Lin is another product that provides Windows emulation for Linux. The developers assure us that several known issues in the 3.0 version have been resolved in the latest 4.0 version. And as 4.0 offers enhanced hardware support, we decided to take a closer look at Netraverse’s latest coup. BY ANDREAS REITMAIER AND THOMAS DRILLING


Windows emulators for Linux are all the rage, either as hardware emulators that provide virtual PC machines, such as VMware, or as projects like WINE. WINE emulates the major Windows libraries, allowing you to run Windows programs natively on Linux. Emulators like VMware offer a complete virtual PC, including a BIOS, and support Windows desktops, and almost any other guest operating system, on a Linux host platform. Win4Lin is a kind of symbiosis of the two and uses a customized kernel to handle the requirements of the Windows guest OS.

INFO
Manufacturer: Netraverse
Price: £57 (software only), £67 (CD ROM and Installation Guide)
Requirements: x86 CPU, kernel 2.4 based distribution
Available from:



This allows Win4Lin to run Windows applications directly in a KDE window without having to resort to the virtual environment of a guest PC. However, older versions of Win4Lin tended to be so difficult to install and configure that they were of little practical use. Happily, this is no longer the case, and this should open up a new user market for the new version. As Netraverse are asking quite a lot of money for their product, users have every right to demand a fully functional software package.

Chaotic Installation? As we noted in earlier tests with version 3, the Win4Lin developers obviously put a lot of effort into improving the installation routine for version 4 in order to make it more easily accessible to Linux converts. And this only makes sense, as users moving from Windows to Linux are a major target group for Win4Lin. So what’s new: Netraverse have obviously enhanced the latest installation routine considerably. Although the installation procedure still requires three steps, users no longer need to worry about where to go next. Let’s look at the installation procedure: • In step one the required packages, the customized kernel and the Win4Lin application itself, are installed. The boot loader is configured to use Win4Lin and you are prompted to restart your system.

• The second step involves configuring system defaults, and also requires you to copy the required Windows sources (CAB files) from an original Windows CD. • Finally, in step three, Windows is installed and configured in the current userspace. Although root privileges are required for the first two steps of the installation, you can use a normal user account for step three. This allows any user to create a customized Windows environment – which makes Win4Lin more sophisticated than the official Windows version. After typing the serial number, the required files are copied to your hard disk. The steps for installing the kernel and configuring the boot loader are not performed automatically, and unfortunately require the user to confirm each individual step. Happily, the whole procedure does not take too long.



Figure 1: Ready made packages are available for most of the major Linux distributions.

After rebooting, the installation program needs to be re-launched. You might like to insert the Windows CD before re-launching the installer. Use a normal user account for the installation and configuration tasks – you will need to launch the same installation script from the command-line to do so. The installer first checks the system environment, in order to avoid additional manual steps. The installation procedure is far more user-friendly, although it still involves numerous steps. Bearing in mind that the design is fairly complex, it is evident that version 4.0 takes a major part of the burden of manual configuration and planning off the user’s hands.

Version 4 – All Brand New? In addition to the advanced installation routine Netraverse have also introduced a few enhancements with regard to compatibility and management with the latest version. More specifically, Win4Lin allows you to choose from a wider range of Windows versions, which include Windows ME, for example. Note that Win4Lin is supplied without a Windows license, so you will additionally need an original Windows boot CD. Incidentally, you will not be able to use a recovery CD that does not include a Windows license. So, for larger installations, you might like to opt for the Win4Lin network license, which allows you to copy the CAB files from a Windows CD to a network drive. Your users can then download a local Windows installation from the server.

Figure 2: Most standard applications run without glitches on Win4Lin

Hardware support has also seen some notable improvements; most wheel mice no longer pose a problem, and more granular control is available for memory allocation (between 16 and 128 MB), and virtual storage (between 80 MB and max free space on your hard disk). Improved precision with respect to CPU recognition provides for enhanced program stability at runtime, and that boils down to less hassle for users wanting to run a variety of standard applications. The seamless integration of the product in the Linux environment was a particularly welcome addition. For example, you can now use drag&drop between most Linux and Windows applications. We also approved of the option for rescaling the emulation window without display fallout, and the fact that the Windows desktop is automatically refreshed.

The Alternatives So, where does Win4Lin stand in comparison to the two alternative candidates? Most users will rule WINE out due to its less than intuitive facilities, and the fact that some applications simply refuse to run on WINE. It is a candidate for software developers wanting to port their applications to Linux without too much effort. The VMware emulator also has its pros and cons. VMware can emulate multiple PCs of course, and it supports a wide range of guest operating systems, but it does put heavier demands on your hardware. VMware’s big advantage over Win4Lin has always been ease of installation, but the latest version of Win4Lin has closed that gap considerably. Additionally, Win4Lin 4.0 is cheaper and offers better performance. The bottom line is that the Texans have managed to catch up with VMware in several areas, and even overtaken them in some. However, there is no substitute for VMware if you intend to use a Windows version from the NT family (4.0, 2000, XP), as Win4Lin does not support any of these systems.

Conclusion Win4Lin 4.0 impressed us throughout the test series. Installation is a lot simpler than with previous versions, and the execution speed of programs running in the Windows environment is fast. The question remains as to what software you should invest in. At a selling price of £57, Win4Lin is cheaper than VMware, although the price is still too high to attract occasional users. Unfortunately, you have to add the price of a valid Windows license. And if you are interested in games, you should not consider using Win4Lin, as the manufacturer explicitly advises against running games on this product. Win4Lin is recommended for developers or professional users who require parallel access to Linux and Windows programs. If you are mainly interested in running Microsoft Office on Linux, and do not want to experiment with CrossOver Office / WINE, Win4Lin is an alternative (albeit an expensive one), and running MS Office on Win4Lin should be no trouble at all. ■




Serial Connections

Serial Links for Computers

Instant Network Networking is one of Linux’s stronger points. But what happens if you do not have a network? You can always resort to a serial link. BY ANDREA MÜLLER


You have probably faced this problem before: you need to transfer data between a Linux and a Windows computer. If you do not have a network, you are normally forced to use external media. But a null modem cable will provide a simpler and more flexible way to link up the two computers. Of course, you have to make do with a slow data transfer rate of about 11 kbytes per second, but apart from that you gain all the benefits a “real” network has to offer. Additionally, this solution is cheaper and easier to set up than a network.

Quick Brew You will need the right kind of lead for a start. Your local computer shop or an electronics warehouse is probably the best place to look. A “null modem connection set” will probably set you back somewhere in the region of £10. Now attach the lead to the serial ports on both computers, just as you would attach a modem. You can now start configuring the connection. For Linux you need to set up pppd to accept connections via the null modem lead. Assuming that you are using ttyS0 (the COM1 port), ensure that you are root and create a configuration file called /etc/ppp/options.ttyS0. Listing 1 shows a sample file. The entries preceded by “#” are remarks designed to explain the options used here. The third line from the bottom is important, as the IP addresses for the server and the client are specified there.



Additional details on the individual options are available on the pppd manpage. That would normally complete the configuration steps for the Linux end, but unfortunately Windows has a few special requirements. The last line tells pppd to use the script in /etc/ppp/ to call chat before opening a connection. This is required because Windows insists on introducing itself to its dialog partner before opening a PC Link. Windows sends “CLIENT” and expects “CLIENTSERVER” as an answer. To appease Windows you will need to create a chat script in /etc/ppp/ with the following contents:

TIMEOUT 3600
CLIENT CLIENTSERVER\c

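What the script asks chat to do – wait for the string “CLIENT” and answer “CLIENTSERVER” – can be simulated with a plain pipe standing in for the serial line. This is a demonstration only; the real exchange runs over ttyS0:

```shell
# The handshake, simulated: read the greeting, and only if it is
# exactly "CLIENT" send the "CLIENTSERVER" reply.
printf 'CLIENT\n' | while read -r greeting; do
    if [ "$greeting" = "CLIENT" ]; then
        printf 'CLIENTSERVER\n'
    fi
done
```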
The TIMEOUT value is how long chat should wait for the “CLIENT” string. Now it’s time to configure the Windows client. As already mentioned, we will be using the Direct Cable

Connection program, which is available in Start/Programs/Accessories/Direct Cable Connection. If the Direct Cable Connection program is not yet installed, you can install the program by going to Start/Properties/Control Panel/Add/Remove Programs and selecting Windows Setup, then Connections/Details/Direct Cable Connection. (For XP select Start/Programs/Accessories/Communication/New Connection Wizard, then Set up an advanced connection and finally Connect directly to another computer.) The first time you start the program, a

GLOSSARY
pppd: The daemon responsible for setting up Point to Point Protocol connections – a special kind of TCP/IP connection between two computers. pppd is usually launched whenever you use a modem device to access the Internet.
chat: a program normally used to script modem dialogs. It waits for a string and sends a corresponding answer to the modem.



Wizard will help you with the configuration steps. Select Guest Computer in the first window and then click on Next to confirm. In the next window you will need to enter the serial port the null modem cable is attached to. The final window prompts you to configure the host computer. The computer will attempt to open a connection immediately after you click Finish, so you might like to launch pppd on your Linux machine first with:

pppd /dev/ttyS0 nodetach

(If you are not using the first com port, you will need to customize the device file to match.) The “nodetach” flag prevents pppd from running as a background daemon and allows you to watch any messages issued. You can now click on Finish in the Windows Wizard. A few seconds later, you should see a screen similar to Figure 1.

Listing 1: Configuration file options.ttyS0

# Dial up configuration for clients via null modem cable
lock        # create lockfile (/var/lock/LCK..ttyS0)
noauth      # no authentication
asyncmap 0
crtscts     # enable hardware flow control
local       # this is not a modem
silent      # wait for connections
            # IP addresses server:client
115200      # Line speed -- may need to be reduced for long cables
connect 'chat -v -f /etc/ppp/'

Back on Linux you can monitor any activity in the terminal window where you started pppd:

[andi@gemini andi]$ pppd /dev/ttyS0 nodetach
Serial connection established.
Using interface ppp0
Connect: ppp0 <--> /dev/ttyS0
local IP address
remote IP address
CCP terminated by peer
Compression disabled by peer.

To discover whether data transfer is actually occurring, why not run

ping -c 4 $IPNUMBER

in an xterm or in the MS DOS Command Prompt (or simply Command Prompt for the XP inclined) to test the connection? In this example “$IPNUMBER” refers to the IP address of the other computer. You do not need the -c 4 flag on Windows, as the Windows ping transmits four data packets by default. The following line in the ping output is interesting:

4 packets transmitted, 4 packets received, 0% packet loss

This tells you that the communications link is working, and you can get on with some fine tuning.

Stir Well Terminate the connection by clicking on Close in the Direct Cable Connection program window. To avoid having to remember lengthy IP addresses you might like to customize the hosts file on both computers. On Linux the file is located in the /etc directory; Windows supplies a template called hosts.sam, which is stored in the Windows directory, and needs to be renamed to hosts. After editing, the files could appear as shown in Listing 2. Incidentally, on Windows XP the hosts file is stored in c:\windows\system32\drivers\etc\. Now open the connection again and repeat the connectivity test by typing:

ping $COMPUTERNAME

using the short form of the computer name. If the output again says 0% packet loss, you know that name resolution is also working. To use the services provided by your Linux computer you need to carry out one more configuration task on your Windows computer. If you attempt to access your Linux web server using Internet Explorer, IE will try to open a dial up connection to your provider. To prevent this happening, go to Start/Properties/Control Panel/Internet Options and select Never dial a connection in the Connections tab. You should also be aware of the Windows alert that appears shortly after opening the connection. This means that Windows has discovered that the server at the other end of the connection does

Listing 2: The hosts files

# hosts on Linux
localhost
gemini.localdomain      gemini
oemcomputer.localdomain oemcomputer

# hosts on Windows
localhost
oemcomputer.localdomain oemcomputer
gemini.localdomain      gemini
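You can dry-run the effect of such hosts entries on a scratch file before touching the real hosts files. The 192.168.x.x addresses below are placeholders of our own choosing (the listing’s actual values did not survive), since a hosts line always pairs an address with the names:

```shell
# Build a scratch hosts file; the real files live in /etc (Linux)
# and the Windows directory. Addresses here are hypothetical.
cat > hosts.test <<'EOF'
localhost gemini.localdomain      gemini oemcomputer.localdomain oemcomputer
EOF

# A lookup tool resolving via this file would match on either name:
grep -w oemcomputer hosts.test
```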

Figure 1: Windows has set up a Direct Cable Connection to the Linux host





Listing 3: Customized /etc/ppp/ip-up file

# if a lockfile (LCK..ttyS0) exists for the port with the
# null modem cable, terminate the script
if [ -f /var/lock/LCK..ttyS0 ]
then
    exit 0
# otherwise run the original script
else
    ORIGINAL CONTENT OF FILE HERE
fi

not have any shares. You will be prompted to type the name of the host computer to allow Windows to look again. You can click Cancel to close the dialog box.

Figure 2: Exchanging HTTP data between Apache and Internet Explorer

On the Linux side of the connection you may have noticed that error messages started appearing in /var/log/syslog after opening the connection. These messages are generated by services that are launched by the /etc/ppp/ip-up script, and normally perform useful tasks, such as transferring mail or collecting news. To maintain the reliability of these services – your MTA may give up after one or two unsuccessful attempts – you will need to modify the ip-up file. But make sure that you create a backup before you attempt to edit the file, as most distributors advise against editing ip-up:

cp /etc/ppp/ip-up /etc/ppp/ip-up.bak

The lockfile created by pppd will help us prevent ip-up from being run. To do so, place the original content of the ip-up file in an “if” block that runs your “internet commands” only for connections that do not use the serial port where the null modem cable is attached. The example in Listing


3 shows you the kind of entries you will need. Add these lines after your distribution’s comment lines (which start with “#”), but before the script commands. It is also important to remember the fi at the end of the script, to close the “if” block. You may need to customize the serial port in the line starting with “if”.
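The guard from Listing 3 can be tried in isolation with a throwaway lockfile path, so the sketch runs on any machine (the real file lives in /var/lock, which normally needs root to write to):

```shell
# Listing 3's lockfile test, using a demo path instead of /var/lock:
LOCK=/tmp/LCK..ttyS0.demo
touch "$LOCK"
if [ -f "$LOCK" ]; then
    echo "null modem link up - skipping internet commands"
else
    echo "running the usual ip-up commands"
fi
rm -f "$LOCK"
```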

Ready to Dine Now there is nothing to prevent you exchanging data between Linux and Windows, as you can see in Figure 2. Whether you intend to use HTTP or FTP, or even want to access leafnode or MySQL, is a matter of personal taste and your requirements. However, when setting up the required services you should ensure that you allow access to your Windows client, but not to the whole Internet. The documentation most servers provide includes a section on security issues, and tips on protecting your privacy. The Direct Cable Connection can be used in the other direction too, allowing your Linux computer to access Windows services. The POP/SMTP/NNTP server Hamster is useful for that task. ■


Figure 3: Null modem cable connections (9 pin to 9 pin and 25 pin to 9 pin serial cables; DTR, DSR, DCD, TXD, RXD, GND, RTS, CTS).

Andrea Müller is a law student who keeps herself busy with Linux whenever she gets tired of legal theory. When time permits she additionally likes to take a peek at other operating systems, such as QNX, BeOS and NetBSD, or even tries to polish her Python skills. Apart from Linux and her university career Andrea is interested in literature, European history and cycling.



Running Unix Programs on Windows

Different Windows Some people are condemned to a dreadful fate: working with Windows. That means they have to do without programs such as ls, dd and the like. Cygwin solves this problem and also allows you to take an excursion into the realms of XFree86. BY MARTIN LOSCHWITZ


There are two distinct approaches to running Windows software on Linux: emulators that fool the application into believing that it is running on a computer of its own, such as VMware [1], and projects such as Wine [2], which place a compatibility layer (application layer) between the application and Linux, translating Windows system calls into a format that Linux understands. However, just because our cover story is mainly about running Windows on Linux does not mean that there are no problems moving in the opposite direction. Many Linux users who are forced to use Windows in the office bemoan the fact that the command line does not offer them the flexibility they need. Others would like to use their favorite X client programs on Windows, but are dismayed by the fact that commercial X servers for Microsoft operating systems often cost an enormous amount of money, and that the free offerings are not powerful enough.

Why not Try the Other Way Round? As early as 1995 Steve Chamberlain, a Cygnus developer, started to work on a solution to this problem. He envisaged a library as an abstraction layer between the Unix program and Windows – that is, an application layer. Steve named this library Cygwin, a name composed from Cygnus and Windows. More developers contributed to the project, and when Red Hat bought Cygnus in 1999, a decision was made to keep on developing Cygwin. The Cygwin library is thus the exact opposite of Wine. It converts system



calls produced by Unix programs to a format that Windows can comprehend. Thus, the Unix programs do not notice that they are actually talking to Windows, and Windows does not notice that it is running programs from the world of Unix. The results are quite impressive: Cygwin has become a complex and fully functional library that allows you to run a variety of Unix tools on Windows: bash, ls, and dd are some fairly unspectacular examples. With a little help from Cygwin, you can even run XFree86 and the Gnome desktop on Microsoft operating systems from Windows 95 upwards (but not on Windows CE). The package is extremely easy to install. Go to [3] for a link labelled Install now!, which allows you to download a file called setup.exe. Any doubts you might have due to the extremely small footprint of the file are quickly dispelled. Of course setup.exe does not contain the Cygwin library itself, instead it creates an installer that prompts the user to answer a few questions and downloads the required software

package from the Internet. You can accept the default answers to the prompts, however, you can choose the target directory or a proxy server individually.


Figure 1: Selecting packages for your Cygwin installation

Choose your selection of packages carefully – the default selection is somewhat spartan. When you choose a server from which to download the Cygwin packages you want to install, you may need to experiment due to irregular accessibility of the servers. You can run the setup program at any time later to install any program packages not previously installed, and to update any installed packages.

Bourne Again on Windows Assuming you left Add to Start Menu checked, you can access the Cygwin Bash Shell via the Programs / Cygwin entry in the start menu after completing the installation. A single click will open a DOS window and immediately launch a Bash where you can access the Unix tools you selected during the installation process. These include gcc and g++, for example, versions of the original GNU compilers, specially modified for the Cygwin environment. Programmers who are interested in programming Linux applications on Windows should note that the Cygwin gcc is not suitable as a cross compiler. It produces only Cygwin compatible programs that can only be run natively on Windows if the cygwin1.dll library is included in the Windows library path. If a program authored on Linux does not use special libraries, it will normally be quite easy to port it to Windows: often you need only recompile the source code with the Cygwin gcc, and copy cygwin1.dll to the Windows system


Figure 2: KDE 3 using X11-Forwarding on Cygwin/XFree86

library directory during installation. The XFree86 port for Windows is undoubtedly one of the greatest successes of the Cygwin project. Cygwin/XFree86 theoretically allows you to run all kinds of Unix/Linux software with the Cygwin DLL, from xclock, through window managers such as fvwm, up to complete desktop environments such as KDE. That should not only appeal to anyone wanting to add the look and feel of their Linux home desktop to their Windows system, but also to those wanting to launch an X client program on a remote Unix computer and display the results on a Windows machine. You can use ssh with X11 forwarding for this purpose, provided your Internet connection is quick enough (Figure 2). Installing Cygwin/XFree86 turns out to be just as easy as installing Cygwin itself: If you did not select this package during the initial installation procedure, simply run setup.exe again, enter the same data as previously and select the XFree | XFree86-base package. The same menu item includes pre-compiled programs for Cygwin/XFree86, fvwm and Window Maker, for example. The Cygwin based X11 system is extremely stable and thus a genuine, free alternative for users who until now have resorted to a commercial implementation of X11 when looking for an X server. No matter how fine the Cygwin world may appear at first glance, there are some drawbacks: In order to counteract Windows deficiencies, some program

code added to the Cygwin library means that you will need to modify the source code of some Linux programs. This mainly affects the limits for file and memory sizes. Running a more complex piece of Unix software on Cygwin may require active development of the program, and not every Open Source project can offer this. Although this work may not be too substantial for smaller projects, larger projects, such as KDE and Gnome, have assigned this task to dedicated developers or teams [4], [5], specifically because the interfaces change quickly.

Future
There is still plenty to do at Cygwin: the to-do list for the project includes IPv6 and 64-bit support, but also better performance and even more stability. Apart from this, the success or failure of Cygwin will depend on it supporting additional programs – a goal that will require close cooperation with software authors. ■

INFO
[1] VMWare: []
[2] Wine project: []
[3] Cygwin download: []
[4] KDE for Cygwin: [http://kde-cygwin.]
[5] Gnome for Cygwin: [http://homepage.]
[6] Cygwin/XFree86: [ xfree/]

Dec 02 / Jan 03



MS Windows XP

Windows XP Tested to Unix Standards

Windows XP on test
Microsoft, who manufacture and distribute their own unique OS, have been around for quite some time. Their latest version, Windows XP, comes in two flavours: the ‘Home Edition’ for the desktop user and the ‘Professional Edition’ for those that know no better. We felt it was time to find out whether these products can keep up with current Linux systems, especially for our readers who may be unfamiliar with Microsoft products. BY HANS-GEORG ESSER AND COLIN MURPHY


Windows XP makes a regular appearance in many of the Linux forums, especially places like the Usenet group comp.os.ms-windows.advocacy. It is best described, by its makers, as a modern and robust PC operating system, equipped for many of the tasks that will be called for in a contemporary computing scenario. Of course, your scenario may differ from the one imagined by Microsoft, but we will give them the benefit of the doubt, for now. Luckily, for most users, you will find that XP will have been already installed on many of the computers sold today, if they run on an Intel or compatible microprocessor. This is very convenient for many, as it can save the user from the considerable chore of having to install XP for themselves. This convenience is dulled if and when things go wrong with the installed system because, in many cases, you are not provided with a copy of the operating system, and frequently users will need to submit their hardware to service engineers to fix software problems. The ability to download ISO images of the OS from the manufacturer’s web site, a practice you would by now have thought commonplace, is not available. Luckily, many high street computer stores will have copies of XP left on the shelf, so you do have the choice of paying £180 to be able to fix problems yourself or, possibly, paying much more to have someone else fix problems for you. It has been suggested that Microsoft will stop this form of distribution in the near future, possibly with a view to



moving towards a modern distribution method, like making ISOs available. Then again, maybe not. Should you find yourself in the unfortunate position of having to do your own install of XP, then there are several caveats that must be borne in mind. It is a pleasure to see that Microsoft have continued to produce bootable CDs; in years past this has not always been the case. This benefit comes at a price for the unwary user: the installation process does not recognise standard Linux partition types and will, quite happily, format a pre-installed system. As with every install, make sure you have made a back-up, because who can say when things will go disastrously wrong. XP comes with no provision for resizing partitions, so, should you be trying to shoehorn this OS onto a hard disk, you will need to rely on tools that, luckily, come with all the usual major Linux distributions.

Half way house
Assuming that you have sacrificed enough disk space for a Windows partition, you will find that the installation process is pretty much a hands-off affair, so you will have to configure your system to meet your needs once everything is installed. The installation process will, that is ‘WILL!‘, overwrite your boot loader, so make sure you have made a working boot floppy so you can get back to your real OS of choice and put things right. This is an odd state of affairs, because XP does come with its own boot manager but doesn’t make its use very obvious. Maybe Microsoft will work on this for a later release. Once XP is installed you can start to configure it to make it useable. To make best use, or possibly just use, of the peripherals attached to your machine you will need to install their drivers. These drivers will have been provided by the peripheral manufacturers to make their hardware work with Microsoft’s software. They will be on a floppy or CD, and can be found in the box which came with the device.

INFO
XP Home Edition Price:
XP Professional Edition Price:
Available at:



Features (compared for Windows XP and Standard Linux)
• ISO images
• Partition manager included
• Recognise other partition types
• Multimedia support
• LSI support
• Shell access
• GPL compatibility
• Quake compatible
• Multiprocessor support (1)
• Wine compatibility (2)
• NFS support
• Other architectures

1: With XP Professional edition, up to two processors supported.
2: XP had the best Wine compatibility of all products tested.

Should you have unknowingly thrown this disk out with the box, or purposefully thrown the disk out for fear of being buried by the mass that can so easily be accumulated, or decided to save some money by buying an OEM version of the device that didn’t come with a disk, then you will have to reboot back into Linux and see if the device drivers are available on the Internet. You might prefer to do this even if you do have the drivers: getting the latest versions usually guarantees that any incompatibilities between devices have been shaken out of the system. The root cause of these incompatibilities seems to be the considerable lack of information available about the inner workings of the OS itself, leaving driver writers to rely on many assumptions as to what the other device drivers will be doing. Most of the device drivers are closed source, as is the OS itself. Closed source is a curious software system that denies the end user the confidence that the code has been subjected to public scrutiny, leaving you with an uneasy reliance on the manufacturer alone to have found any security or stability issues.

Applications, where?
The limited range of applications on the newly installed system is likely to disappoint. This gap has been filled by the Cygwin project (http://sources.redhat.com/cygwin/), which will give you access to many of the more familiar and fully functional tools users need, to get started at least.

Figure 1: The XP Desktop in fvwm95 look – more or less useable, if you make the effort to install the GNU Tools

For users who prefer shell access, a very rudimentary system has been installed: cmd can be found in the depths of the Start menu, which, rather confusingly, also contains the option to Finish your XP session, something you will want to do sooner or later. There is a considerable learning curve to using XP for the first time as it deviates from many of the common standards. The first, and for me the hardest to get to grips with, is the unusual method provided to gain access to files. For this, the user needs to know physically where on the system hardware a file might be, having to select that location with the use of a ‘drive letter’. For instance, to find a file on a floppy disk you have to look at something called a:\ instead of the more usual /floppy. With a floppy, its location is known; on a disk array this is not the case. The ‘slash’ is not there to format output either. Some of this pain is taken away if you are happy to use the desktop tools available, like the file manager Explorer, which does a sterling job of hiding this inconvenient feature. While there is multimedia support available on the basic installed system, it lacks many features. The most annoying is the Windows Media Player, which, by default, will only create sound files in the proprietary WMA format, denying the user access to the higher quality of Ogg.
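Incidentally, the Cygwin tools mentioned above paper over the drive-letter scheme by mapping each letter into a POSIX-style tree under /cygdrive. Cygwin’s own cygpath tool performs this conversion properly; the function below is only an illustrative sketch of the mapping:

```shell
# Illustrative sketch: convert a Windows path such as a:\ or C:\Windows
# into the /cygdrive layout that Cygwin presents to ported Unix tools.
winpath_to_posix() {
    drive=$(printf '%s' "$1" | cut -c1 | tr 'A-Z' 'a-z')   # drive letter, lowercased
    rest=$(printf '%s' "$1" | cut -c3- | tr '\\' '/')      # skip the ":", flip separators
    printf '/cygdrive/%s%s\n' "$drive" "$rest"
}

winpath_to_posix 'C:\Windows\System32'   # → /cygdrive/c/Windows/System32
winpath_to_posix 'a:\'                   # → /cygdrive/a/
```

So the floppy that Windows calls a:\ appears to Cygwin programs as /cygdrive/a – closer to the /floppy convention Linux users expect.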

The biggest reservation I have about the product is with Product Activation. Once you have your system installed you need to seek approval from the manufacturer (sic) that the system is ready to use. Should you be in the habit of swapping hardware on a regular basis, you will find this to be a big pain, because you are limited to 6 changes of hardware per activation. The types of hardware change include replacing the motherboard or CPU. If you are prepared to pay the extra money for XP Professional, you will find support for up to 4GB of RAM and up to two symmetric multiprocessors. While this is definitely a step in the right direction, it still lags behind the more usual Linux fare, with no justification in cost. While it is always disheartening to give a bad review, I have to admit that the space set aside for the Windows install has now been returned to a much better use: my collection of .ogg files. ■

Figure 2: Errors keep someone happy




United Linux

United Linux Tryout: Installation, Security and Software Components

The Candidate

Linux Magazine was allowed an exclusive look at United Linux Release Candidate 2. Our test focuses on installation, components and security. BY DANIEL COOPER, ACHIM LEITNER

United Linux 1.0 Release Candidate 2 reached us shortly before this issue went to press. The version profile in Table 1 shows how current the packages are in comparison to Red Hat 8.0. SuSE Linux 8.1 and United Linux include identical versions, and we noted very few differences during installation. We could not find a SuSE to United Linux update function, but the frontend and user interface are otherwise identical. You can choose between a Minimal System, a Minimal Graphic System (without KDE), and a Standard System for United Linux. The advanced selection allows you to install individual groups such as Gnome, compilers, analysis tools, and various services. You can also select individual RPM packages. The Standard System for United Linux includes most of the available packages, and includes all groups with two exceptions: Compilers and YaST2 configuration modules. The Compilers group comprises typical packages, such as bin86, gcc, autoconf, cvs, and Emacs. The YaST2 module group comprises online update, LDAP, security and more. When setting up XFree86, YaST2 and SaX2 both failed on our HP Workstation X2100, which we use to test video adapters. United Linux was unable to locate a useable X11 configuration for the nVidia Quadro 4 900 XGL card and the HP L1820 digital TFT display; SaX2 even refused to launch a graphic setup routine. The same combination was no problem whatsoever on Red Hat 7.3. We went on to configure the server system, an Athlon MP (1.7 GHz) with an Asus A7M266D mainboard and a single Compaq NC3120 Fast Ethernet adapter, and noticed a strange duplicity. Although the NIC was configured automatically, it also turned up quite unexpectedly as a non-configured adapter, which we had no trouble setting up (Figure 1). The settings for the first entry were actually used. The configuration steps at the end of the installation procedure were much like SuSE Linux 8.1, apart from the fact that there is no support for TV adapters, and you cannot launch the YaST2 control center directly.

First Login
The first time we logged on in KDE 3.0.3’s graphic mode we noted that the installation language chosen had no effect on the language setting for the KDE user. The desktop was in English and the language was set to C. There were no tools for setting up services or modifying the system configuration in the menu structure. YaST2 is neither integrated in the KDE control center, nor in the K bar. KDE would appear to be the completely unchanged original, including the icons.

Sparse YaST2
That meant launching YaST2 manually. The Install/Remove Software function also behaved rather strangely: every time we restarted the function within the YaST2 control center, it displayed only the installed packages, but closing and relaunching the control center sorted this glitch out. With the standard installation, YaST2 offers few modules. Installing the YaST2 Configuration modules group provides YaST2 with its usual functionality. It then offers similar features to its SuSE Linux Professional counterpart, including the YaST Online Update.

Using SuSE to Enhance United
United Linux is designed as a lean distribution. Products built on the basic system will provide additional packages. The choice of developer tools in United Linux is correspondingly sparse.

Although the major languages and basic tools are included, you should not expect to find an IDE – unless you regard Emacs as an IDE, that is. There are no Office tools at all; only Antiword seems to have strayed into the distribution. Interestingly, United Linux allows you to use SuSE Linux 8.1 CDs as an additional installation source (Figure 3). If you do so, YaST will present packages from both distributions. Due to widespread similarities, issues with incorrect or obsolete libraries are unlikely. We managed to install and use Imagemagick and Cdrecord from SuSE 8.1 without any trouble. However, this might endanger your United Linux 1.0 certification.

Open and Secure
United Linux adopts an open stance towards other platforms: Samba 2.2.5 is included for attaching Windows clients and servers, Netatalk for Apple computers, and Mars NWE 0.99pl20 for the Novell community. You can use a number of services for authentication, for example NIS, Radius, Samba, LDAP,



Figure 1: Strange duplicity: YaST2 made 2 NICs out of the single adapter actually installed in our server system
Figure 2: Decidedly rustic in appearance – YaST2 on United Linux

or Kerberos. Postfix is installed as the mailer, but the distribution also includes the Sendmail application. Security plays an important role in United Linux, as is evident in the selection of packages. The developers of the ftp server Vsftpd 1.1.0 (Very secure ftp daemon) paid particular attention to security in their code base. Three known security issues affect the Apache version included with United Linux, 1.3.26. The shared memory bug has been fixed, according to the RPM changelog, and the cross-site scripting problem does not apply, thanks to the UseCanonicalName On configuration. However, we are not certain whether the buffer overflow in the Apache bench tool still exists; this issue only applies when the benchmark tool is used to analyse untrusted servers. The security goal of running a minimum number of daemons in the default installation has been consistently implemented. The process list is compact; not even inetd is enabled. However, it is strange to note that the portmapper is running, although neither NFS nor any other RPC services are active. The output from rpcinfo -p host confirms this. Apart from SSH, only the XDM server is accessible from the outside world; X -query host displays a logon window. However, root is not permitted to log on this way. The CDs include a number of security tools, which are installed by default. If you want to launch your server in a chroot jail, assign a specific user and group ID, and at the same time restrict Linux capabilities, you will probably appreciate the compartment tool. It combines exactly these steps. However, we were unable to locate an init script that used compartments.

Storming the Bastille
Bastille Linux analyzes and configures security settings for Unix and Linux systems. It is installed by default (version 1.3.0), but refused to run in our lab environment. The Tk GUI should appear after invoking InteractiveBastille, but instead Bastille informed us that it did not recognize the distribution, and terminated after issuing an internal error message. Our attempt to launch the curses version was similarly ill-fated: United Linux does not offer the curses Perl module, and that causes Bastille to terminate with a FATAL ERROR. Obviously, this dependency was not considered in the release candidate. Seccheck 2.0 is designed to perform regular checks on system security. This selection of shell scripts was developed by SuSE, and the installed version is identical to the version offered in the SuSE Linux 8.0 distribution. However, it was necessary to make some changes; seccheck complains about the wrong user for any home directories and the dotfiles they contain:

user gdm: home directory is owned by shadow

The tool obviously confuses the owner and the group. The owner is gdm, the group is shadow. The reason for this is the fact that seccheck runs the ls command with the -g flag set. However, the meaning of this flag has changed over the years: whereas SuSE Linux 8.0 ignored the flag, United Linux 1.0 interprets it by removing the owner from the output. We found that a similar problem occurs when checking mailbox privileges.
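The behaviour difference is easy to demonstrate with GNU ls, where -g produces long-format output with the owner column omitted, so a parser expecting the owner in that position sees the group instead:

```shell
# With GNU coreutils, -g is like -l but suppresses the owner column.
touch /tmp/ls_g_demo
ls -l /tmp/ls_g_demo   # mode  links  owner  group  size  date  name
ls -g /tmp/ls_g_demo   # mode  links  group  size  date  name
```

The -g output contains exactly one field fewer per line, which is precisely where seccheck’s parsing goes astray.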

IP Logging
Ippl analyzes IP packets sent to the local host. ICMP messages, unsuccessful TCP connections, and UDP packets are logged. Since the file /var/log/ippl/all.log exists, you might expect the messages to be logged there. However, Ippl uses syslog to write them to /var/log/messages. You can change this setting; you will find the required entry commented out in /etc/ippl.conf. Although Ippl creates a whole bunch of messages for a portscan, scanlogd will compact them to a single line. Scanlogd recognizes and reports TCP portscans launched against the local machine. Arpwatch works on a lower level and mails root whenever the MAC address of a computer on the LAN changes. On the subject of mail to root: United Linux will relay mail for root to the user account created on installation. This prevents reading of email with root privileges. If you are interested in specific network traffic, you can use either the tcpdump or ethereal sniffers with appropriate filter rules, or switch to ngrep, a network-enabled grep.





Nessus & Co.
Although it makes sense for most packages to use stable, tried and trusted versions, and to close any security loopholes, this does not apply to security scanners and intrusion detection systems. Newer versions recognize newer vulnerabilities and exploits – and this is why we took a closer look at the version numbers in this field. United Linux installs version 1.2.3 of the Nessus security scanner, which dates from early July; 1.2.6 is current. Saint 3.4.2 is about a year old; the current version, released under the Satan license, is 3.6.2 (August 29). Version 1.8.7 (July 31) of the Snort intrusion detection system is installed; its successor, Snort 1.9, was only released on October 3. The nmap portscanner 3.0 is current. The commercial mailscanner avmailgate by H+B EDV and the Openantivirus Samba plugin are supplied as antivirus solutions. Tripwire and Aide are installed to calculate cryptographic hashes and prevent manipulation of the file system. SSH in the OpenSSH 3.4p1 version is one of the few servers enabled by default. The current OpenSSL 0.9.6g version is used as the SSL library. Several tools are available for network tunneling: Cipe, Freeswan, Stunnel, and Vtun. The package also includes Cyrus SASL; the Simple Authentication and Security Layer is used to enhance insecure protocols such as SMTP, providing authentication and encryption. And of course, GPG is also included.

Table 1: Versions – package version comparison between Red Hat 8.0 and United Linux RC2

Network Management
We were surprised at the abundance of network management tools. Net SNMP provides a foundation. Although the community string defaults to public, it applies only to queries from localhost. The read-write community is not configured. These defaults make sense: they protect the host, even if the service is enabled without performing any additional configuration steps. The SNMP agent reveals more about the system than an admin would prefer, and thus needs careful configuration. This also applies to Nagios, the successor of the classic network management tool, Netsaint. It does not make sense to use the tool without careful configuration. Mon caused us a few problems: it initially refused to run in our lab environment; /etc/init.d/mon start caused the mon is not installed message. Of course it was installed, but the binary was in /usr/sbin, and not in /usr/lib/mon. A symlink helped – but then a syntax error occurred in the start script: ;; was missing in one branch of the case construct. After correcting this, we were finally able to launch mon, the service monitoring daemon. The default configuration does not enable a monitor, so we tried a simple telnet monitor for a start. However, this tool required the non-existent Net::Telnet Perl module, which caused the monitor to fail.

The Fping monitor, on the other hand, worked perfectly. Our first attempt to launch ntop led to the script informing us that we should assign an administrator password. After doing so, we were able to launch the analysis tool without any further incidents. However, access was only permitted from localhost – which makes perfect sense from a security point of view. If you do need external access, you should change the value of the NTOPD_PORT variable in /etc/sysconfig/ntop to 3000. More tools are available for network traffic statistics: MRTG and Rrdtool produce attractive graphs, and Iptraf displays statistics in a curses interface on your console. Argus, Netacct and Trafficvis complete this group.
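The missing ;; that broke the mon start script is a hard error in Bourne-style shells; a minimal illustration of the construct the init script got wrong:

```shell
# Every pattern branch of a shell case construct must be terminated
# with ";;". Omitting one (as the mon init script did) is a syntax
# error that stops the whole script from running.
action=start
case "$action" in
    start) echo "starting mon" ;;
    stop)  echo "stopping mon" ;;   # deleting this ";;" breaks the script
esac
```

The fix in the start script was simply restoring the terminator on the affected branch.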

Conclusion
Due to the similarities between the two systems, United Linux will look familiar to experienced SuSE Linux 8.1 users. The range of packages available for United Linux indicates that it was designed for use as a server. The security-conscious orientation of the distribution is apparent in various elements: very few daemons are launched, for example, and the configuration is convincing. The errors we found are probably due to the beta status of the distribution, and cannot currently be applied as assessment criteria. We were pleasantly surprised by the number of network management and security tools. And we hope that security scanners, IDS tools and antivirus software will be kept up-to-date by regular updates. ■

Figure 3: Version mix: United Linux 1.0 and SuSE Linux 8.1 are so similar that you can install any missing packages from SuSE 8.1 CDs without any trouble



Free Standards Group



Birds of a Feather
The founders of United Linux are also members of the Free Standards Group, which emerged from LSB and Li18nux. Their goal – to unify the structure of the various Linux versions and make it easier for users to comprehend. The Beta version of United Linux performed extremely well in all areas of the LSB test suite. BY RÜDIGER BERLICH

According to its members, the main aim of United Linux is to unify uncontrolled developments within the various Linux distributions. United Linux is mainly based on SuSE Linux, a heritage that is apparent in many areas (see test on page 34). But the novel thing about United Linux is not the distribution technology, but the co-operation of the participating distributions SuSE, SCO/Caldera, Conectiva and Turbo Linux. However, the proximity to SuSE Linux leads to the standards adhered to by SuSE 8.0 and 8.1 applying almost unchanged to United Linux. This article aims to detail these standards and the parallels, but also the contradictions of United Linux. It is safe to assume that Scott McNeill (Figure 2), the Executive Director of the Free Standards Group (FSG) [1], considers the foundation of the United Linux group to be a confirmation of his aims. After all, the goal of his organisation is to apply standards to the Babylonian confusion of uncontrolled distribution growth, and to organize the applicable benchmarks.

The Shepherd’s Triumph
The first public LSB proposal, dated May 18, 1998, which is available at [3], contains the following sentence: “Linux distributions should maintain the base system collectively, as the kernel is maintained, rather than individually.” This precisely sums up the motivation for United Linux. And what could be more satisfying for the shepherd (Scott McNeill) than to see his sheep (the distributors) happily walking through the fields into his pen? The United Linux initiative certainly would not have arisen without considerable market pressure. It would appear to make more sense in particular for SCO/Caldera (who will probably move their focus away from Linux) and Turbo Linux to reduce their own expensive development activities, or to terminate them completely. Conectiva, on the other hand, does not appear in the list of certified Linux distributions on the LSB web site. It makes sense for the Brazilians to achieve LSB certification via the United Linux detour. This will additionally allow them to open up markets outside of South America for their business products, and these markets would be inaccessible to Conectiva without United Linux. But for SuSE, who are developing the United Linux kernel more or less on their own, the initiative will mean a more acceptable distribution of development costs, due to the participation of other partners. Due to the dominance of Red Hat’s main competitor, SuSE, within United Linux, Red Hat are unlikely to join at any point in the near future, although they have been invited to do so. This also applies to Mandrake, who are a lot closer to Red Hat than other major Linux distributors. Apart from this, Mandrake would also compromise one of its most important marketing arguments by joining the United Linux initiative, that is the installer.

The Free Standards Group
FSG was founded in May 2000 as a cooperation between the Linux Standard Base (LSB, [4]) and the Linux Internationalization Initiative Li18nux, now referred to as OpenI18N [5]. The Linux Standard Base mainly defines a base system with which all Linux distributions should be compatible. Applications and packages only need to be developed for the base system, and will automatically run on any distributions that follow the LSB standard. LANANA (The Linux Assigned Names And Numbers Authority, [6]) recently joined the FSG; its task is to manage the Linux namespace. The aim is to avoid naming conflicts for applications and drivers. The Open Printing initiative within the FSG is concerned with establishing standards for a scalable print environment within the Linux field. LSB integrates the Filesystem Hierarchy Standard (FHS), which dates back to an even older standard. The Free Standards Group can thus be regarded as a natural development and reservoir for standardization efforts in Linux land.

The Free Standards Group has had the support of a wide range of enterprises right from the outset. Even SAP were originally involved, and although they are no longer in the list of FSG members, interestingly the United Linux press release again mentions them. Today’s FSG members are “all the usual suspects”: IBM, HP, and Intel, but also Sun and Dell. And of course the major Linux distributions, Red Hat and Mandrake, and all the founder members of United Linux are on board [2]. This concentration of interests indicates how important the FSG’s work is. Thus, there is a consensus on standardization, although the uncertainties of commercial marketing of the Linux operating system occasionally give rise to a few new questions. To do justice to Scott and the FSG, the list of LSB certified distributions (available at [8]) includes Mandrake 9.0, SuSE 8.1, and Red Hat 8.0. And all three have reached the second generation of certification. So, LSB is very successful.

Figure 2: Scott McNeill, former head of SuSE’s USA branch in Oakland and now Executive Director of the Free Standards Group

Commercial Bias Causes Concern
Although this may seem strange, United Linux has caused many in the Free Standards Group some concern. And the strong commercial background of United Linux will not do much to bridge the gap between Red Hat and SuSE. A base distribution without a commercial background would have been more to the FSG’s liking. No punches are pulled in this competitive environment, and this leads to a slightly arrogant undertone. The press release for United Linux dating from 30.5.2002 [7] thus announced: “Linux Industry leaders Caldera International, Inc. (Nasdaq: CALD), Conectiva S.A., SuSE Linux AG, and Turbolinux, Inc., today announced the organization of UnitedLinux, a new initiative that will streamline Linux development and certification around a global, uniform distribution of Linux designed for business.” To be fair, it has to be admitted that SuSE also uses a more moderate version of this joint statement by the UL members: “[…] these four enterprises thus agree to jointly develop a Linux operating system, specifically designed for use on enterprise servers.”

The Test Suite
The LSB’s success is due to its free test suites (see Box “The LSB Test Suite”) rather than to a common basic distribution for all commercial and free distributions. The test suites guarantee that LSB conform applications will automatically run on any LSB certified system. One of the disadvantages of the approach using a common base distribution became apparent in our lab environment: United Linux Beta 3 (and SuSE Linux 8.1) both failed to recognize our test system’s Tekram DC 390U2W SCSI controller automatically, leaving us no alternative but to manually load the module in linuxrc. This problem was not observed in SuSE 8.0. Errors of this kind are thus inherited by any distributions based on United Linux. Medium term, product quality will tend to benefit from the extended user base and the experience of the other distributors in maintaining the installer hardware database.

The LSB Test Suite
To monitor the LSB certification of your chosen distribution you will need the binary RPMs from the LSB suite, which are available from [1]. Additionally you should install your distribution’s LSB packages. In the case of United Linux this means the lsb package, which you can easily locate by searching with the YaST2 tool. After installing all of these packages, you log on as the user vsx0 (according to [1] you do not need a password; however, in our lab environment we had to log on as root to reset the password), and run the ./run_tests script. After initially launching the script you are then required to answer a few simple questions about your Linux system. The test suite comprises several brief individual tests that ensure practical conformity to the standard. The whole procedure takes about six hours on a quick computer. Finally, a report comprising approximately 320 pages detailing the results is produced. Most of these tests take the form of shell scripts. The script below is used to check the conformity of the /etc directory, for example. Each of the 38 individual tests in this script can produce an error. If the file /etc/printcap does not exist, which originally was the case on our system, or if the /etc/sgml directory does not exist, you will find a note to that effect in the test report:

tp38()
{
    tpstart "Reference 3.7-38(C)"
    tet_infoline "If the system supports the subsystem"
    tet_infoline "the /etc/sgml directory exists and is searchable"

    lsb_test_dir_searchable /etc/sgml >out.stdout 2>out.stderr
    check_exit_value $? 0        # should be zero
                                 # should be no stdout
                                 # should be no stderr
    if [ $FAIL = Y ]
    then
        tet_infoline "This test result needs to be manually resolved, returning FIP result"
        tpresult FIP
    fi
                                 # set result code
}

This creates the following lines in the test report:

Test Information:
Reference 3.7-38(C)
If the system supports the subsystem
the /etc/sgml directory exists and is searchable
/etc/sgml: directory not found
exit code 1 returned, expected 0
This test result needs to be manually resolved, returning FIP result

The test performed on United Linux Beta 3 in our Linux Magazine lab environment produced only one error and a few harmless warnings, which may be an indication of the lack of TLC in our installation. Each report contains the Unresolved category. These are tests that produced ambiguous results for some reason. Although this may take some time, you should investigate these reports if conformity is important to you. The test series is fairly irrelevant for most users: the distributor is responsible for ensuring conformity, not the user! Testing programs for LSB conformity is a different question, however. The lsbappchk tool performs this task (see Figure 1), and amongst other things it points out any dynamic libraries and functions that do not conform to the standards.

Figure 1: United Linux during an LSB conformity test

United Linux Headed in the Right Direction
Still, United Linux is headed in the right direction. Of course this will not always mean straight ahead, as the pervasiveness of Linux is inseparable from the success or lack of it among commercial Linux distributions. But what the United Linux members have already achieved with respect to standardization is quite considerable. ■

INFO
[1] Free Standards Group: [http://www.]
[2] List of FSG members: [http://freestan]
[3] LSB Proposal of 18.5.1998: []
[4] Linux Standard Base: []
[5] OpenI18N: []
[6] LANANA: []
[7] United Linux press release of 30.5.2002: [press_releases/archive02/united_linux.html]
[8] List of LSB certified distributions: [cert_prodlist.tpl?CALLER=conformance.tpl]

Rüdiger Berlich worked for various subsidiaries of SuSE Linux AG between 1998 and 2001 and is currently involved in Linux clustering and GRID computing. Rüdiger started using Linux in 1992.

Dec 02 / Jan 03





Search and Replace!

The stream-oriented editor is not an interactive editor, but a kind of text filter that searches for and replaces specific characters or strings. Stream-oriented, or "character-oriented", means that files can be accepted from stdin and output to stdout after processing. There are two basic methods for launching sed on the command line:
• sed [-n] [-e] 'command' file(s) – a 'command' is given on the command line and applied to file(s)
• sed [-n] -f script file(s) – runs an external script file containing sed commands and applies it to file(s)
If the command does not contain a value for file(s), sed will read from standard input. The sed command processes each line it reads, placing it in a kind of buffer, and displaying the content of the line buffer on standard output when finished.
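Both invocation styles can be tried on a throwaway example (the input line and the file name /tmp/cmds.sed are made up for illustration):

```shell
# Style 1: the command is given directly on the command line.
echo 'hello world' | sed -e 's/world/sed/'     # prints: hello sed

# Style 2: the same command stored in a script file and run with -f.
printf 's/world/sed/\n' > /tmp/cmds.sed
echo 'hello world' | sed -f /tmp/cmds.sed      # prints: hello sed
```

Since no file argument is given, sed reads from standard input in both cases, exactly as described above.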

The stream editor, sed, helps perform automatic modifications of one or more files, simplifies repetitive operations, or builds complete conversion programs, allowing you to solve word processing problems in next to no time. BY HEIKE JURZIK

Expressed Regularly


To use sed effectively it makes sense to use a few regular expressions (you might also like to try man 7 regex). These search patterns normally comprise two elements: the characters to look for, and a counter stating how often the characters are allowed to occur. This sounds complicated, but is fairly simple to visualize using a table (see Table 1).

Sought and Found A sed script will first look for a matching pattern and then perform an action. For each line of the script output is created, and the process repeated. sed then moves on to the next line of the input file, and starts again. The best way to understand the way sed works is to look at a brief example. Let's assume you have an HTML file where a specific URL reoccurs, and need to replace the URL. In our test file url.html any links matching <a href=U "">U</a>



need to be replaced by <a href=U "">U</a>

The first approach would be to use the following command: huhn@asteroid:~$ sed U 's/' U url.html

The section enclosed in ticks contains the information for the replacement – the pattern to be replaced follows the

first slash (/), the replacement string follows the second slash, and the third slash marks the end of the replacement. If the pattern itself contains slashes, you can use other characters as delimiters – # or | for example: huhn@asteroid:~$ sed U 's#' url.html

Placing the sed commands in ‘ ‘ to mask them is recommended, to protect the input from the shell, and thus avoid having some characters interpreted as metacharacters. The command just


shown sends its output to stdout, allowing you to check it. If you are working with a longer input file, you might prefer to pipe (|) the output to a Pager of your choice, for example: huhn@asteroid:~$ sed U 's/' U url.html | less

To direct the output to a file instead, simply use the greater than sign, >, with the name of the file that will store the output, that is sed ‘s/‘ url.html > new_url.html. The command line output shows that sed is only replacing the first instance of the search pattern in each line: ... <a href=U "">U</a>

To replace all the instances of a search pattern in a line, you need to tell sed to go “global” by adding a g switch: huhn@asteroid:~$ sed U 's/' U url.html ... <a href=U "">U</a>
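The effect of the g switch can be seen on a minimal, made-up line:

```shell
# Without g, sed replaces only the first match in each line.
echo 'aaa' | sed 's/a/b/'     # prints: baa

# With g ("global"), every match in the line is replaced.
echo 'aaa' | sed 's/a/b/g'    # prints: bbb
```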

Line by Line? A more exact way of referring to individual lines is to use their line numbers. If you only want to search and replace in the first line, add a “1” at

the beginning of the command: '1s/' – the g is again correct in this case, as the string may occur more than once in the first line. To perform the search and replace action on multiple lines, for example lines 1 through 20, you would need to use the following command: '1,20s/'. A similar pattern is used to exclude lines: Typing '5,15!s/huhnix.net/' will perform the search and replace command in the whole file with the exception of lines 5 through 15.
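The addressing rules just described can be checked on a three-line sample (the strings 'old' and 'new' are placeholders):

```shell
# '1s' restricts the substitution to line 1 only.
printf 'old\nold\nold\n' | sed '1s/old/new/'      # new / old / old

# '1,2s' applies it to lines 1 through 2.
printf 'old\nold\nold\n' | sed '1,2s/old/new/'    # new / new / old

# '2!s' negates the address: every line except line 2.
printf 'old\nold\nold\n' | sed '2!s/old/new/'     # new / old / new
```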

huhn@asteroid:~$ ls 01.%20Welcome%20To%20Cabaret.mp3 02.%20Natives.mp3 03.%20Fairytale%20In%20York.mp3 04.%20Delirium%20Tremens.mp3 05.%20Black%20Is%20The.mp3 06.%20Missing%20You.mp3 ...

You would like to remove the dots following the track numbers, and the "%20" entries, and replace them with underscores. This requires two separate sed commands, which you now add to your script file:
s/\.%20/_/g
s/%20/_/g

GLOSSARY
stdin, stdout: There are three "standard channels", stdin (standard input), stdout (standard output) and stderr (standard error output). A user normally has the keyboard as standard input, and the screen as standard output. If you use zcat ("gzip -d -c") to expand a file it will normally be output on screen, unless you redirect the output. If you do redirect the output using a pipe (|), the output will be used as standard input by the reading program.
Pager: A program that accepts the screen output of another program and displays it page by page. The less and more pagers should be part of any Linux distribution.

Table 1: Search Patterns – "regular expressions"
Pattern    Meaning
abc        exactly this string: "abc"
[abc]      one of these characters: a, b, or c
[^abc]     none of these characters is allowed to appear
[a-c]      a character between a and c
.          any character
?          the pattern preceding ? can occur once or not at all
*          the pattern can occur many times or not at all
+          the pattern can occur any number of times, but must occur at least once
\{n,\}     the pattern must occur at least n times
\{,n\}     the pattern is allowed to occur n times at the most
\{n\}      the pattern must occur exactly n times
\{n,m\}    the pattern must occur at least n times and at most m times

Scripting The -f flag allows you to leave out the commands in the command line, and run a script containing sed commands instead. A file of this type will normally contain a collection of commands which are applied to the input sequentially. Let’s assume that you have a directory full of MP3s, and are unhappy with their names:

Note the backslash preceding the dot: As mentioned on our short detour into the realm of regular expressions, the dot normally refers to an arbitrary character. If you left out the backslash, the command would match and replace any character followed by "%20". Our version first looks for a "." preceding a "%20" and uses "_" to replace the complete expression – the second command processes the remaining "%20"s without a preceding dot. Save the script under a file name of your choice, script for example. As sed


reads from standard input, you need to pipe the output from ls *.mp3 to the script in question: huhn@asteroid:~$ ls U *.mp3 | sed -f script 01_Welcome_To_Cabaret.mp3 02_Natives.mp3 03_Fairytale_In_York.mp3 ...
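The difference between the escaped and the unescaped dot can be checked in isolation (the sample string is made up):

```shell
# '\.' matches only a literal dot, so only ".%20" is replaced.
echo 'A%20B.%20C' | sed 's/\.%20/_/'    # prints: A%20B_C

# '.' matches any single character, so the leftmost match is "A%20".
echo 'A%20B.%20C' | sed 's/.%20/_/'     # prints: _B.%20C
```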

If the new file names are to your liking, and you want to rename the files, you now have to add another command: the mv command (for "move"). A short for loop will help manipulate your files: huhn@asteroid:~$ for file in U *.mp3; do mv -v "$file" U "`echo $file | sed -f script`"; U done `01.%20Welcome%20To%20The%20U Cabaret.mp3' -> U `01_Welcome_To_The_Cabaret.mp3' `02.%20Natives.mp3' -> U `02_Natives.mp3' ...

This translates to: For any files ending in “*.mp3”, do the following: move the files so that the user will find them stored under the names produced by the sed operation. ■





Sylpheed – Quick and Extremely Configurable

Mail Enough for Anyone Sylpheed is a quick and flexible mail client that helps you manage even large amounts of mail. Based on the GTK+ toolkit and running on the X Window System, it aims to give a quick response with a graceful and sophisticated interface. It offers easy configuration and intuitive operation. BY THOMAS ZELL


Mail clients are very common. A Freshmeat [1] search produces innumerable results. In addition to heavyweights such as Kmail [2] or Evolution [3], there is a GUI program that offers just what the doctor ordered for Joe Public. This program is called Sylpheed. Sylpheed is an email program and news reader, based on GTK+, and mainly designed for speed. The program looks very much like any other mail program, so you will feel at home almost immediately. Amongst other features the program supports: • Multiple user accounts • Thread display • News filtering • Attachments • SSL/TLSv1 for POP3, SMTP, IMAP4rev1, NNTP • X-Face • User definable headers • Multiple MH directories • Mbox import/export • Automatic mail checks • Line wrap for overlength lines • Clickable links • XML based address book

Installation You will need to obtain the program first. The current version is 0.8.3. Look for it on the subscription CD or on the Sylpheed [4] home page. If a compiled package is unavailable for your distribution, simply expand the archive file in a directory of your choice and cd to the directory. Before you start installing, make sure you have a GCC compiler, GTK+ 1.2.6 or later and a Unix based operating system. You can normally launch



configure without setting any other flags. If you additionally intend to use GnuPG [5] or OpenSSL [6], you should start by typing the following:
./configure --enable-gpgme --enable-ssl && make
su
make install

Configuration After launching sylpheed & in a terminal window, Sylpheed welcomes you by suggesting a directory for your mailbox. This directory is created below your home directory if it does not already exist, so make sure that you do not overwrite any existing files. Accept the default directory only if you are sure that nothing can go wrong.

The next step is to set up your accounts. If you intend to use multiple accounts, ensure that you choose a memorable name, such as the name of the provider, for each one, and assign one account as default. Fill out the Personal Information and Server data fields. Most users will select the POP protocol from the menu. Enter your provider's data in the Server for receiving and SMTP Server fields. Note that for some providers (such as GMX), your User ID will be your complete email address, in contrast to most other providers. Click on the next tab to specify if and when Sylpheed should delete messages on the mail server. You might like to enable the Download all messages from server option. This depends on whether you use other programs on other operating systems to view your mail, or you possibly use a web interface from time to time. You can use the other defaults for the time being. Before you can modify the settings for Private and SSL you must first add support for GnuPG and OpenSSL. To discover what plug-ins you have installed, click on Help/About.

Figure 1: Sylpheed on initial start-up

Additional Settings

After downloading your mail (Message/Receive new mail) you can carry on setting up Sylpheed to meet your requirements. To do so, select Configuration/Common Preferences…. If you do not like the default font, simply select Display and specify a font size. Figure 2 shows a useful setting for date display in day-month-year format. The default setting may be confusing to American users. Sylpheed opens links when you double click on them. If nothing happens, the wrong browser may have been preselected. In this case you can select "Other" in the Common Preferences and enter galeon -n '%s', for example. This will open any links you double click in a Galeon window. Of course, you can choose Konqueror or Netscape as your default browser, but make sure that you add '%s' after the program name and any options.

And now for the nice bits…

If you receive several email messages every day, and would like to tidy up your Inbox, you might like to consider filtering your messages and creating new directories. Right click on your mailbox and select Create new folder. Sylpheed will then create a directory with the name you supply below ~/Mail. Take some time to think about how you will want to organize your mail, and create a few folders, such as "Private" and "Work", for example. You can delete the filters later, simply by right clicking them, but if you have defined filters and then delete the target folder, all that effort goes to waste. You can click on a folder to create any number of subfolders and sort them by sender, mailing list, priority, and similar criteria. After creating multiple folders, you can now let Sylpheed sort your mail. This may be superfluous if you only receive one or two messages a day, as Sylpheed version 0.4.61 or later allows you to sort mail by drag'n'drop; however, if you subscribe to a number of mailing lists, you might find manual mail

GLOSSARY
GnuPG: GNU Privacy Guard, also known as GPG. GnuPG is a complete and free substitute for the well-known PGP encryption program. As it does without patented encryption algorithms, you can use GnuPG without any restrictions. To use GnuPG with Sylpheed you will also require GPGME (GnuPG Made Easy) version 0.3.5 or later. GPGME is also available from [5]. If you use distribution packages instead of compiling the sources, make sure that you also install the developer packages.
OpenSSL: A free implementation of Secure Sockets Layer. Communication between server and client is encrypted rather than in clear text form.
Galeon: A GNOME based browser that uses Mozilla to render pages. As the appearance of this tool is based on the GTK theme used, it is extremely flexible and quick. Galeon also offers convenient features, such as deactivating popups with just two mouse clicks. You can download Galeon at [9].

Figure 2: Fonts too small, links broken? Modify your common preferences





Figure 3: Using Filters to tidy up

sorting somewhat tedious. Select Configuration/Filtering in the menu. Figure 4 shows you a filter setting for the mailing list. If an email message is received from incidents@, it will automatically be placed in the Incidents subfolder. Whenever you create a new rule, make sure that you apply the rule by clicking on Register, to inform Sylpheed that you really mean to implement the change. When you modify a rule, you need to confirm by clicking on Replace. You can apply fairly restrictive filters, for example defining as the sender ("From") and stipulating the option Do not accept. This will protect you from tons of spam but also means that you cannot receive mail from friends with Hotmail accounts. You will need to create another rule to avoid this and allow electronic messages from. Create the rule first, and then choose an appropriate target. Now register the rule and ensure that Sylpheed applies this rule before the generic Hotmail rule. To do so, select the rule and click on Up to move the rule up to a position before the generic rule. As you will not want to apply your filters manually, you should now select Preferences/Preferences for the current account and then Receive/Filter messages on receipt.

If you have explored the art of filtering somewhere else and want to apply some filters you have already fine tuned, you can easily do so. In the Receive area of your Common Preferences you can easily import programs such as procmail.

Figure 4: Even the toolbar is customizable in Sylpheed

You can now receive and automatically sort or delete mail, but you may occasionally want to compose a message. To do so, select the menu item Message/Compose new message and then you can start typing. Fill out the Subject and To fields as appropriate. If you have created multiple accounts, a drop-down menu appears in the line for your own data, allowing you to select an email address. The fact that you only need to type the first letter of the recipient's name and then press Tab, to have Sylpheed display a list of possible recipients, is a nice feature. This only works for addresses you have added to the address book – although adding address book entries is quite simple. Right click on an email message to display a drop-down menu allowing you to add the email sender to the address book by selecting Add to address book.

After composing an email message, you can either send the message immediately using Message/Send or Send later. If you appear to have forgotten something, Sylpheed will let you decide whether you want to go ahead or correct your mistake. If you have any questions regarding Sylpheed, or some of the options are causing you difficulty, you should take a look at the FAQ (Help/FAQ) or join the Sylpheed mailing list at [4].

GLOSSARY
Patch: Patches are applied to repair program errors or introduce improvements. The advantage is that you do not need to download the entire source code, just the patch. This is not much of an advantage for a program like Sylpheed, which has a relatively small footprint, but it can be a major advantage if you require a kernel patch. Sylpheed has its own patch page [12], where you can download the latest Sylpheed patches and a guide to applying patches.
Procmail: Procmail is a versatile program for sorting and filtering electronic mail. You can launch various programs, or play different sounds, based on criteria you define yourself, or forward email to other email accounts. The home page is located at [13].
Pspell: Portable spellchecker. A program that checks documents for typos. The dependencies and differences between GNU Aspell, Pspell, and the now obsolete Ispell are not easy to comprehend. In case of doubt, simply install the Pspell and Aspell packages your distribution provides, and apply the required languages. Additional information is available from [11].



Claws?

Sylpheed provides you with almost everything you need, so what is Sylpheed Claws? Sylpheed Claws is the developer version that contains the latest features, but may not be as stable as Sylpheed. If you really need a spellchecker, and have never patched a program, take a look at Sylpheed Claws. You can download the program from [10] or simply copy it from the subscription CD. To use the spellchecker you will need the source package for Pspell or the pspell-0.12.2-devel package or better. Sylpheed Claws includes the following enhancements:
• selective download (preview of sender and subject line on the server)
• different appearance by applying customized themes (see Figure 5)
• enhanced filtering system
• customized toolbar

And?

Sylpheed is a versatile email program that is useful for email users with a high message volume. Additionally, the


Figure 5: Sylpheed Claws with stw-themed customized icons, compared to Sylpheed Claws 0.8.1 with the mozilla look

program provides ample configuration facilities to make working with Sylpheed a pleasant experience. If you do not require KDE (KMail) or insist on an address book that reminds you of your friends' birthdays and anniversaries (Evolution), Sylpheed is a safe bet. The only drawback at present is the lack of a spellchecker; however, you can remedy this situation by applying a patch, or simply installing Sylpheed

Claws. Despite the warnings about possible crashes on the Claws home page, this version was extremely stable, and only crashed once in the author's experience, when subjected to extreme pressure during a selective download. ■

INFO
[1] Freshmeat:
[2] Kmail home page:
[3] Ximian Evolution home page: http://
[4] Sylpheed home page: http://sylpheed.
[5] German GnuPG home page: http://www.
[6] OpenSSL home page: http://www.
[7] GnuPG Keysigning Party HOWTO: http:// html
[8] GnuPG manual: gph/de/manual/
[9] Galeon home page:
[10] Sylpheed Claws download: http:// php?group_id=25528
[11] GNU Aspell:
[12] Sylpheed Patch home page: http://www.
[13] Procmail home page: http://www.

Using GnuPG
The following section contains only a brief introduction. You should refer to [7] and [8] for more details. You can type gpg --version to display the currently installed version. If this happens to be earlier than 1.0.6, you should consider updating, as the older versions are buggy. If an error message is displayed to the effect that GnuPG has not been installed, download the packages for your distribution from one of the usual servers, or compile them from source code.

The following command creates a new keypair:

gpg --gen-key

You are first prompted to specify an algorithm. Choose the default setting (DSA/ElGamal). The second prompt refers to the key length – the longer a key is, the more secure it is, but of course, operations using the key will also take longer. If you specify a key length of more than 1536 bits, GnuPG asks you if a key this size is really necessary. Key lengths of at least 2048 bits are recommended for some needs. DSA typically uses 1024 bits. Following this you are asked to supply your name, a comment, and your email address. This information is used to identify the key. You can change or complete these entries later.

Make sure that you choose an email address that you intend to keep. This will save you and your correspondents trouble with invalid and revoked keys later. Finally, you are prompted to enter a mantra that you want to use for protecting your private key. Make sure that you choose a good mantra. A good mantra is defined as:
• not too short,
• containing special characters,
• not a name and
• not easily guessed from prior knowledge of the user (such as a phone number, bank account number, name and number of children, or pets)

Use a combination of lower case, capital letters and space characters randomly to add an additional level of security. Also, you will need to be able to remember your mantra easily, as your private key is useless without it. It is also a good idea to create a revocation certificate at the same time, and to store the certificate in a safe place:

gpg --output revoke.asc --gen-revoke mykey

where mykey is either the key ID of your first keypair or part of a corresponding user ID. The revocation certificate is stored in revoke.asc, or if you leave out the --output option, the output is written to standard output.
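For scripted setups, GnuPG also supports unattended key generation driven by a parameter file ("batch mode", described in GnuPG's own documentation). A minimal sketch – the name, email address, and mantra below are placeholders:

```
%echo Generating a keypair
Key-Type: DSA
Key-Length: 1024
Subkey-Type: ELG-E
Subkey-Length: 2048
Name-Real: Jane Doe
Name-Email: jane@example.org
Expire-Date: 0
Passphrase: a-sample-mantra
%commit
```

Saved as, say, keyparams, the file is passed to GnuPG with gpg --batch --gen-key keyparams; the interactive prompts for algorithm, key length, and user ID described above are then skipped.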

Thomas Zell lives in Berlin. His interest in Unix and Linux was raised in 1998. Having tried many distributions he finally developed his own version of the WindowMaker desktop which can be found at




LaTeX Workshop


Making Up With LaTeX

LaTeX will immediately complain whenever it encounters an unknown or incorrect command in an input file. This command-line output may seem a little strange at first glance, but a modicum of background knowledge will help put you in the picture. Simple typos are the most common cause of errors, but LaTeX will not hesitate to warn you if it discovers missing control sequences or problems with line or page make-up (see Table 1).

Things are livening up: In this part of our workshop we take a closer look at error messages, troubleshooting, modifying font appearances, and inserting images. BY HEIKE JURZIK

Line Make-Up The famous Overfull \hbox appears so often – especially in longer documents – that we thought it might be useful to take a closer look at this warning. LaTeX not only left and right justifies your document perfectly, but also ensures that the placement of the words in a line is optically pleasing. This avoids the larger gaps typical for some other word processors. But LaTeX occasionally has a little trouble with the line make-up – especially if you use foreign words, special characters or words containing a large number of consonants. Listing 1 includes a text that caused an Overfull \hbox message on launching LaTeX, as shown in Listing 2.

Listing 1
\documentclass{article}
\usepackage{american}
\usepackage[T1]{fontenc}
\usepackage[latin1]{inputenc}
\begin{document}
aaaaaaaaaaaaaaaaa
bbbbbbbbbbbbbbbbbbbbbb
chloroquininesulphate
"chloroquininephosphate
eefghhhhhh" iiiiiiiiij
kklmno pqrrrrrrrrrrs
tuvvvv wwwwwwwwwwwxx yz
\end{document}



The line containing "chloroquininephosphate" would seem to be 13.91602 pt (1 pt, "point", corresponds to 0.35 mm) too long. As you can see, "chloroquininephosphate" obviously overlaps at the end of the line. In cases like this, you can often remedy the situation by using manual hyphenation: for example, you can type "chlo\-roquin\-ine\-sulphate" for the word "chloroquininesulphate". This allows LaTeX to hyphenate the word at the positions you indicated; however, this will only occur if the word is at the end of a line and really needs to be hyphenated. Unfortunately, this approach did not work for our first example. LaTeX attempted to hyphenate at the only possible position (after "chloro-"). This does not often happen. If it does happen to you, you might like to change the page format, or simply the word order. You can type "eefghhhhhh" before "chloroquininephosphate" to remedy the hyphenation problem. Incidentally, you can use a tilde character to suppress hyphenation attempts: thus you could type "e.~g." to stick the characters together with a "hard space", retaining the space character, but avoiding the end-of-line separation.
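A minimal document illustrating both mechanisms (the words are the article's own examples):

```latex
\documentclass{article}
\begin{document}
% discretionary hyphens: LaTeX may break only at the marked points,
% and only if the word actually falls at the end of a line
chlo\-roquin\-ine\-sulphate

% hard space: the tilde keeps the space but forbids a line break here
e.~g.
\end{document}
```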

Changing Font Appearances LaTeX normally uses a standard font (“Roman”) and a standard point-size

(10 pt) for document output, but you can change both for single words, text passages, or the complete document (in the preamble). To change the font size within a document, simply type a backslash followed by the desired font size. You can enclose the size command and the text passage it should refer to in curly brackets: This is a {\large large} word.

(see Figure 2). You can also apply this function to blocks of text (indicated by \begin and \end):

Listing 2
[john@black Latex]$ latex test5.txt
This is TeX, Version 3.14159 (Web2C 7.3.1)
(test5.txt
[...]
Overfull \hbox (13.91602pt too wide) in paragraph at lines 7--8
[]\T1/cmr/m/n/10 aaaaaaaaaaaaaaaaa bbbbbbbbbbbbbbbbbbbbbb chloro-quini-ne-sul-phate "chloro[...]
[...]


Figure 1: Looking at overfull hboxes in Kdvi

Figure 2: Changing the Font Size

\begin{large} This text block will now be printed in a slightly larger typeface. \end{large}

If you want to change the default font size for the whole document, you can use an entry in the preamble section. The available point sizes are 10 pt (default), 11 pt, and 12 pt. You can also use the preamble to define a new point size for a document class as a parameter in square brackets: \documentclass[12pt]{article}

By default, LaTeX offers a range of fonts that you can easily select (see Figure 3). Just like point sizes, font changes that apply to short excerpts of the text are best effected using a block with curly brackets and the corresponding command. If you want to set the name “Linux Magazine” in small capitals, you simply type {\sc Linux Magazine}. To

apply this to a whole block of text, you will again use the \begin{sc}…\end{sc} syntax. To highlight a word in a document, you might like to use the \emph{…} command (which obviously refers to the word "emphasize") – the text within the curly brackets will appear in italics, if you are using an upright font. If you are already using italics, the command will use an upright font. To complete our trip to the realms of typography, let us take a quick look at one or two metacharacters. LaTeX will not only print "normal" keyboard characters, but also special characters and even complicated formulas. However, some metacharacters, such as curly brackets for example, are used for LaTeX commands, and you need a special command to generate them.

Special Characters The following characters play a special role in LaTeX and are called special printing characters, or simply special characters.
# $ % & ~ _ ^ \ { }

Whenever you put one of these special characters into your file, you are doing something special, as described in Table 2. If you simply want the character to be printed just as any other letter, include a \ in front of the character. For example, \$ will produce $ in your output. The exception to this rule is \ itself, because \\ has its own special meaning; use $\backslash$ instead.
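The escapes can be collected in one small document:

```latex
\documentclass{article}
\begin{document}
% each special character preceded by a backslash prints literally
\# \$ \% \& \_ \{ \}

% tilde and carat take an empty argument (an accent over a "blank" letter);
% the backslash itself needs math mode
\~{} \^{} $\backslash$
\end{document}
```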

Pictures please! LaTeX is a layout program and provides for perfect text make-up. Although you can use the so-called picture environment to draw simple graphics, such as lines, arrows or circles, the commands required for creating line

Table 2: Special Characters
Character    Meaning
#            The number (hash) sign is used to define use of arguments, for example, in the \newcommand command.
$            The dollar sign is used to delineate math and displaymath environments.
%            The percent sign is used to insert comments in the input file, and to allow line breaks without generating a space.
&            The ampersand is used to separate items in the array and tabular environments.
~            The tilde generates a nonbreaking space. To create a tilde in the output, use \verb or the verbatim environment (or cheat by using \~{}, i.e., placing a tilde accent over a "blank" letter).
_            The underscore is used to create subscripts.
^            The carat (circumflex) symbol generates superscripts. To create a carat in the output use \verb or the verbatim environment.
\, {, and }  The backslash and braces are used in command definitions, for enclosing command arguments, and for delimiting scopes of declarations.
Table 1: LaTeX Error Messages and Warnings

! Undefined control sequence. […] ?
A question mark occurs at the end of the message; this is most commonly caused by mistyping a command. The question mark is preceded by a reference to the line where the error occurred – this is normally quite easily rectified. You can type [h] for additional information on the error, and [x] to stop LaTeX, which allows you to correct the error in your document.

[…] *
You probably forgot to type \end{document}. If you type \stop after the asterisk, LaTeX will probably complete the job without any further errors. You can then go on to correct the error in the document.

! LaTeX Error: File `x.tex' not found. Type X to quit or <RETURN> to proceed…
The document uses a file that LaTeX cannot find. This may occur when you insert multiple documents into a single file (we will be looking at this topic in detail in a later issue). You can use the [x] key or [Ctrl]-[C] to quit.

Overfull \hbox […] in paragraph at lines 9-11
This warning indicates that LaTeX has encountered a line make-up problem. You can often ignore this warning; however, if the text really does overshoot the margin, you might prefer to help LaTeX out with the hyphenation task.





Figure 3: Available Fonts

drawings of this type may appear somewhat cryptic at first glance. In most cases, you might prefer to use external tools to create illustrations, which you can then insert into the document. If you are lucky, your picture may already be in EPS format ("Encapsulated PostScript"). Most programs (such as The GIMP and xv) allow you to save files in this format. In other cases, you can use the command-line tool convert. The \includegraphics command from the graphicx package allows you to insert, rotate, and scale your images. You will need to load the package in the preamble of the document (\usepackage[dvips]{graphicx}); the parameter in square brackets refers to the output driver, dvips, which is needed for printing later. The image itself is inserted using the following command:

\includegraphics[height=...,width=...,angle=...]{bild.eps}

If you do not define the height and width, the image will retain its original size. You might like to check your page make-up from time to time using the xdvi program. Note that the tool may take a little longer to process and display large PostScript graphics. To accelerate this process, you can click on the View PS button, which will display the image as a small box. Click again to toggle back to the full view. The details on the image size can be given in metric units (centimeters "cm", millimeters "mm", and so on) or in typographic units (point "pt", pica point "pc", etc.); the angle parameter defines the angle through which the image is to be rotated (see Figure 4).

Figure 4: Inserting PostScript Images

\begin{figure}[t]
\includegraphics[height=4cm]{pingus.eps}
\caption{Penguin with set square}
\end{figure}

Picture, Picture, Where Will You Wander…?

LaTeX allows the images or tables you have inserted into a document to roam through the document. You simply define the criteria for positioning the image, and let LaTeX take care of the details. You can also position images (or tables) within the figure environment and add a caption. You can even use combinations of positioning parameters; that is, LaTeX will attempt to set the specified objects in the defined order. If you type [tbp] in square brackets, LaTeX will first attempt to display the object at the top of the page. If there is no room to do so, it will then try the bottom of the page, before finally resorting to a new page.

In the next issue we will be taking a closer look at the maths and formula modes, and supplying tips for typesetting scientific documents. ■
Listing 3:

\documentclass[12pt]{article}
\usepackage[T1]{fontenc}
\usepackage[latin1]{inputenc}
\usepackage[dvips]{graphicx}
...
\begin{document}
...
\end{document}

Table 3: "Roaming" Objects – Parameters
[h] Meaning "here": The image is placed at the position where the figure environment occurs.
[t] Meaning "top": The object is placed at the top of the current page, provided the body text still fits. If not, the image will be placed at the top of the next page.
[b] Meaning "bottom": The image will be placed at the bottom of the current page. The space preceding the object will be filled with text; if the object does not fit on the current page, it will be moved to the bottom margin of the next page.
[p] Meaning "page of floats": Any images or tables will be collected and will occupy a page of their own that contains only images/tables.

Various positions in the document can be used as parameters (see Table 3). The syntax shown in Figure 4 inserts the penguin image at the start of a page, within a multi-page document, and also adds a caption at the same time.



Charly’s column


Stress Tools: The Sysadmin’s Daily Chores

That’s What I Call Stress! Production servers that collapse under the burden of user activity are not exactly the kind of visiting card an admin would want to hand out. So it makes sense to give your system a really tough load to cope with before you roll it out in your production environment. Is your server looking for trouble, mate? Look no further! BY CHARLY KÜHNAST

When configuring servers I know in advance that some of them will be subject to extreme loads. In this case, I like to find out whether the server will be able to handle the task in hand, or if one or two nuts and bolts need tightening up. Let's get in some target practice on a web server first. The range of tools I could use for this task is enormous, and it is admittedly difficult to see the wood for the trees. But hammerhead [1] seems to provide a fair compromise between performance and ease of configuration. The tool, a 120 KB tarball, is easily customized to suit your current environment. You can use the config file /etc/hammerhead/hh.conf to specify the number of threads, timeout thresholds and so on. And hammerhead can use IP aliasing to simulate a number of users on various machines. After letting hammerhead walk roughshod across your web server for a few seconds, you will be presented with a short report showing the ratio of successful to unsuccessful access attempts and the average response times, which indicate any shortcomings. Hammerhead also offers an option for testing SSL web servers, but this feature is relatively new, and I for one would not like to vouch for the results at the present time. So let us leave this job to a more capable and dedicated tool, such as the SSL Stress Client [2].

POP3 Under Fire

Servers that need to open and close hordes of connections and handle logons at the same time, such as POP3 servers, are likely candidates for load testing. Smpop [3], a tool with an extremely small 5 KB footprint, is predestined for this task. As might be expected from such a small package, the tool is not exactly convenient to use, but it does exactly what it is supposed to: it reads username/password pairs from a file and attempts a configurable number of simultaneous logins on POP3 servers. If you set the threshold high enough, you can make even the most powerful mail manglers skip a beat.

But there is no need to restrict your attack to a single service. If I want to know if a specific application will continue to perform smoothly, or if the CPU is in danger of melting down, or if the hard disks are taking a thrashing, it's "seconds out and round one" for stress [4]. Stress is capable of precisely simulating specific loads on a computer. Typing

stress --loadavg 20

will set a load of 20 (plus or minus ten per cent). And

stress --hogdisk 10000m test

tells stress to write 10 GB of trash to the test file – that should keep your hard disks occupied. You might like to spend a few minutes experimenting to approach the load level you would expect in a production environment.

A word of warning: Any tools that allow you to perform effective load tests are also suitable for DoS attacks. If you are tempted to try out these stress tools on other people's computers, you can expect cars with flashing blue lights to pull up outside your door in no time at all – and the "admins" driving them might not be too friendly! ■

INFO
[1] Hammerhead: http://hammerhead.
[2] SSL Stress Client: http://sslclient.
[3] Smpop:
[4] Stress: projects/stress/

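The smpop approach described above (read username/password pairs from a file, then fire a configurable number of simultaneous POP3 logins) can be sketched in a few lines of Python. This is an illustration of the technique, not smpop itself; the host name and credentials file are placeholders. As with the tools above, point it only at your own servers.

```python
# Sketch of an smpop-style POP3 login flood. Hypothetical host and
# credentials file -- use only against servers you administer.
import poplib
import threading

def parse_credentials(text):
    """Parse 'user:password' pairs, one per line; skip blanks and comments."""
    pairs = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        user, _, password = line.partition(":")
        pairs.append((user, password))
    return pairs

def pop3_login(host, user, password, timeout=10):
    """Attempt a single POP3 login; return True on success, False otherwise."""
    try:
        conn = poplib.POP3(host, timeout=timeout)
        conn.user(user)
        conn.pass_(password)
        conn.quit()
        return True
    except Exception:
        return False

def flood(host, pairs, concurrency=50):
    """Start `concurrency` simultaneous logins, cycling through the pairs."""
    threads = []
    for i in range(concurrency):
        user, password = pairs[i % len(pairs)]
        t = threading.Thread(target=pop3_login, args=(host, user, password))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()

if __name__ == "__main__":
    creds = parse_credentials(open("users.txt").read())
    flood("mail.example.com", creds, concurrency=200)
```

Raising the concurrency value plays the same role as smpop's login threshold: at some point the server's accept queue or authentication backend becomes the bottleneck.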
Charly Kühnast is a Unix System Manager at a public datacenter in Moers, near Germany’s famous River Rhine. His tasks include ensuring firewall security and availability and taking care of the DMZ (demilitarized zone). Although Charly started out on IBM mainframes, he has been working predominantly with Linux since 1995.




OpenSSH: Part III


You often need to look at the details to discover how versatile SSH really is. Using public keys to automatically run commands is just one example. You can stipulate a command that SSH will run after logging on in order to restrict user accounts, simplify administrative tasks or allow logging on through a firewall. Using this in combination with keychain can simplify your cronjobs and make them more secure. This article also contains a case study on SSH for webmasters and shows how you can use SSH to protect the notoriously insecure NFS. First, let's look at a few tips for the SSH client. There are three ways of configuring the client: system defaults are stored in /etc/ssh/ssh_config, user configurations in ~/.ssh/config, and you can also use command-line options. If these three resources have different settings, the command line has the highest priority, followed by ~/.ssh/config, and then /etc/ssh/ssh_config. In contrast to the SSH server, sshd, the user can completely control how the client program behaves. Users can override any admin settings in the system file /etc/ssh/ssh_config with SSH commands of their own. Only the server program can enforce limitations.

OpenSSH from the Admin’s Perspective – Part III

In the Know SSH is not only secure, but also convenient – provided you get the configuration right. Tools such as ssh-copy-id and keychain are useful for managing private SSH keys. Adding a command to your key will allow you to handle out of the ordinary tasks. And: SSH can tunnel NFS too. BY ANDREW JONES

Configuring the SSH Client The ~/.ssh/config file is used to configure client options which would otherwise need to be passed to SSH as command-line flags. Each line in the file starts with a keyword followed by an applicable argument. Parameters can apply to connections to any target host, or refer to a specific host. Any instructions following a Host entry apply only to the specified target host. The CONFIGURATION FILES section in the ssh manpage gives the valid entries. The example in Listing 1 shows a few global and host-specific settings. SSH evaluates the entries in this file sequentially, and only uses the first match for each individual parameter. This is why defaults are stored at the end of the file under Host *. Our example shows the user enabling agent forwarding and data compression, and using the quick Blowfish algorithm for encryption purposes. Settings that refer to specific targets are nearer the start of



the file. Entries that follow Host must match the name the user supplies in the ssh command. The configuration also allows the * and ? wildcards. For security reasons, DNS resolution does not occur here. Version one of the protocol is used for all the hosts in the domain, and both the identity file and the user name have been set.
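The first-match-wins evaluation described above can be modeled in a short toy sketch (this is not OpenSSH code; the host names and options are borrowed from Listing 1): blocks are scanned in file order, and for each option only the first matching block counts, which is why the Host * defaults belong at the end.

```python
# Toy model of ~/.ssh/config evaluation: Host blocks are scanned in
# order, and for each option only the FIRST match is used.
import fnmatch

def lookup(blocks, hostname):
    """Resolve options for hostname from an ordered list of (pattern, opts)."""
    options = {}
    for pattern, opts in blocks:
        if fnmatch.fnmatch(hostname, pattern):
            for key, value in opts.items():
                options.setdefault(key, value)  # earlier blocks win
    return options

blocks = [
    ("lux", {"User": "root", "Port": "222", "ForwardAgent": "no"}),
    ("*",   {"User": "kh", "Compression": "yes", "ForwardAgent": "yes"}),
]
print(lookup(blocks, "lux"))
# {'User': 'root', 'Port': '222', 'ForwardAgent': 'no', 'Compression': 'yes'}
```

Note that lux keeps ForwardAgent no from its own block even though the Host * defaults enable agent forwarding, exactly as the article describes.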

Shortcut to the Target In the buug and lux entries, the HostName keyword defines the actual target as an FQDN (fully qualified domain name) or as an IP address. This saves additional typing and allows the user to simply type ssh lux to open a connection to the host defined there. This type of address resolution is more secure than DNS, as DNS entries can be manipulated, whereas the configuration file is managed by the user.

The parameters in this file also apply to scp and sftp, however, users can customize them by adding commandline parameters for individual connections. A fully customized client configuration in ~/.ssh/config, combined with an ssh-agent can save you a lot of unnecessary typing.

Copying Keys between Computers Part 1 of our series [1] explained how to use scp to copy keys to other computers. The same steps can be scripted using ssh-copy-id from the OpenSSH package:

ssh-copy-id -i ~/keynew/id_dsa webmaster@vaio

This command (see also Figure 1) transfers the public key to the home directory for webmaster on


host vaio, creates the directory .ssh (if not already present), applies appropriate rights, and stores the public key in .ssh/authorized_keys, creating the file if necessary. If the file already exists, the command simply adds the new key. Figure 2 shows a logon process that uses the recently transferred key to authenticate. One precondition for ssh-copy-id is that PasswordAuthentication yes has been enabled for sshd on the remote computer before transferring the key.
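On the remote side, ssh-copy-id essentially appends the public key to ~/.ssh/authorized_keys unless an identical line is already present. A minimal sketch of that append-once logic (an illustration of the behavior, not the actual shell script):

```python
# Sketch of ssh-copy-id's remote-side behavior: append the public key
# to authorized_keys exactly once, keeping existing keys intact.
def add_authorized_key(existing, pubkey):
    """Return updated authorized_keys content with pubkey present once."""
    lines = [l for l in existing.splitlines() if l.strip()]
    if pubkey.strip() not in lines:
        lines.append(pubkey.strip())
    return "\n".join(lines) + "\n"
```

Running it twice with the same key leaves the file unchanged, which is why repeated ssh-copy-id invocations are harmless.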

Keys On a Chain If you need to use SSH keys in various environments (character based consoles and X11), you will probably appreciate a useful tool called keychain [3]. Keychain is an


intelligent script extension for the SSH commands ssh-agent and ssh-add. Traditional key management with ssh-agent and ssh-add for X11 logons has already been integrated by many distributions (Debian, Mandrake, …).

Figure 1: The ssh-copy-id command transfers public SSH keys to other computers, placing them in ~/.ssh/authorized_keys

Figure 2: The test key created and transferred in Figure 1 works immediately. The key was added to the list of authorized keys by the ssh-copy-id tool




SuSE users simply enable usessh="yes" in the X11 session file ~/.xsession to make the SSH agent available via $SSH_AUTH_SOCK in every X11 terminal. However, this leaves you without a key manager for character based consoles. As they are not child processes of the X11 session, the environment variables are not accessible to the Linux console, and thus SSH cannot access the SSH agent. You would normally need to re-launch ssh-agent on each console and use ssh-add to import your key. This is where keychain can help: You can use the tool to allow access to keys both on X11 and on any character based console after importing your keys. After you log off, your private keys will still be accessible (keychain uses nohup to launch the SSH agent). Users will not need to retype their passphrase each time they log on, but only after rebooting or if they remove their keys from the SSH agent.

Insecurity Potential All this convenience can be a source of danger: private keys remain in memory and can thus be exploited by root. Of course, a superuser could equally exploit a scenario where an SSH agent is put to normal use, although the time slot would be more restricted – logging out terminates the agent and deletes the decrypted secret keys. In other words, keychain should be used only on trusted hosts. This particularly applies to root users – ensure that you have taken precautions to secure your host. There should be no need to re-emphasize the fact that your private key should always be protected by a passphrase. Key authenticated cronjobs are an area where keychain can really shine.

Much Ado About a Vulnerability The 24th to 26th June 2002 saw security conscious networkers and system administrators in turmoil. It all started on Monday night with a Bugtraq report from Theo de Raadt [12]: Upcoming OpenSSH vulnerability. In contrast to earlier reports Theo pointed out a remote vulnerability in OpenSSH without actually referring to the bug or the affected platforms. The only real fact in the report was an urgent recommendation to update to version 3.3 immediately.

Enabling Privilege Separation This version implements a feature called privilege separation, designed to protect systems from imminent exploits without actually removing the bug. The privilege separation code, which was authored by Niels Provos [13], requires only a small portion (2500 lines) of sshd code (which totals some 27000 lines) to be run with root privileges, and places the rest in a chroot jail. This means that a compromise will not automatically lead to root privileges. Most Linux distributors made



SSH package updates available for downloading on 25th June (version 3.3p1 with privilege separation enabled). At this point the bug was still unknown. On 26th June ISS disclosed further details publicly [14], indicating a bug in the challenge-response authentication mechanism that affects all platforms, and an additional bug in OpenBSD and FreeBSD that affected the SKEY and BSD_AUTH authentication. Thus it became apparent that sshd on Linux was not actually vulnerable if ChallengeResponseAuthentication no had been set in sshd_config. Neither SKEY nor BSD_AUTH are included in the binaries normally supplied by Linux distributions (compile options). The vague insinuations in the first reports and a rather strange disclosure policy led to controversial debate. A patched version became available on various FTP servers around midday on 26th June. The security advisory supplied by the developers of OpenSSH was revised several times – the fourth edition (txt/preauth.adv) includes a detailed response.

Without keychain, a cronjob can only use private keys that are not passphrase protected. If you use the cronjob's user ID to launch a keychain session and then import the key, the cronjob will be able to use that key, which remains in the SSH agent's cache [4].

Automatically Launching Keychain The init file for your login shell is the right place to launch keychain – this would be ~/.bash_profile if you use bash. The following lines in your .bash_profile launch keychain and source the agent file for a normal logon:

/usr/bin/keychain ~/.ssh/id_dsa
. ~/.ssh-agent-${HOSTNAME}

For accounts that run unattended cronjobs, you might like to use ~/.bash_profile to protect keychain from intruders. If an attacker attempts to exploit the user account and launches a logon shell, the secret is removed from the keychain:

/usr/bin/keychain --clear ~/.ssh/id_dsa
. ~/.ssh-agent-${HOSTNAME}

The command adds the key again immediately, however, the passphrase –

Listing 1: SSH Client Configuration

# ~/.ssh/config
Host *
Protocol 1
IdentityFile ~/.ssh/identity
User kh
Host buug
HostName
User kh
Host lux
HostName
ForwardAgent no
User root
Port 222
#
Host *
ForwardAgent yes
# ForwardX11 yes
StrictHostKeyChecking yes
Compression yes
Ciphers blowfish-cbc


which the attacker will hopefully not possess – is required. Admin users should be prepared to sacrifice convenience for the sake of security. Keychain based cronjobs are far more secure than jobs using unencrypted private keys. They do require some attention after rebooting: The admin user will need to log on using the cronjob account, and type the passphrase.

Using Keys to Force Commands Forced commands are a fairly obscure SSH topic, although this feature of SSH keys is particularly interesting for administrators. System administrators can use a keypair to launch pre-defined commands at any point of the network without having to open the corresponding account. In a public key infrastructure, public keys are stored in ~/.ssh/authorizedU _keys in the home directory of the server you want to log on to. Each line of this file contains a single public key. Each key entry for a protocol version 2 key comprises several hundred characters and four distinct areas: • Options • Key type • Base64 encoded key • Comment The option field is empty for newly created keys. To use the option field, add the required options at the start of the line and comma separate them. Spaces are not permitted unless they are enclosed in quotes. From the administrative point of view, the from and command options are probably the most useful. from="hostU


Figure 4: NFS over SSH: From the client’s viewpoint the NFS directory appears to have been mounted from localhost. In reality, the SSH tunnel is forwarding the connection to a genuine NFS server

from="host_pattern" will allow you to define the hosts from which a user can log on with the current key. To deny a host, simply insert an exclamation mark, !, in front of the host identifier. The sshd manpage, in the AUTHORIZED_KEYS FILE FORMAT section, quotes the following:

command="dump /home",no-pty,no-port-forwarding 1024 33 23...2323

No matter what command a client using its private key to authenticate attempts to launch, the server will only run the supplied command dump /home. This will allow the key to perform a backup, but not permit an interactive login. The additional parameters no-pty,no-port-forwarding prevent the command from running in a pseudoterminal (PTY) and stop the client from redirecting TCP ports. Of course, you can attach additional SSH commands to the key:

command="ssh -A -2 -l kh tux" ssh-dss AAAAB3NzaC1k...KElw== kh@vaio

This mechanism is very practical if a computer (tux in our example) cannot be accessed directly, and you need to use

another computer as a proxy. This is often the case when logging on to a host behind a firewall: If the public key shown above has been installed on the proxy, sshd will immediately re-direct the client to the final target. The same public key must also be stored on the target, of course. The matching private keys are only on the client. This procedure allows you to access computers on a LAN despite your firewall, and without opening an inward bound port. Key-based forced commands provide plenty of potential for regularly recurring jobs, especially for unattended cronjobs. The whole idea is to restrict the type of access a cronjob has to a specific task. Of course, you can script any commands you need to run.

Limitations of Forced Commands Depending on the command you launch, you might find it difficult to restrict the type of access a forced command permits. Imagine you wanted to permit a user to use vi to open a file, but not to run any other commands – you might be in for an unpleasant surprise, as vi would allow the user to escape to a shell and run arbitrary commands. The same principle applies to other programs.

Listing 2: NFS Exports for SSH-NFS

# /etc/exports:
# the access control list for filesystems which
# may be exported to NFS clients. See exports(5).
/export/home,insecure,root_squash)

Listing 3: /etc/fstab for SSH-NFS

# /etc/fstab: static file system information.
# Test with NFS --> SSH
localhost:/export/home /mnt/sshmnt nfs tcp,intr,bg,port=2002,mountport=2030,user,noauto 0 0

Figure 3: The patched kernel provides an additional NFS server configuration option: NFS over TCP. This allows you to tunnel NFS through SSH





Case Study: SSH on the Web Many security conscious admins will be familiar with this situation: the webmaster needs a quick and dirty FTP and telnet server for uploads and to allow work to continue on the webspace. The webmaster wants to be able to restart the web server at any time, and to view the logfiles. All of these points are important to the enterprise and need to be implemented a.s.a.p., if not sooner. In this scenario webmasters are often assigned root privileges, although they might not even be on the payroll. By the time you discover uninvited guests fooling around on your network, or that your server configuration has gone haywire, it might already be too late. A well thought out security policy should be geared to resolve conflicts, but some enterprises may not have implemented a policy of this kind.

SSH helps Admins and Webmasters Help is near in the form of SSH and sudo, and the additional effort is minimal. Webmasters basically need to be able to write, delete and edit files in the DocumentRoot, occasionally customize the configuration of the web server, stop and restart the web server, and analyze error messages in the web server logfile. They do not necessarily need to log on to do so. The following steps are required: • Install OpenSSH on the web host and configure OpenSSH to allow only key authenticated logins, preferably using only version 2 of the protocol.

from combined with command provides another useful option:

from="",command="/etc/firewall/iptrans" ssh-rsa AAAAB3N1y...eKGzw== IP_transfer

If the connection originates from the specified host and the private key is available, the SSH daemon launches the /etc/firewall/iptrans script. However, this



• Create a normal user account and a home directory for webmaster on the web host. • Create the Apache directories ServerRoot, DocumentRoot, and the logfiles in /home/webmaster/. • Assign write privileges for httpd .conf and the web documents to webmaster. • Use sudo to allow webmaster to start and stop Apache. • Use the webmaster’s SSH client program to create a keypair and store the public key on the webhost under /home/webmaster/.ssh/ authorized_keys. To apply additional restrictions to the webmaster account, for example where the account will be used by a subcontractor, you can additionally perform the following three steps: • Install the restrictive nosh (or lsh) shell [5], and assign it as the login shell for the webmaster account. • Add this shell to /etc/shells as a login shell. • Allow the required binaries and paths for the webmaster in the configuration of the restrictive shell. After these steps have been completed, the webmaster can use scp (or a Windows or Mac client for SSH) to store web documents in DocumentRoot and log on to the web host via SSH. If the admin user prefers, the restrictive shell can be used to deny the webmaster access to any other areas of the system, restricting access to the home directory only. The admin can specifically permit the webmaster access to any required programs,

connection will only work if it originates from the address given in the from option. SSH only runs the key command, and ignores any commands stipulated by the SSH client. However, you could allow the script to evaluate the requested commands using the $SSH_ORIGINAL_COMMAND environment variable, and log them if appropriate. The SSH server option PermitRootLogin forced-commands-only is also extremely versatile – it would seem to be tailor-made for some administrative tasks.
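A forced-command wrapper that evaluates $SSH_ORIGINAL_COMMAND might look like the following minimal Python sketch. The wrapper path and the whitelist entries are invented for illustration; the point is the pattern: install the script via command="..." on the key, and let it decide what the client may actually run.

```python
# Hypothetical forced-command wrapper, installed on a key via
# command="/usr/local/bin/sshwrap" (path is an example). It inspects
# SSH_ORIGINAL_COMMAND and only runs whitelisted commands.
import os
import subprocess
import sys

# Map the first word of the client's request to a fixed argv (example entries).
ALLOWED = {
    "dump": ["dump", "/home"],
    "df": ["df", "-h"],
}

def choose(original):
    """Return the whitelisted argv for the client's request, or None."""
    if not original or not original.split():
        return None
    return ALLOWED.get(original.split()[0])

if __name__ == "__main__":
    argv = choose(os.environ.get("SSH_ORIGINAL_COMMAND"))
    if argv is None:
        sys.stderr.write("command not permitted\n")
        sys.exit(1)
    sys.exit(subprocess.call(argv))
```

Note that the client's arguments are deliberately discarded: "dump /var" would still run dump /home, which keeps the key restricted to its single task.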

excluding any others. The webmaster will still be able to start and stop Apache using sudo and appropriate configuration options. The security gains with this configuration are considerable: a web server without an FTP server and the accompanying security problems, a network without clear text passwords and a consistent web host configuration for the administrator. These results can be achieved without even using a restrictive shell. Extremely paranoid administrators might prefer a chroot environment, but the configuration effort involved is much greater.

SSH for Windows and Macs If you intend to use SSH exclusively, you will need to ensure that any client operating systems used in your environment can log on. OpenSSH is available for any Unix derivative, but Microsoft clients will need something like Putty [6] (MIT license, similar to BSD). Putty can handle version 2 of the protocol, and the current snapshot version can even read private keys created with OpenSSH. Putty also provides the file transfer tools pscp and psftp, as well as a neat GUI tool called iXplorer [7], which is based on pscp. Mac clients can use MacSSH [8], which is available under the GPL and implements version 2 of the SSH protocol. It will run in the Mac Classic environment of Apple hardware using MacOS X. The developer does not plan to port MacSSH to Darwin (Mac OS X). Additional clients are available on the OpenSSH home page [9].

Secure NFS Tunneling The security of NFS and the other usual suspects, such as telnet or POP, is questionable to say the least. Of course, SSH can't change the original security model, where the server trusts the client, but it can at least secure the dialog and make sure that the client is required to authenticate. Large NFS networks that attach to the Internet might be better off using an encrypted VPN such as FreeS/WAN or CIPE, but individual clients will be fine with SSH. The first issue here is the fact that SSH can only use the TCP protocol, whereas NFS traditionally only uses UDP datagrams. That is only true of older NFS versions on Linux: newer userspace NFS servers, and even the more recent kernel based NFS servers, can also provide NFS over TCP [10]. Userspace NFS does not require any additional attention, whereas kernel based NFS requires a patch [11]. The patch adds an additional kernel configuration option (see Figure 3) in the Network File Systems section. This feature is currently at the experimental stage of development.

NFS over TCP Our test environment was Debian 3.0 with a 2.4.18 kernel including the NFS TCP patch, which we compiled ourselves, allowing alternative use of the userspace and kernel NFS features. There was no noticeable difference in either operation or management. To tunnel NFS, SSH redirects two local TCP ports to the TCP ports of nfsd and mountd on the NFS server [2]. The NFS client does not require a portmapper in TCP mode, and the UDP ports on the NFS server are irrelevant. External access is only required for the SSH server port, allowing you to block any other ports on your firewall. The client uses two ports on localhost for local forwarding. The SSH tunnel forwards these ports to the server, where it connects to the local target ports (see Figure 4). The client and the server must respect this arrangement; for example, the NFS server will need to export shared directories to its own IP in /etc/exports (Listing 2).

Exporting to Your Own IP The NFS client mounts the directories from localhost, adding the tcp, port and mountport arguments to the mount command (Listing 3). The actual redirection requires you to know the TCP ports where nfsd and mountd are listening on the NFS server:

rpcinfo -p NFS-Server | grep tcp

SSH then needs to redirect two free local ports on the NFS client host to the server ports, such as:

ssh -L LocalNFS:NFS-Server:RemoteNFS \
    -L LocalMount:NFS-Server:RemoteMount \
    -c blowfish User@NFS-Server

The client mounts the directory exported by the NFS server from localhost (see Figure 4), and can either use /etc/fstab (Listing 3) or a manual mount command to do so, supplying the redirected TCP ports as arguments:

mount -t nfs \
  -o tcp,OtherOptions,port=LocalNFS,mountport=LocalMount \
  localhost:/Export/Servers /Mountpoint

You can use a shell script to perform the required steps (port discovery, redirection, and NFS mount) automatically. If you want the client computer to mount the NFS directories immediately on booting, it is important to run the init scripts that mount local partitions before running the NFS scripts. Doing so ensures that the mountpoint, the SSH key and the ssh program itself will be available. Forced commands (see above) are useful here, too: You should use an extra keypair for NFS over SSH, add a safe command (such as /bin/sleep) to the public key and restrict the source IP. If someone does manage to steal the private key, at least they will not have full access to the server host. Data throughput in our test was similar to that provided by UDP, despite TCP overheads and encryption/decryption. However, the CPU load on the NFS server was about 30 per cent higher for the secure link. ■

Figure 5: The mount command on the NFS client appears to reference a directory on the local machine. The SSH tunnel, which redirects the nfsd and mountd ports to the client, ensures that both ends meet (the diagram shows the NFS client running mount -t nfs -o tcp,port=2002,mountport=2030 localhost:..., the SSH tunnel established with ssh -L 2002:... -L 2030:... User@..., and the NFS server with mountd on port 2030 and nfsd on port 2049)

Andrew Jones is a contractor to the Linux Information Systems AG in Berlin. He has been using Open Source Software for many years, and spends most of his scarce leisure resources looking into Linux and related Open Source projects.

INFO
[1] Linux Magazine, Issue 24, p50
[2] Linux Magazine, Issue 25, p52
[3] Keychain: projects/keychain.html
[4] Daniel Robbins, the author of keychain, describes his program in two articles: library/l-keyc2/ and com/developerworks/library/l-keyc3/
[5] nosh: nosh_1.0-9_i386.deb
[6] Putty: ~sgtatham/putty/
[7] Windows GUI for file transfer over SSH (iXplorer):
[8] MacSSH:
[9]
[10]
[11]
[12] 278755
[13] Privilege Separation:

[14] ISS Advisory: http://online.securityfocus.com/archive/1/278818
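The tunnel set-up described in the NFS section above can be captured in a small helper that assembles the ssh command line from the discovered ports. This is a hypothetical convenience function, not part of any tool mentioned in the article; server name, user and port numbers are placeholders you would fill from rpcinfo output.

```python
# Hedged helper: build the ssh argv that forwards two local ports to
# nfsd and mountd on the NFS server (ports as discovered via rpcinfo).
def nfs_tunnel_cmd(server, user, local_nfs, local_mount,
                   remote_nfs=2049, remote_mount=2030):
    """Return the ssh command as an argv list, ready for subprocess."""
    return [
        "ssh",
        "-L", f"{local_nfs}:{server}:{remote_nfs}",
        "-L", f"{local_mount}:{server}:{remote_mount}",
        "-c", "blowfish",
        f"{user}@{server}",
    ]
```

Returning an argv list rather than a shell string avoids quoting problems if the helper is later driven by a script that also performs the port discovery and the mount.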





Configuring Postfix to Prevent Exploitation

Self Defense

Spam, spam, spam – the bane of most email inboxes. Whether you are inundated by porn ads, ominous looking gaming offers, premium rate numbers, or purportedly lucrative investment opportunities abroad, the perpetrator normally remains hidden. Attempts to follow pesky bulk mail back to its source often lead to a dead end, or the tracks lead to an innocent mail server, but no further. The problem with this server is that it has been configured as an open relay: it accepts mail from anywhere and relays it to the target mail server (or onwards to an intermediate server). Although this does not sound particularly evil, and was even a desirable feature during the infancy of the Internet, it makes it easy for spammers to conceal their true identity. At the same time the abundance of open relays ensures that spammers do not need to invest in technology on a scale capable of transmitting thousands or even millions of email messages, or even pay for the traffic this creates. Instead they simply transfer their junk mail, and a long list of targets, to a



Wouldn't it be nice to have an answer to spamming? At least it is no big deal to configure your mail server so that it cannot be misused as a relay station by spammers. BY PEER HEINLEIN

third-party open relay. A mail server exploited in this way then happily spends hours, or even days and weeks, transmitting gigabytes of spam to the list of targets. The spammer incurs no transmission traffic worth mentioning, and thus virtually no costs. And the risks are low, provided the spammer is smart enough not to leave tracks.

Stop the Accomplices!

The real solution to the spam issue cannot be to install filters for the victims. That would remove the symptoms but ignore the cause. It makes more sense to start with the mail servers and first ensure that they are incapable of acting as relays, and thus as the spammers' accomplices. The reasoning behind this is quite simple:
• A normal mail server accepts email from any point on the Internet, provided it is addressed to a mail address that the server is responsible for.
• On the other hand, the server's own users are allowed to contact the mail server when they want to transmit mail

across the Internet, and the mail server will deliver outgoing mail world-wide.
• However, it can never be the mail server's job to accept mail from anywhere and anyone, and to deliver it to an arbitrary target. The mail server is not responsible for mail from arbitrary senders destined for arbitrary targets.
Unfortunately, not all Mail Transport Agents (MTAs) reflect this in their basic configuration – as the administrator you may need to check this, and possibly modify your mail server's settings manually, before installing it onto the live network.
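The distinction can be spot-checked by hand with a plain SMTP dialogue. In the following sketch (host names and addresses are made up), a correctly configured server accepts mail for its own users but rejects a relay attempt from an outside client to an outside recipient:

```
$ telnet mail.example.com 25
220 mail.example.com ESMTP Postfix
HELO test.example.org
250 mail.example.com
MAIL FROM:<>
250 Ok
RCPT TO:<>
554 <>: Relay access denied
```

The 554 response to the RCPT command is the behaviour you want to see; a server that answers 250 here is acting as an open relay.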

Alternative Postfix

Even discovering how your mail server is set up might entail a detailed investigation of its configuration. That is one of the reasons why system administrators are turning to Postfix [1], an increasingly popular MTA that is simple to set up while remaining flexible and secure. Although we will be using Postfix to demonstrate the configuration steps required to set up a secure mail server, users of other programs, such as Sendmail or Qmail, should be able to derive the configuration steps for their individual programs from the steps shown here.

Step one restricts the server's capability to receive mail to the two cases mentioned previously, and thus closes a potentially open relay. Following this, we will be looking into exceptions for specific accounts or authentication mechanisms that may be in place. But an open relay should never be allowed onto the network. In other words, before you even start setting up the mail server, you must discover the IP addresses that belong to your own network. The server will be allowed to relay mail to the Internet only for these addresses. They will usually include the IP address range of your LAN, or the IP address ranges of any Internet access points if you are a provider. These addresses must now be added to the mynetworks parameter in Postfix's main configuration file, /etc/postfix/main.cf.

Alternatively, Postfix permits more flexible use of the mynetworks_style parameter to define the addresses from which relaying is permitted; the server then uses its own IP address to discover the address ranges. This allows for a flexible configuration that is easily ported to other servers:


mynetworks_style = class

class corresponds to the class A/B/C network in which the server resides. You can also use the subnet keyword (which corresponds to your server's subnet, and is normally your best bet), or the host keyword for your server's IP address alone.
These parameters have no immediate effect on Postfix. They become significant later, though, when the server needs to decide whether the MTA is allowed to accept an email message or required to reject it. The server makes this decision when the client supplies the target address, as the server refers to this address to decide whether or not the message is addressed to one of its own users. If so, the MTA will accept the message independently of its origin.
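As a minimal sketch, the relevant main.cf lines for a server on a typical LAN might read as follows; the address ranges are illustrative and must be replaced with your own:

```
# either: relay only for the server's own subnet
mynetworks_style = subnet

# or: list the permitted networks explicitly
# (setting mynetworks overrides mynetworks_style)
mynetworks =, 192.168.1.0/24
```

Listing the networks explicitly is the more portable choice if the server's own interface configuration may change.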

Restrictive

The smtpd_recipient_restrictions keyword is used to apply conditions and policies. The basic configuration is as follows:

smtpd_recipient_restrictions =
    permit_mynetworks,
    check_relay_domains,
    reject

The keywords permit_mynetworks and check_relay_domains allow Postfix to

Figure 2: MD5 encryption is the preferred method for transferring passwords securely

accept email messages either when they are addressed to a local user (incoming), or when they originate from a known IP address, that is, from a client on the local network (outgoing). If neither case applies, Postfix will reject the message. So far, so good – our server is secure and relay-proof. However, practical applications are normally not that simple, and may require your server to relay email messages for your own users even though their origin lies outside your IP address range (a home office, for example). It looks like we will need an authentication mechanism, allowing the server to perform selective relaying for users who can prove their identity to the server.

Mail Addresses Do Not Prove Identity

Figure 1: Two software configuration settings allow KMail to use SMTP-Auth for logging on

Using the sender’s email address to authenticate your users is definitely not a good idea. Email addresses can be chosen arbitrarily and are easily spoofed by spammers. Some spammers deliberately use an email address belonging to the mail server they are exploiting, to persuade the mail server to relay their messages. As simple as it might seem, this method was often successful: Up to a few weeks ago, a major domain hoster in Europe allowed arbitrary mail relays provided the source address belonged (or was spoofed to appear to belong) to





one of their customers' domains. And this despite the fact that clean and secure solutions to the authentication issue abound.
If it is not feasible to identify clients directly using the IP address assigned to them, or by cryptographic methods, you can still resort to password protection. In contrast to the POP3 or IMAP protocols, which are responsible for retrieving email messages from mailboxes, the SMTP protocol originally did not provide for proof of identity based on user names or passwords. This weakness was resolved later, and all modern MTAs and mail clients support SMTP authentication (SMTP-Auth). If a client program can correctly log on to the server while presenting mail for processing, the client will usually be considered trustworthy.
The Cyrus SASL package [2], [3] can help Postfix out. You can use the package to set up a small login database, /etc/sasldb, that Postfix will use for authentication. The saslpasswd tool is used to add users to the database, and sasldblistusers lists the current entries (see Listing 1). Cyrus SASL can manage multiple host names and domains in the database, allowing user accounts with the same name but different domains to coexist; Cyrus SASL uses the concept of realms for this. If you do not use the -u domainname parameter to specify a realm, saslpasswd will assume the host name (mailserver in this case). For Postfix, enter this name as smtpd_sasl_local_domain – see the "Critical SASL Parameters …" boxout.
To allow Postfix to relay messages from users authenticated by SMTP-Auth, the admin user now needs to add permit_sasl_authenticated to the smtpd_recipient_restrictions list:

Critical SASL Parameters for Postfix

Use the following configuration parameters to teach Postfix SASL (Simple Authentication and Security Layer):

smtpd_sasl_auth_enable = yes
This parameter enables (yes) or disables (no) SMTP-Auth.

smtpd_sasl_security_options = noanonymous, noplaintext
noanonymous prevents anonymous logins (the whole effort would be senseless otherwise). noplaintext prevents clients from transmitting SMTP authentication passwords in the clear, as the PLAIN and LOGIN authentication methods do. This forces the client to apply an encryption algorithm to encode its password and prevent it from being snarfed. This makes sense from a security point of view: users are now forced to use a secure configuration (such as CRAM-MD5 or DIGEST-MD5, Figure 2). If they log on via a connection protected by SSL/TLS (where clear-text passwords are protected by the encryption scheme), there is no reason to ban PLAIN and LOGIN, of course.

smtpd_sasl_local_domain = postfixbuch
This parameter must contain the value defined as your realm in sasldb. Realms are basically used to authenticate users from multiple (virtual) server domains; however, both Postfix and other SASL clients can normally handle only one SASL domain.

broken_sasl_auth_clients = yes
Some older clients, for example Microsoft Outlook Express 4.x, expect an answer from the mail server in AUTH=LOGIN… format, although AUTH LOGIN… is standard. If you set this parameter to yes, Postfix will issue the AUTH banner twice, using both formats.

smtpd_recipient_restrictions =
    permit_mynetworks,
    permit_sasl_authenticated,
    check_relay_domains,
    reject

The following parameters must additionally be added to main.cf:

smtpd_sasl_auth_enable = yes
smtpd_sasl_security_options = noanonymous, noplaintext
smtpd_sasl_local_domain = mailserver
broken_sasl_auth_clients = yes

The "Critical SASL Parameters …" boxout explains the individual options. The procedure is completed by running a small script to transfer your user data to the SASL database. The -p parameter

Listing 1: Database for SMTP-Auth

root: # saslpasswd -c tux
Password: secret
Again (for verification): secret
root: # sasldblistusers
user: tux realm: mailserver mech: DIGEST-MD5
user: tux realm: mailserver mech: PLAIN
user: tux realm: mailserver mech: CRAM-MD5




also allows saslpasswd to read the password from standard input:



echo secret | saslpasswd -p -c tux

Although SMTP-Auth is simple to set up, there is one drawback: The postmaster will still need to teach users to add the required data to their client configurations (Figures 1 and 3).

POP before SMTP

POP before SMTP, also known as SMTP after POP, is a commonly used method that also works on IMAP servers, and is a possible alternative. Again, the idea is simple, although configuration may be more complex. When a client successfully completes the POP3/IMAP login procedure on a server,

Listing 2: Checking the Logfile

/var/log: # tail pop-before-smtp
[...]
read ts=Mar 22 11:03:29 ip=
read ts=Mar 22 11:17:58 ip=
accepted --- not in mynetworks
written ok
read ts=Mar 22 11:17:59 ip=
purging ts=Fri Mar 22 10:21:50 2002 ip=


Figure 3: No matter whether you require MD5 based SMTP Authentication, or SMTP after POP, the Windows client, The Bat, is prepared

the server ascertains that it is one of your users. The mail server will now accept email messages from the client and relay them for a pre-defined period (normally 15 minutes). If a user has configured their mail software to query their inbox

before sending outgoing email messages, you do not need to perform any additional configuration steps on the client. Some clients use this order by default today and others are easily re-configured. Outlook (Express), which still


ignores POP before SMTP, is an exception and needs to be fooled into co-operating [4]. This approach creates a few problems for postmasters, and is controversial. For one thing, the identity of the sender cannot be proved, as access is granted to a computer, and it is impossible to check for multiple parallel users. There is some risk of a dynamically assigned IP address being reassigned to another user within the time slot, and of that user coincidentally or deliberately accessing the mail server their predecessor was permitted to access. But POP before SMTP still makes sense in many scenarios, and can be implemented alongside SMTP-Auth. The pop-before-smtp [5] script (see the "Setting up pop-before-smtp" boxout for a description) runs as a daemon and monitors the logfiles on the POP3/IMAP server. If new entries are added for successful login events, the script reads the IP address of the user from the logfile and

Setting up pop-before-smtp

If you have not yet installed the required Perl modules, you can perform this step automatically. If you have not used Perl/CPAN previously, you will need to navigate a few configuration prompts:

linux: # perl -MCPAN -e 'install Time::HiRes'
linux: # perl -MCPAN -e 'install File::Tail'
linux: # perl -MCPAN -e 'install Date::Parse'
linux: # perl -MCPAN -e 'install Net::Netmask'

Use the following commands to install the files and launch pop-before-smtp automatically at the appropriate runlevels:

/usr/local/src/pop-before-smtp-1.30: # cp pop-before-smtp.init /etc/init.d/pop-before-smtp
/usr/local/src/pop-before-smtp-1.30: # cp /etc
/usr/local/src/pop-before-smtp-1.30: # cp pop-before-smtp /usr/sbin
/etc/init.d: # ln -s pop-before-smtp rc3.d/S11pop-before-smtp
/etc/init.d: # ln -s pop-before-smtp rc3.d/K11pop-before-smtp
/etc/init.d: # ln -s pop-before-smtp rc5.d/S11pop-before-smtp
/etc/init.d: # ln -s pop-before-smtp rc5.d/K11pop-before-smtp
/usr/sbin: # ln -s pop-before-smtp /usr/sbin/rcpop-before-smtp

Now edit the configuration file under /etc, and launch the daemon manually. You should check whether the script runs correctly when you boot your computer at a later stage:

/etc/init.d: # rcpop-before-smtp start
/etc/init.d: # ps ax | grep pop-before-smtp
5022 ?      0:07 /usr/bin/perl -wT /usr/sbin/pop-before-smtp --watchlog=/var/log/mail --logto=/var/log/pop-before-smtp --daemon=/var/run/
9367 pts/1  0:00 grep pop-before
/etc/init.d: # ls -al /etc/postfix/pop*
-rw-r--r--  1 root  root  12288 Okt 8 11:18 /etc/postfix/pop-before-smtp.db





adds it to the /etc/postfix/pop-before-smtp.db database. Now that Postfix will accept and relay mail from the allowed IP addresses, there is nothing to stop authenticated users from sending messages. The script also removes the IP address from the database after the time slot has expired.
To allow the program to accurately identify and extract the logfile entries for your POP/IMAP servers, you will need to use regular expressions in the configuration file under /etc. The script contains entries for the major mail servers – use an editor to enable the lines, as required. Make sure that you check the paths in the configuration file, paying particular attention to the logfile entries:

# Set the log file we will watch
# for pop3d/imapd records.
$file_tail{'name'} = '/var/log/maillog';
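As an illustration only – the log line below is a made-up example of an ipop3d-style login record, and the real patterns must be taken from the commented-out entries in the script's own configuration file – extracting the client IP from such a line is a simple pattern match:

```shell
# sample POP3 login line (illustrative) and a pattern that
# pulls out the client IP address between the brackets
line='Mar 22 11:17:58 mail ipop3d[1234]: Login user=tux host=[]'
ip=$(printf '%s\n' "$line" | sed -n 's/.*host=.*\[\([0-9.]*\)\].*/\1/p')
echo "client IP: $ip"    # prints: client IP:
```

If pop-before-smtp logs no IP addresses at all, testing your pattern by hand against a real line from the maillog, as above, is a quick way to find the fault.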

You can use the init script in the "Setting up pop-before-smtp" box to run the daemon when you boot your system. The command

rcpop-before-smtp start

should launch the script, which retires to the background and creates the database. In debug mode the daemon will log each IP address it recognizes in /var/log/pop-before-smtp (Listing 2). IP addresses that Postfix already recognizes as mynetworks, or otherwise allowed addresses, are not transferred to the database, but simply logged. Use the configuration file to enable debug mode; you will need to set the $debug variable to 1 to do so:

# Set $debug to output some
# extra log messages
# (if logging is enabled).
$debug = 1;

If the logfile contains the written ok string, you can assume that the script has added the IP address to the database. If the string is missing, pop-before-smtp is simply letting you know that the IP address was noted during an earlier login and is not allowed currently. purging means that the time slot has expired for this IP address, and the address has been deleted from the database.

Monitoring Logfiles

Monitor the logfile after installation to ensure that logins are being correctly identified. If pop-before-smtp refuses to recognize any IP addresses, this indicates a faulty regular expression in the configuration file under /etc. Postfix still needs to know how to evaluate the database, in order to answer the question as to whether a user should be permitted to relay. The check_client_access parameter in smtpd_recipient_restrictions is used for this purpose. The database from the pop-before-smtp script is also required:

For Experts: POP before SMTP without root privileges

Apart from the fact that normal users do not have write access to /etc, which is the default path for the database, there is no reason why pop-before-smtp needs to run with root privileges. So you might like to create a special account for the daemon with a user ID of less than 500, setting * as an (invalid) password in /etc/shadow, /bin/false as the login shell, and /var/popbsmtp as the home directory for the daemon. You can then assign appropriate access privileges for just this directory, and change the script's and Postfix's database path configuration to /var/popbsmtp/pop-before-smtp. Change the call in the init script to use startproc -u user to run the script (insert the user name you assigned in place of user). Look for the following lines:

smtpd_recipient_restrictions =
    permit_mynetworks,
    permit_sasl_authenticated,
    check_client_access hash:/etc/postfix/pop-before-smtp,
    check_relay_domains,
    [... other restrictions as applicable ...]
    reject

Before you let your server loose on the Internet, you might like to recheck the configuration carefully [6]: open relays crop up in databases such as ORDB [7], and many servers refuse to accept mail from computers listed there and at similar locations. Once you have been blacklisted in an open relay database, it is quite hard to get your name off the list; your last resort may be changing your server's IP address. It may also be a fitting punishment, because running a mail server as an open relay really does make you a spammer's accomplice. ■

INFO
[1] Postfix project site:
[2] Cyrus-SASL sources: ftp://ftp.andrew.
[3] SASL info and howto: http://www. txt
[4] POP before SMTP with Outlook: http:// htm
[5] Script and howto for SMTP after POP:
[6] Very good relay test:
[7] The Open Relay Database: http://www.

echo -n "Starting $progname: "
$pgm $dbfile $watchlog $logto --daemon=$pid

and change them to:

start)
    echo -n "Starting $progname: "
    startproc -u user $pgm $dbfile $watchlog $logto --daemon=$pid





Peer Heinlein has been teaching Linux for many years and runs an Internet Service Provider based in Berlin. He has recently published a Postfix book with SuSE Press, covering the MTA and running secure mail servers on Linux.


Perl Tutorial: Part 7

Thinking In Line Noise

Pre-wrapped Packages

Once your code begins to grow beyond a small script and into large applications, you will often need to re-use small snippets, functions, or even collections of functions that you have written, in new applications. There are two main ways you can set about this re-use:
• copy and paste
• code abstraction
Unlike most choices in Perl, there is only one sensible way to do it. Copying and pasting code may seem like a quick and easy way to reproduce the functionality you need, but it has a number of drawbacks that far outweigh any temporary time-savings. The greatest of these issues is bug fixing: if you find a bug (and it WILL happen) in a piece of code you have manually copied and pasted into multiple files, then you will have to track down each and every occurrence of the code (potentially even in files on different machines, so a simple find and replace will not make the task any less arduous) and make the change numerous times. You will also find that you frequently make small incremental adjustments to the code to make it fit better to the task at hand, so any enhancements will have to be copied too. If just one instance of the code is overlooked, you end up with different solutions to the same problem. Fixing a bug in this manner will cost you far more than any time you saved by adopting this approach. Are you sure you fixed that bug everywhere?
Now that we have covered the worst way of implementing code re-use, let's explain the principles behind the preferred methods: packages and modules. The commonly adopted approach to code re-use is to write functions and put them into a file in an accessible location. The functions within a file should be logically grouped together by purpose; functions whose tasks have a similar theme are usually placed in the same file.



To fit in with the festive season, this month we are going to look at presents – well, packages to be exact, but you can still think of them as little bundles of coding joy. BY DEAN WILSON AND FRANK BOOTH

In Perl parlance a 'package' is a way of specifying a namespace. Before we examine the syntax of a package, let's look at what a namespace is and the benefits they can provide.

A rose by any other name

In essence, a namespace is used to isolate variables and functions into separate "compartments", to help avoid namespace pollution and collision. Within a program, all of the functions and variables (including filehandles) are by default in the 'main' package. To determine which package the currently executing code is in, Perl provides the '__PACKAGE__' constant:

print "Current package is: '";
print __PACKAGE__, "'\n";

Modules – Scratching Itches

Now that you have been introduced to how packages work, you may be wishing you had stopped reading at the copy-and-paste coding section. But having seen the cost of doing code re-use the wrong way, we can introduce you to the benefits of doing it the right way, namely building modules. A module is quite simply a package placed in a separate file, where the file name is the same as the package name with a '.pm' suffix, so that Perl readily recognizes the file as a "Perl Module". The code below is from a file called

# this should be placed in
# a file called
package Toaster;

our $VERSION = 1.00;

my $num_slots = 2;
my $toasting_time = 10;

sub roast_toast {
    sleep $toasting_time;
    print "Toast is done\n";
    return 0;
}

1;

The '__PACKAGE__' constant cannot be placed within a double-quoted string and still be interpolated; including it in double quotes returns the literal value instead. Create a new package by using the 'package' function with the name of the package you wish to create (Listing 1). A package's scope is terminated by the end of the file, by exiting the block the package was declared in, or by declaring another package afterward, as shown in Listing 2.

Listing 1: __PACKAGE__

# in the default package
print "Current package is '", __PACKAGE__, "'\n";
package Huey;
print "In package '", __PACKAGE__, "'\n";
package Louey;
print "In package '", __PACKAGE__, "'\n";
package Dewey;
print "In package '", __PACKAGE__, "'\n";
package main;
print "In package '", __PACKAGE__, "'\n";

Using a namespace, we can reuse variable names in each package, using the 'our' keyword to declare the variables to exist with a specific value within a package (Listing 3). This code will compile and run, but gives warnings.
The first line of a module is ALWAYS the package declaration; ensure the case of the name matches the file name. The


module name should begin with an uppercase letter; as a rule, only pragmas begin with a lowercase letter. In the Toaster module we declare two variables and a function. If you're wondering what the '1;' line at the end of the code block is there for (in a real module this would be at the end of the file): it is required because all Perl modules must evaluate to 'true' when they are compiled. Although the value of the last evaluated expression in the module would be returned, this is not guaranteed to evaluate to 'true' – as we can see from the function 'roast_toast' returning '0' – so for clarity and simplicity we explicitly return '1' to ensure a correct compile and load.

Loading the Toaster

We now need to actually load the module into our Perl program when we run the application, so we can access the functionality it provides. Perl tracks the modules it has available by storing a list of paths in which it looks for '.pm' files. This list is known as '@INC' and is available within Perl itself for modification. Before we move on to showing you how to add your own directories, let's show two possible ways to display the default value of '@INC'. The first is to let Perl itself do the work and show us where it searches:

perl -V

You will then be shown a number of details describing many of the options this Perl interpreter was compiled with. Underneath those details you will find a section that resembles:

/usr/lib/perl5/5.6.0/i386-linux
/usr/lib/perl5/5.6.0
/usr/lib/perl5/site_perl/5.6.0/i386-linux
/usr/lib/perl5/site_perl/5.6.0
/usr/lib/perl5/site_perl

This information shows the default locations that perl will search when you have a ‘use’ statement in your code. Two items of interest are that many of the site directories have the version number contained within them allowing

many versions of Perl to live happily on the same machine while still allowing easy identification of which version the modules belong to. The machine this was written on has five versions for backwards-compatibility testing. The second item of interest is that the current directory '.' is included by default. The second way to show the searched directories is by using actual Perl code; from the command line you can issue:

perl -we 'print map { "$_\n" } @INC;'

This generates a list of the default directories, printed to screen in this format:

/usr/lib/perl5/5.6.0/i386-linux
/usr/lib/perl5/5.6.0
/usr/lib/perl5/site_perl/5.6.0/i386-linux
/usr/lib/perl5/site_perl/5.6.0
/usr/lib/perl5/site_perl


# invoke '' with
# build/directory added to @INC
perl -I build/directory

In the above example of '-I' we add 'build/directory/' to the '@INC' list of the '' script. Where '-I' comes into its own is in quick prototyping of very small projects; once your requirements grow to the level where you need to specify multiple additional include directories, you will begin to see '-I' as a limiting factor in how far your project can scale.
We have not yet covered Perl's special blocks, such as BEGIN or END, in this article, but it is worth mentioning this technique at this stage as a reference. Although it is an oversimplification, at a basic level if you have a BEGIN block in your code then its contents will be executed before anything else in the application, including module loading.

# although this statement appears
# first, it is not executed first
print "Normal Hello\n";
BEGIN {
    # this section runs first
    print "I'm run first\n";
}

Recognize the directories it returns? This is the same information that the Perl interpreter itself told us about, but the list is available from within Perl itself. Now that we have provided a number of different techniques to retrieve and display the current and default values of '@INC', we will move on and step through the list of ways to coerce Perl into looking in additional locations for additional modules, including:
• the '-I' command line switch
• modifying @INC in a BEGIN block
• the 'use lib' pragma
The '-I' method can be a very long-winded way of specifying additional directories, and suffers from an important drawback: you have to remember to add it with every invocation of the program.

Listing 2: Exiting the block

# in the default package
print "Current package is '", __PACKAGE__, "'\n";
# Exiting the block the
# package was declared in
{
    package Larry;
    print "In package '", __PACKAGE__, "'\n";
}
# Back to main package
print "In package '", __PACKAGE__, "'\n";
# Closing a package by
# declaring a new package
package Moe;
print "In package '", __PACKAGE__, "'\n";
# in this case the new
# package is main again.
package main;
print "In package '", __PACKAGE__, "'\n";
# Package terminated by
# end of file.
package Curly;
print "In package '", __PACKAGE__, "'\n";





As you would expect, it has a corresponding END block. The code contained within the block is ALWAYS run just before the program finishes its own execution.

print "running happily in ";
print __PACKAGE__, "\n";
END {
    print "exiting at ";
    print scalar localtime();
    print "\n";
}

Although we will delay a detailed look at the full range of functionality these two block types provide, their more common uses make sense even without knowing all the intimate details: END blocks are ideal places to put clean-up or summary code, and BEGIN blocks are a perfect place to alter '@INC', as shown below:

#!/usr/bin/perl -w
use warnings;
use strict;

# this is our module
use CustomModule;

BEGIN {
    # this will be executed before
    # the rest of the code
    unshift(@INC, "/home/dwilson");
}

print "In main body\n";
print "=" x 25, "\n";
print join("\n", @INC);

We use the BEGIN block to place another directory at the start of the '@INC' array (array index zero), pushing the other directories one position back. We do this so that our additional directory is the first one checked for the module. Execution then moves back to the top of the file and runs the code in its usual order, including "CustomModule", before moving on to the print statements. If you run this code you will get a list of the directories in '@INC', including our additional one – the change we made is still in effect. Adding additional paths like this via the BEGIN block is a common practice when you either have a large number of custom paths you wish to have included, or when you want to do conditional inclusion of modules. As the



BEGIN block is inside the Perl application, it is possible to use Perl functions to alter the values in '@INC', so dynamically building a list of directories is simple, with very little additional code or complexity required.

Listing 3: Reusing variable names

use warnings;
use strict;
our $fred = 'blah';
print "Current package is '", __PACKAGE__, "'\n";
package Huey;
print "In package '", __PACKAGE__, "'\n";
our $fred;
package Louey;
print "In package '", __PACKAGE__, "'\n";
our $fred;
package Dewey;
print "In package '", __PACKAGE__, "'\n";
our $fred;

The last of the more common approaches is a pragma known as 'use lib', used to specify the desired additions. This is actually one of the simpler ways of specifying new paths; it's as simple as adding a call to the pragma at the top of your program:

When this code snippet is run, the two additional directories are added to '@INC' and made available to the rest of the program. If you 'use' modules like this, then you should always specify the name of the module in a 'use' statement AFTER the 'use lib', otherwise you will get run-time errors, as shown in Listing 4. Although we did not define our own BEGIN block in this code, the module loading is still done at this phase and the error is caught before we go any further. The ease of use provided by 'use lib' is significant, and it has a very low learning curve that allows it to be used in most of the cases where you would want to add additional paths. Once you find yourself needing more flexibility than this, you will often have to resort back to using BEGIN blocks, with all the power they provide, albeit at the cost of greater complexity.

Invoking the mighty toaster

After wading through the coverage of how to create your own packaged namespace, and then reading the details surrounding the loading of modules, you are probably getting the itch to test your new-found knowledge with some concrete code examples. The examples throughout the rest of this section assume that you have the "Toaster" module in one of the locations specified in '@INC', so if you have skipped past the previous text you are going to be unable to progress until you have gone back and read it all. You can test that you can access "Toaster" correctly by running:

perl -MToaster -e 'print $Toaster::VERSION, "\n";'

This should return ‘1’. If you see “Can't locate Toaster.pm in @INC” then the module is not in the ‘@INC’ path and you need to amend your configuration as per the instructions given above (in the “Loading the Toaster” section) before

Listing 4: Runtime errors

Can't locate CustomModule.pm in @INC (@INC contains: /home/dwilson /home/wanttogo /usr/lib/perl5/5.6.0/i386-linux /usr/lib/perl5/5.6.0 /usr/lib/perl5/site_perl/5.6.0/i386-linux /usr/lib/perl5/site_perl/5.6.0 /usr/lib/perl5/site_perl .) at line 5.
BEGIN failed--compilation aborted at line 5.

Perl Tutorial: Part 7

you can run the code samples. If we want to use the ‘roast_toast’ function at the moment, we need to qualify the call with its full package name:

Toaster::roast_toast();

Although using the full package name as a prefix to all external calls may seem like a mere inconvenience at the moment, once you begin to use multiple modules with long names it will begin to have an effect on the clarity of your code. Another important reason to avoid this approach concerns data access, or encapsulation as it is often known. At the moment we can reach in and change the value of any variable we like, with no regard for the internal structure of the module. As an example of this, if we look at the ‘$toasting_time’ variable we can see that it is numeric and is used internally in the module. Look at the consequences of making a change like this:

use Toaster;
$Toaster::toasting_time = "Thirty Seconds";
Toaster::roast_toast();

If we run code like this without warnings, we will see strange behaviour as Perl converts ‘$toasting_time’ from a string into a number when ‘roast_toast’ uses it, and the sleep time will become erratic. Instead of this direct action we should use the functions provided to manipulate any required variables, a practice called data encapsulation, which is one of the tenets of Object Oriented development and a good design decision even in procedural code like ours. By encapsulating data we protect ourselves from tying our code too tightly to the module’s own internals. All we want to do is ‘roast_toast’; we do not care if the time taken changes, we simply want the action to be performed. In a robust design the implementation should be hidden from us. We can address both of the above concerns by using a feature of Perl’s module system called ‘Exporter’. The Exporter module allows module writers to specify a list of the variables and functions that they wish to expose to the

calling application so that they are loaded into the current namespace. In the example below we add some additional Exporter-related code to show how little is needed before we start to see the benefits (note that exported variables must be package variables, declared with ‘our’ rather than ‘my’):

#revised Toaster module
use strict;
use warnings;
package Toaster;
require Exporter;
our @ISA = qw(Exporter);
our @EXPORT = qw($num_slots roast_toast);
our $VERSION = 1.50;
our $num_slots = 2;
my $toasting_time = 10;
sub roast_toast {
    sleep $toasting_time;
    print "Toast is done\n";
    return 0;
}
1;

The newly revised Toaster module (with free set of knives) required just three new lines to take advantage of these benefits. Before we detail those lines, here is a small sample script that can use the module:

use Toaster;
roast_toast();

This is an extremely simple example of how all the explicit declarations can be removed to cut down on the amount of line noise in the code while retaining the full functionality. If we try to change the ‘$toasting_time’ with the example below, it fails:

use Toaster;
$toasting_time = 23;

The error we receive is “Global symbol ‘$toasting_time’ requires explicit package name”, due to no variable called ‘$toasting_time’ being present in the ‘main’ package and Toaster not exporting its own ‘$toasting_time’. While we can still modify the variable the way we did in previous examples, using a fully qualified package name, this is willfully ignoring the module writer’s wishes and becomes a case of coder beware. If the module author changes the way the module is implemented (which is allowed, as long as the public functions are left alone) then your code could break and you would have no recourse. There is probably a very good reason why the module author did not expose those variables, and finding out why could be painful.

Going back to our revised version of the Toaster module: we first pull in the ‘Exporter’ module and we then assign Exporter to ‘@ISA’; this allows your module to ‘inherit’ functionality from the Exporter module. Inheritance is a topic more related to object oriented programming, so we will gloss over the details; for now, just think of these two lines as the code that enables your module to export symbols. The ‘@EXPORT’ line controls which functions and variables are exported by default. Any entries in this array will be exported to the calling namespace when this module is ‘use’d. If the caller only wants to pull out a single function from your module and keep the memory footprint of her own application down, then it is possible to amend the ‘use Toaster;’ line so that the module only exports what is desired:

use Toaster qw(roast_toast);

This code sample will only import the ‘roast_toast’ function; if you now try to modify the ‘$num_slots’ variable that the module has in its ‘@EXPORT’ array, then you will get an error, as it is no longer available in this package.

use Toaster qw(roast_toast);
#this works
roast_toast();
#this fails with an error
$num_slots = 23;

Now that we have covered the basic rules and functionality of exporting from one module into an application, let’s look at a slightly more complex scenario. A module may have a basic level of functionality that it always wants to provide, but also some niche functions that are only useful in specialized applications and require too much memory to export by default. Rather than forcing the caller to use fully qualified package names

Dec 02 / Jan 03




and break some of the laws of good design, Exporter allows a second array to be populated. This array is called ‘@EXPORT_OK’ and only exports what the caller requests.

package Kettle;
require Exporter;
use strict;
use warnings;
our @ISA = qw(Exporter);
our @EXPORT = qw(boil);
our @EXPORT_OK = qw(whistle);
our $VERSION = 1.00;
my $boil_time = 2;
my $pints_held = 4;
sub boil {
    print "Kettle has been boiled...\n";
}
sub whistle {
    print "Whhhhhhheeeeeeeee\n";
}
1;

The package above shows a very simple implementation of a Kettle that has the essential kettle function, ‘boil’, while also providing the seldom requested ‘whistle’. If we just ‘use Kettle’ then we only get ‘boil’, as that is the only element in the ‘@EXPORT’ array, and if we explicitly ask for ‘whistle’ with ‘use Kettle qw(whistle)’ then we lose the ability to ‘boil’. To solve this problem Perl allows you to ask for the entire ‘@EXPORT’ array and then any additional functions that you would like:

use Kettle qw(:DEFAULT whistle);
boil();
whistle();

By using the special ‘:DEFAULT’ label to import the values that are in the ‘@EXPORT’ array, and also providing the names of the features that you need in your package, you can make generic modules that allow a large scope of reuse. Working through the correct path of code reuse has taken a lot more initial effort than the simple copy and paste approach does, but hopefully you will have been convinced to do things correctly by the additional power that modules provide. We have barely scratched the surface of what Perl’s module system provides. To



show you that the entry barrier is not as high as it might seem, we now move on to one of the best examples of code reuse on the Internet: CPAN.

Getting Modules

While Perl can stand feature for feature with other modern programming languages, its true ‘killer app’ may be CPAN, the Comprehensive Perl Archive Network, a large online repository of modules built by the Perl community. The modules in CPAN are themselves good examples of why extra effort is required to build a generic module. However, the effort is its own reward: CPAN’s stock of code mostly originates from coders “scratching their own itch” and donating the code back to the community, to save other people from reinventing the wheel. CPAN is one of the more successful examples of the “Bazaar” development methodology, whereby a pool of developers raise the standard of the code base. Once an author is registered on CPAN they can begin the process of uploading modules. Before a module can be uploaded, the name and purpose of the module have to be announced on the modules mailing list, where the hard-working volunteers ensure there are no duplications with existing work and that the module uses an appropriate namespace. Once the module details have been accepted, the module is uploaded to the server, where it comes under the scrutiny of the Perl QA smoke testers. The smoke testers are another group of volunteers who donate processing time on a variety of operating systems and architectures; using batched scripts, new arrivals and updates on CPAN are tested, and the results are posted back to CPAN showing which platforms the module ran on. Finally the module is propagated through the CPAN mirrors until it becomes fully available everywhere. Retrieving a Perl module can be done in a number of ways:
• CPAN
• CPAN++
• manual install
• PPM
• Native package installer (apt-get, rpm or similar)

The example given below shows the typical install procedure for a module that is being installed manually, and then we introduce you to the cpan shell that is made available by the CPAN module. We will not be covering the other methods of installation, as they are not as universally available. Once the module’s ‘.tar.gz’ file has been downloaded from one of CPAN’s module pages, the following steps should be taken to install it:

# extract module
$ tar -zxvf <modulename>-<version>.tar.gz
$ cd <modulename>-<version>
$ perl Makefile.PL
$ make
$ make test
$ make install

Stepping through the above example, we unpack the module and then change into its directory. Running ‘perl Makefile.PL’ causes perl to generate a Makefile for the module, filling in many of the required details with the information Perl discovered about the system when it was built and installed. To show how much work this saves, consider that a Makefile.PL of 25 lines is expanded to a ready-for-use Makefile of over seven hundred lines. We then run ‘make’ to do any building required for the module, before running the module’s test harness with ‘make test’. Not all modules have a set of tests to run, although the Perl QA effort has progressed to the point where all the core modules have a test harness. While not having tests should not stop you from using a module, their presence indicates a conscientious author and is an indication of robust code, providing all the tests pass… We then run ‘make install’ to perform the work of installing the module in one of the paths which Perl searches for its libraries. An important, often overlooked detail is that this is the only step of installing modules that actually requires you to be root: all of the other stages can be executed as an unprivileged user. The extra privileges are needed because ‘make install’ writes to directories that most users will not have access to. Now that you have seen the procedure for


installing a module by hand, you can appreciate the abstraction that installing via CPAN provides. The other major selling point of the CPAN approach is dependency tracking. When you try to install a module by hand that depends upon additional modules, you must first install those manually, otherwise the install fails due to unsatisfied dependencies. This could take several iterations if the modules that are listed as dependencies have further dependencies of their own. Installing via the CPAN module uses a shell similar in many ways to bash, including command completion and history – if the required modules are installed. The first time you invoke the CPAN shell you will be prompted with a list of configuration options that you will be asked to confirm or edit. These include which mirrors to download from, where certain binaries are located on the system and whether to automatically follow dependencies. To invoke the CPAN shell from the command line you must type:

perl -MCPAN -e shell

The prompt will then change to ‘cpan>’. The invocation line causes the perl interpreter to load the module specified by the upper case ‘-M’, in this case the CPAN module, while the ‘-e’ calls the function ‘shell’ which has been exported from the module. From within the cpan shell you can easily search for and install additional modules, with a very important advantage over installing them by hand: the CPAN module will track and install dependencies for you. Depending on the values you supplied when you ran the CPAN shell for the first time, this dependency tracking will be done either automatically without a prompt, only after you answer yes to install at a prompt, or the module install will simply fail. As you would expect, the last option is seldom chosen. Navigating around the CPAN shell is very simple once you have a grasp of a few basic principles; if you are unsure of the command required to carry out a task then type ‘help’ at the prompt and it will return a list of the valid commands and what they do.

The commands are in three main groups:
• Querying commands
• Module installation
• Meta-commands
The querying commands are the best place to start, as they are often the commands you use the most. To view the collection of modules under the XML namespace, type in ‘m /XML::/’ and press return; a list of all the modules will scroll past, along with some additional details such as the author name and module version. When you look along the selection of XML modules you may see a module that looks promising for the task at hand, for example XML::XPath. Once you have found a module that may be suitable you can install it by issuing ‘install XML::XPath’ at the prompt; the module will then be downloaded to the machine and the manual steps described above will be run via the CPAN module. If the module has any tests defined for itself then they will be run at the install, and a report of the successes and failures will be shown. If the module fails its tests then the installation of the module will be aborted. If the module passes its own tests but fails because of unsatisfied dependencies, then CPAN will go and track down those dependencies and install them, going through as many iterations of this as needed until either the modules are all installed or one of them fails.

Similar Code

When you have begun to use a number of CPAN modules you may find that the standard of code provided is high enough that you would like to see what else the author has contributed. The author of ‘XML::XPath’, Matt Sergeant, a Perl luminary responsible for a large number of CPAN’s more popular modules, has an impressive collection that can be viewed without leaving the comfort of the CPAN shell. We rerun the search for the ‘XML::XPath’ module, but this time, as we know the full name, we do not use a regular expression search and instead enter ‘m XML::XPath’. ‘m’ is the keyword that tells CPAN we are searching for


a module; you can also use ‘a’, ‘b’ or ‘d’ to search for authors, bundles or distributions, but those are beyond the scope of this article. The ‘m XML::XPath’ command will then go and get a list of all the modules that are currently listed in the author’s home directory and display them on screen with some additional details, such as the file’s last update time, the version number of the module and even the file size of the complete module. From here, if we wanted to install another module, we could do a simple ‘install XML::SAX’ and the CPAN module would take care of it for us. Now that we have shown you how to install modules, we’re going to show you how to be a little more choosy in the modules you actually download. To find out more information than the name and brief summary, you can use the shell to retrieve and display the module’s own README file. To do this for XML::SAX you would type the intuitive ‘readme XML::SAX’; CPAN will go and download the file, and then the pager you chose in the initialization of the CPAN module will be invoked to read it. The last installation example we will show is a little different from the previous ones, as it applies to the CPAN module itself. Whenever a new release of the CPAN module is issued, CPAN detects this the next time it is run and offers the chance to upgrade. Although it may seem a little strange to use CPAN to upgrade the CPAN module, the process is very reliable and requires very little additional effort beyond a normal module install. After you have issued an ‘install CPAN’ command and the module has been downloaded and the tests run, you finish off the install by issuing a ‘reload cpan’ command; the screen will show a little progress counter of dots, and a summary of the total number of subroutines that have been reloaded is shown, marking the successful completion of the upgrade.
When you have completed your work in the CPAN shell, to exit back to your command shell just enter ‘q’ on a line by itself and press return; the lock-file will be removed and the CPAN shell will be closed. ■




C Tutorial Part 13

C: Part 13

Language of the ‘C’ Following on from last month’s article, Steven Goodwin, in this, the final part of our C tutorial, looks at how C can be unreadable, and why it becomes like that. BY STEVEN GOODWIN


A good friend of mine from University knows all the bad parts of town. She knows where the fights will be, and who’ll be dealing in what, and where. Thing is – she’s a nice girl! What is she doing knowing about the dodgy parts of town? Her answer was my inspiration: knowing where not to go stops you from going there. So in this article, I will tell you why bad code is written, how to understand it, and how to stop yourself from doing it.

Purpose In Life

The easiest case to understand, as to why code is unreadable, is where it was written so intentionally. This could be because it was written to demonstrate an interesting (mis)use of the language, or intended for a programming competition such as the IOCCC (see boxout). Some source code will be obfuscated on purpose to hide its meaning, along with any clever, novel, or interesting technology contained within. This is sometimes referred to as ‘shrouded source’ where, although the code is available to the end user (enabling it to be distributed openly, needing only a recompile), it is



impossible to read and understand, since the meaning of the code has been perverted. This can happen by using obtuse (or even wrong) variable and function names (perhaps of a single letter), the removal of white space, an over-use of macros, or any number of other techniques. Such code even makes Perl look readable! Understanding such code is a considerable task, and not to be undertaken lightly. Only in exceptional cases (i.e. you’re paid to, or the code is a puzzle you “just have to work out”) is it worth trying to understand such code. Your time is better spent solving the problems yourself, and re-writing it in a sensible (preferably open) fashion.

In My Defence

For code that is unintentionally obfuscated, the most common cause is casual. Code is written in a particular style ‘just because that’s how a particular programmer writes’ – the geek equivalent of Finnegans Wake! Over years of programming, people drop into various habits. Some good. Some bad. All of them are completely natural to the person in question, but require more

thought by everyone else. Let us take a simple case:

if (fp = fopen("/etc/convert.conf", "r"))
{
    /* process the file */
}

This is something we’ve seen before, and is quite a common structure for opening a file and handling its contents, should it exist. We’ve seen it before, so we’re used to it. If we had not, it might be a different story. So, what if the expression was something with which we are unfamiliar? Here, the use of language is identical, but the situation is not.

if (n = CountItems())
{
    /* Is this supposed to check the integrity of 'n'? */
}

This unintentional obfuscation can show its roots in a number of places, but because they are all quirks of the original programmer (whom you are unlikely to know on a first-hand basis), it gives you two things to think about, not one. For example, a programmer may have come from a different language to C, and was forcing his ideas into ‘C’ and



not making use of its strengths as a language. Or they might be from the ‘old school’, where they have either become such good friends with their compiler that they know what shortcuts to take (for better performance, say), or they have been burnt by broken software and forced to write in this unnatural manner, working around problems in the tools. Over time, this behaviour becomes second nature to the programmer – but not the reader – and so it appears more complex than it really is.

Also in the old school programmers’ “box of tricks” will be a number of language features that may not be appreciated by novices, although they’ve no doubt learnt them. Probably by rote. An expression may be empty, making the following statement reasonable:

if (a && !b && (c||d) )
    ;
else
    printf("Doing something!");

The alternative would require a lot of negative logic and is generally more difficult to understand. The reader may care to study De Morgan’s Theorem on such matters.

Listing 1 uses three tricks from the box: 1 – variables may begin with (and include) an underscore, 2 – global variables are guaranteed to be initialised to zero, and 3 – integer variables can be declared without the reserved word ‘int’. The latter is only true, however, for non-ANSI conforming code.

Listing 1: An intentionally obtuse program

#define ______ putchar(
#define _____ (
#define ____ )
#define ___ <<
_,__;main(){ _=-~_,__=_- -_- -_, __=______ _____ _____ _ ___ __ ____- -_ ____ ___ __ ____ , __=__- -_____ _____ _ ___ _ ___ _ ___ _ ___ _ ___ _ ____-_-_-_-_ ____,__=______ __- -_ ____ ,__=______ ______ ______ __-_- -_____ _ ___ _-_- -_ ____ ____ ____- -_- -_- -_ ____, ______ _ ___ _ ___ _____ _ ___ _ ___ _ ____ ____,______ __,______ __-_____ _____- -_- -_- -_ ____ ___ _- -_- -_ ____ ____ ____,______ __=__- -_- -_- -_ ____ ,__-=_____ _- -_- -_ ____ ___ _____ _ ____,______ __ ____,__^=_ ___ _ ___ _ ___ _,______ __ ____, __=_ ___ _,______ _____ __ ___ __ ____- -__ ____;}/* by Steev, but why did he sign his name? */

Living In A Box

In most cases, casual obfuscation just condenses code into a smaller area. This happens for a number of reasons; perhaps the algorithm or method is well known and it feels natural or ‘obvious’ for the programmer to write it as such (like we saw above). Or perhaps they were so focused on the task, it did not concern them to separate each step of the process, or the coding standards to which they were working limited them to 80 characters on a line – and they were already on 75! When this occurs (and you want to understand the code) you may have to expand each expression into its individual components. Consider the conversion table example from part 5.

for(i=0;i<sizeof(ConvertTable)/sizeof(ConvertTable[0]);i++)
    { /* handle each element from the table here */ }

If we take each part of the expression and represent it on its own then, like the proverbial school bully, it is no longer threatening, and is easy to understand, because each part is so simple we can give it an obvious and easy-to-understand variable name.

int iSizeOfWholeTable = sizeof(ConvertTable);
int iSizeOfEachElement = sizeof(ConvertTable[0]);
int iNumberOfElementsInTable = iSizeOfWholeTable / iSizeOfEachElement;
for(i=0;i<iNumberOfElementsInTable;i++)
    { /* handle each element from the table here */ }

The conditional operator (the ? and : symbols) is a very quick way of condensing four lines of an if-else statement into one. Some people do this because they think that it compiles into smaller code, and therefore will take less time to execute! While that can be true at the machine code level, it is not the case at the high level of C. Perhaps they are used to interpreters, where this can be true. With compiled languages it is not an issue, especially with the optimisers currently in use. In most cases, the two examples in the “Speed of execution” box will produce the same code, but invariably people will believe the second version is somewhat quicker.

Unfortunately, as a language, C is very supportive of dense syntax, since an expression can be a number of things, such as a function call, a parameter or a piece of algebra. As expressions feature throughout the language, it is possible to include them in places you would not perhaps expect. Sometimes decomposing an expression is not enough on its own. You have to look more broadly at the program’s structure and operation.

iScores[(iPly&2)>>1]++;

This simple piece of code could be broken down into parts (the bitwise AND, the bitshift and the increment), but that would not gain us much. Instead of looking at how it does it, let’s look at what is produced by re-writing the AND.

if (iPly & 2) /* The numbers 2,3,6,7, etc make this true. i.e. 4n+2, 4n+3 for n >= 0 */
    t = 2;
else
    t = 0;





Speed of execution

Faster?

if (a)
    x = 1;
else
    x = 2;

Slower?

x = a ? 1 : 2;

Now the expression is very simple (especially since 2>>1 is always 1, and 0>>1 is 0). In the context of the whole program (not shown here for space reasons!) we know that iPly is the number of the player, and ranges from 0 to 3. It therefore seems reasonable that this line converts a player index (0 to 3) to a team index (0 or 1), with players 0 and 1 being on team 0, and 2 & 3 being on team 1.

The most natural scenarios for compressed code occur in string manipulation, where a string copy can be written:

while(*pDest++ = *pSrc++);

It uses the simple fact that strings terminate with a NUL – which is numerically equivalent to FALSE. It also includes an empty expression for good measure! The reader may care to deduce a compressed form of the strlen function. The author’s record is 27 characters for the function body.

Wide Open Spaces

It is not just the code that may confuse, but the spaces in between the code, too! Ill formatting can occur anywhere, which is why it is best to find a style of layout that you like, and stick to it. If you work for a company, this style may be dictated to you. Otherwise, look at other people’s code and choose one. If your layout is clear and easy to understand it should not be considered ‘wrong’. Similarly, there is no ‘right’ way, and no good coder should tell you that there is. They might try to convince you to change, however, but that is part of a holy war that is best avoided if possible!

while(i<0)
    i--;
    printf("i=%d\n", i);

Since there are no braces after the ‘while’, this loop will only iterate the single ‘i--’ instruction. However, the formatting implies something else, which is not good. You must consequently beware of the ‘C’ empty expression, where a ‘;’ on its own is valid, and of what issues it can raise. Imagine:

while(i<0);
    i--;

Listing 2: Compressing spaces

int main(int argc, char* argv[]){ unsigned char c='r';double x1,y,y1,t=0,q=78,r=22,x,x2,y2,a,b,v;do{(c=='r')?(y2=-(y1=-1.6),x1=-2.0f,x2=0.8):(c=='?')? c=0, printf("%f,%f:%f,%f",x1,y1,x2,y2):(c <':'&&c>48)?x=x1,y=y1,*(c>'3'&&c<':' ?&y1: &t)+=(y2-y1)/3,*(c>'6'&&c< ':'?&y1:&t)+=(y2-y1)/3, *((c == '8' ||c+3=='8'||c+3+3== '8'?&x1 :&t))+=(x2-x1 )/ 3,*((c =='9'||c+3== '9'||c +6=='9' ?&x1: &t) )+=2*(x2-x1) /3,x2= x1+(x2-x)/3, y2 =y1+( y2-y)/3:(c=0);for(y= y2;y>= y1&&c;c=1,y-=(y2-y1)/r, putchar ('\n')) for(x=x1;x<=x2; x+=(x2-x1)/q){a=b=c=0; while ( ++c&&(a=(t=a)*a)<4&&(v=b*b)<4)a-=v-x ,b=y+b*2*t; putchar("#@XMW*N&KPBQYKG$R" "STEEVxHOUV" "CT()[]%JL={}eou?/\\|Ili+~<>_-^\"!;:`,. "[ c?c>2:63]);}} while((c=getchar ())!='x'); return 0;/* Mandelbrot - S.Goodwin.2001*/}

A loop with break and/or continue statements littered throughout is going to be more difficult to follow than one where they have been grouped together near the top. Matching else statements to their respective ifs can also be tricky. The rule in ‘C’ is for the else to match the last unmatched if.

if (a)
if (b)
if (c)
printf("Is c true?");
else
printf("Which is true? a, b, or c?");

So in this example, the else matches the ‘if (c)’ line, not the ‘if (b)’ as the formatting suggests. In addition, code like this should be simplified to represent only the cases we’re interested in.

if (a && b)
{
    if (c)
        printf("Is c true?");
    else
        printf("Which is true? a, b, or c?");
}

The code should also be correctly formatted, preferably with braces, since editors can easily find the next (or previous) occurrence of a brace, so you can determine which code is attached to which ‘if’. When using such a layout, its format aids the understanding of the code, and does not hinder it, so the obfuscation is less pronounced. One style point I use is that if the ‘true’ part of a condition uses braces, then so does the ‘false’ part. Formatting can also cause problems with the understanding

Listing 3: Print “Daft Jacko abhors Tux”

d[256]={0x200000, 0x8000000, 0x10000, 15,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,2,0,0,0,0,0,0,0,0,0,0,112,36,0,1,4,32,0,96,0,68,137,0,8};
main(story)
{
    while(d[0]<d[1])
        *d+=((d[((((*d>>16)-26)&0x1ff)>>1)]&(*d&d[2]?240:15)))>>(*d&d[2]?4:0)&(1<<(*d>>25))?putchar(((*d>>16)&0x1ff)),0x10000:0x10000;
}


of expressions that use (or even rely on) precedence. Stepping back to the example I gave when discussing precedence, notice how ill formatting would confuse the issue.

ans = 10*x



Clever Trevor Another case of obfuscation is where the code is cleverer than it needs to be. This can manifest itself in a couple of ways. Consider a loop to compute the sum of every number between 1 and 100. int iTotal = 0; for(i=1;i<=100;i++) iTotal += i;

j = 0; for(i=1;i<100;i++) if (/*some condition*/) j=j?j:i;

This loop finds the first case where the condition is true, and produces the position in the variable ‘j’. This a good example of bad coding because: 1) The variables are not meaningful 2) It relies on ‘i’ never starting at 0 3) It over-condenses the code without a comment Again, splitting the conditional ?: will help understand this.

for(i=1;i<=50;i++) iTotal += 101;

or even, iTotal = 101 * 50;

This works because a mathematician named Gauss (1777-1855) deduced that working the arithmetic from both ends at once reduces the sums complexity. It becomes a list of 50 sums, all of them equal to 101, because 1+100=2+99=3+98 and so on. Very simple to understand when you’re writing it, but much more difficult to read; especially in the general case. If each step of the process is given a comment then there is some salvation – but there rarely is! This type of obfuscation requires you to understand the language used to implement the problem…and the method used to solve the problem. Using mathematical identities is often necessary to improve performance of software, but you should always document those methods so others can understand the code to make enhancements (and bug fixes) easily. Code can also be too clever outside the field of mathematics, and may rely on assumptions that are implicit in the code or data.

to be brought into the cache. Then, when the loop writes to memory, it does so to the (fast) cached version, and not main memory, as it might have done without the ‘tmp’ line.

Billericay Dickie The flip side of code trying to be too clever is code that is not clever at all. This could be because it uses the wrong method, but gets the right answer, or uses the right methods but in the wrong cases. Consider this example I found in some code on a Windows machine. len = strlen(szFilename); szFilename[len - 4]=0;

if (j) j = j; else j = i;

This is simple, understandable and very straightforward. However, if the programmer knew about Gauss, he might have written:


So, if ‘j’ is non-zero, nothing will happen and j will remain unchanged. Otherwise (j = 0) it will be assigned the value of ‘i’. At this point it is no longer zero (because of the assumption that i always starts at 1), and so is unaffected for the rest of the loop.

Bright Lights, Big City Trying to out-compile the compiler can also make code unreadable. This is where the writer has learnt, discovered or worked out how the compiler (or even CPU) will handle the code, and so has written parts of the program in such a way as to produce code that gives better performance. This has the same symptoms as code that has been intentionally obfuscated, but suffers from the fact that not even the original author knows why it had to be done in that particular way. The irony is that any performance gained from compiler A is not valid on B (or even between versions of the same compiler)! One good example here is the code:

tmp = *ptr;
for (i = 0; i < 1000; i++)
    ptr[i] = 0;

Here the ‘tmp’ variable is never used, and appears to be redundant, so one might be tempted to remove it. However, on processors such as the Pentium, reading the memory location at ‘ptr’ may cause that section of memory (8K or so)

It gets the right answer (most of the time) but uses the wrong method! The intention was to remove the file extension from szFilename, before concatenating a different one onto the end. This is confusing because that’s not what it actually does: imagine if the filename did not have an extension!

A similar case happens with comments. Comments are good unless they disagree with the code, or the variables used within the code are given names that do not apply to their job. Neither happens when first writing a program, but as it is changed and new features are added, the comments are not updated, and a ‘LastItem’ variable is now used (or even re-used) as a count of the number of items – and slowly the code becomes less clear than it once was.

Both situations should be rectified by making the code do what it is supposed to, in the way that it is supposed to do it! Removing a filename extension means taking all characters after the dot – so look for the last dot (I’ve even seen code that removed the first dot, causing other problems!) and remove those characters. If you’re writing an interactive application and you notice there are 12 characters after the dot (i.e. it probably does not have an extension, just a dot in the name) you can always report the error to the user and let them confirm the action.

The ‘write as you mean’ rule is the best way to code, ensuring both man and machine understand what’s going on. Consider the 1 to 100 summing loop above. If we’d written it as,

Dec 02 / Jan 03


C Tutorial Part 13

End Note Sidebar As this marks the final part of ‘Language of the C’, I would like to take the opportunity to thank a few people. Notably John Hearns for encouraging me to write it, John Southern & Colin Murphy for letting me write it, Alan Troth for reading it (and forcing me to rewrite it), and TULS for the beer and curry!

for(i=0;i<100;i++) iTotal += i+1;

This may fit in with C’s zero-indexing policy, but it does not make (as much) sense because the meaning is lost. The number zero is not part of the question, so it should not be integral to the finding of the solution. And do not use the excuse of ‘code will run slow’ when cutting corners, either. As Knuth once said, “Premature optimisation is the root of all evil”.

Old Before I Die Some programs are difficult to read because of their over-reliance on the C pre-processor – especially by beginners who have come from Pascal, say, and would still rather type ‘begin’ instead of ‘{’ to start each code block. It is not unknown for them to start each file with:

correct, but they do not make sense in context! Depending on the importance of the software (and the salary involved!) it may be worth re-writing them, not from the source, but from the original algorithms.

Old Red Eyes is Back One case of obfuscation that happens (but rarely) involves old code. When software has been ported from an old system, or you are working on an old Unix system, there may be some historical features that can be confusing. The programmer might have used functions that no longer exist in the standard libraries, or those that have since been replaced or renamed. One example is strchr. This used to be called index, and might exist in some code. Now, since this function has not been documented for many years, one might be tempted to look for it outside of the standard libraries, and not find it. In extreme cases, you might be working on a compiler that supports features of the old K&R style of C. On those systems, the language was much younger than it is now, and supported strange syntax such as:

int x 1;

#define begin {
#define end   }

Which is actually a simple declaration and assignment that we know as:

This, although quaint, is ultimately confusing to the reader (since the word begin looks like it should be a function or a variable), and prevents the author from moving away from Pascal. They will never have to think in C and so are likely to implement substandard solutions (that are by their nature more difficult to read) because they are not considering (and working to) the strengths of the language. In these cases, you need to pre-process the source files to expand the macros into something that looks more like C. In extreme cases people may be working with programs that have been converted, line-by-line, from another language into C. These conversions are often the technical equivalent of badly translated Kung Fu movies – the words may be



int x=1;

Similarly, code like

register n = (count + 7) / 8;   /* count > 0 assumed */
switch (count % 8) {
case 0: do { *to = *from++;
case 7:      *to = *from++;
case 6:      *to = *from++;
case 5:      *to = *from++;
case 4:      *to = *from++;
case 3:      *to = *from++;
case 2:      *to = *from++;
case 1:      *to = *from++;
        } while (--n > 0);
}

(Copyright 1984, 1988, Tom Duff)

(The ‘to’ address is mapped to a device, and therefore it does not need to be incremented within the program.) Any language powerful enough to produce original code is also (by its very nature) powerful enough to produce oddities or quirks of use that were not considered when the language was originally designed. No language course could ever hope to cover every single obtuse case of syntax in existence – and there’s always one programmer who will find more evil ways of abusing the language. In these cases, you have little choice but to work through the code, line by line, function by function, understanding what the compiler would do in these situations and mimicking it. This technique (called dry-running) is carried out by language lawyers to understand and demonstrate the vagaries of a particular language. And so should you. ■


x =- 1;

Would (on an ANSI C compiler) assign minus one to x. However, in the ‘olden days’, it would decrement x by 1, because =- was the original form of -= . With Linux being comparatively new, this should be a rare case, especially as gcc does not support it.

The End Of the World Despite the fact that this series has taught the C language and its many (varied) uses, it is still possible to construct legitimate code that looks wrong, strange, or confusing. My favourite example of this is Duff’s Device.

The International Obfuscated C Code Contest. A yearly competition to (ab)use C in the most esoteric manner possible. The winning entries are somewhat scarier than the ‘simple’ examples given here.

The language of ‘C’ has been brought



to you today by Steven Goodwin and the pages 68–72. Steven is a lead programmer, who has just finished off a game for the Nintendo GameCube console. When not working, he can often be found relaxing at London LONIX meetings.




Keeping Track of MP3 Files

Do you use your computer as a jukebox? Have you lost count of the number of MP3s on your hard disk? And are you slowly but surely losing track of your musical treasures? In this case, you may be a candidate for the KMp3Indexer – a tool that helps you to organize all the MP3s on your hard disk (or CDs) in a neat index. And the quick search function means that you are just one mouse click away from your favorite track at any time. You will find the current version of the program on the KMp3Indexer home

Figure 1: Selecting the directory containing your music files

page at kmp3indexer. Ensure you have a functional KDE 3.0 installation before you start. After unzipping the program, you simply follow the well-known Linux pattern of ./configure; make; make install to install it on your system.

Indexing 101 Do not be too disappointed when you initially launch the program (for example by typing kmp3indexer & in the command line), although the main

KTOOLS In this column we present tools, month by month, which have proven to be especially useful when working under KDE, solve a problem which otherwise is deliberately ignored, or are just some of the nicer things in life, which – once discovered – you wouldn’t want to do without.



No matter how thin the line between chaos and genius may be, a well organized index of your favorite MP3s can help you avoid wasting time looking for MP3s, and the KMp3Indexer takes the pain out of creating lists. BY STEFANIE TEUFEL

window is admittedly somewhat spartan at first glance. You can quickly add some content to the empty window by selecting File/Read files… in the menu, and specifying the directory containing the MP3s you want to place on the index in the window that then appears (Figure 1). As you can see in Figure 2, the left-hand panel is starting to fill out. There are only a few more steps to take before you can complete your index.

Two options are available: you can either create a directory based on the ID3 tag, or opt for the file name. If you decide to go for the ID3 tag, use the left panel to select the files you want to add to the index and then click on Index / Index by ID3/OggVorbis Tag in the menu bar – you can alternatively use the keyboard shortcut [Ctrl-i]. Simple as that – the index is finished, as is evidenced by the content in the right panel shown in Figure 3. Incidentally, it is quite easy to add to the directory: simply select the files you want to add, and repeat the procedure which we outlined above.

To create an index based on file names, you again select the required files in the left-hand list box, and then click on Index/Index by filename. Use the dialog box that appears (Figure 4) to specify the order in which the KMp3Indexer should read the file name, and also select a separating character. If you do

not require all three selection fields, you can simply leave one out – starting on the right.

Basic Settings Now is the time to customize your Indexer using the File/KMp3Indexer menu item. To do so, first take a look at the individual options in Figure 5. Path to the directory allows you to specify the default directory to search for MP3 files. Just click on the button with the three dots to open



Figure 2: Before… Here the files have been chosen, but the index is not complete

Figure 4: In contrast to the ID3 index, creating an index of file names requires some manual steps

Figure 6: You can easily correct the entries in your index


Figure 3: …and after

the selection window introduced in Figure 1. Use relative paths refers to the output of the KMp3Indexer. If you enable this option, the Indexer will output any pathnames relative to the supplied directory. You can use the add filename to path option to add the file name to any path output. The show zeros option also applies to the Indexer output. If this option is enabled, the track number zero is replaced by a space character when you are saving your entries. The list box in the lower right panel shows the order in which the supplied information will be written to the index file. You can choose from the following options: Track (the track number of the song – if no track number is available, this will default to zero and be ignored for further output); Artist (the name of the group or artist); Title (the title of the track), Album (the name of the album that contains the track) and File (the filename). The four buttons adjacent to the list box (viewed top down) are used to move the selected item up by one position; to reload the item (this means deleting the list field and reinitialising every entry); to delete the item (the information linked to it is no longer stored) and to move the selected item down one in the list.

quite easy to correct them. Just double click on an individual entry (as shown in the right-hand panel of Figure 3) to edit that entry. Figure 6 shows the dialog box where you can edit the individual entry. If you need to delete one or multiple entries from the index, simply select the required entries and then click Index/Remove in the menu bar. The Indexer will remove the selected entries after prompting you to confirm. To delete an entire list, you will instead need to select Index/Clear index. If your index contains complete albums, one or more entries may contain the wrong artist tag. In this case, simply highlight the incorrect entries and then select Index/Unify Artist in the menu. The dialog box that then appears shows a list of artists for the selected entries in the left panel and allows you to select the required artist. After doing so, the artist appears in the top right panel. You can then click on apply to specify the text displayed here as the new artist for the selected entries. If the list does not contain the name of the required artist, simply type the correct name in the upper right text box, and then click apply. The KMp3Indexer automatically attempts to generate a track number by reference to the first character in the file name. Unfortunately, most file names do

Need a Change? If you happen to discover a few mistakes in an index you have just created, it is

Figure 7: Track numbers can be modified easily

not contain a track number, or the supplied number may be wrong. If you need to change the track number for multiple files, select the required index entries and then click on Index/Generate TrackNO…. The dialog box that then appears (Figure 7) contains the entries you have selected. You can now re-count the track numbers, by clicking on …by counting, or click on …by filename to apply consecutive numbers by file name. If you decide to re-count, you can enter a number in the start at box, to seed the re-count with the number you type. This field defaults to 1. If you then click on Generate, and the list contains ten entries for example, the selected items will be numbered one through ten in the order shown in the list box.

Export me The KMp3Indexer allows you to save the indices you have created in various formats. You can use the Export/Import options to select a format. Export to CSV and Export to XMMS Playlist are probably the most useful items here. CSV is short for “Comma Separated Values” and means that each entry will be represented by a line in a file, and that the individual fields will be separated by commas. You can use this format to import your index to KSpread, MS Excel, or a database. In contrast, you can use the XMMS Playlist format to export an index for the popular graphical MP3 player. ■
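The CSV layout can be illustrated with a couple of hand-written lines (the field order and file name here are illustrative, not the Indexer’s exact output):

```shell
# Two index entries: track, artist, title, album, file.
cat > index.csv << 'EOF'
1,Queen,Bohemian Rhapsody,A Night at the Opera,bohemian.mp3
2,Queen,Death on Two Legs,A Night at the Opera,death.mp3
EOF

# Because each field is comma-separated, standard tools can slice
# the index; here cut extracts just the titles.
cut -d, -f3 index.csv
```

This one-field-per-comma structure is exactly what lets KSpread, Excel or a database import the file directly.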

Figure 5: Configuring the Kmp3Indexer





Jo’s Alternative Desktop: xmtoolbar

Button Bar

Now don’t get me wrong – the Unix Desktop Environment introduced in last month’s deskTOPia [1] is quite nice, but some people may have missed all those little tools that the major desktop environments will tend to launch automatically. The fact that these tools waste your system resources is a completely different kettle of fish of course. But if you really want all that functionality, you can enhance the not-so-major desktops, adding the tools you require, while avoiding unnecessary waste of resources. To do so, simply run them automatically when you launch the GUI desktop.

Coaching Let’s assume that your favorite Window Manager only offers you a start menu, although you would prefer icons on your desktop. The xmtoolbar program launcher may be a bit long in the tooth, but it still provides a useful solution to this issue. The tool, which incidentally was written by Thomas Zwettler as a programming exercise in the summer of 1996, and has remained unchanged ever since, is based on Motif, a GUI toolkit that has been shunned by the Linux community due to its commercial background, and never quite made the grade. As some members of the Linux community were unhappy with the idea of doing without applications they had developed using Motif, they developed a freeware Motif clone called lesstif. Unfortunately, xmtoolbar still insists on the original, so the source code will not be much use to most users. If you intend to use xmtoolbar, you will

DESKTOPIA Only you can decide how your desktop looks.With deskTOPia we regularly take you with us on a journey into the land of window managers and desktop environments, presenting the useful and the colorful viewers and pretty toys.



Your desktop may not offer you a start menu, or you may not be happy with just one. Want more? Take a look at xmtoolbar! BY JO MOSKALEWSKI

Figure 1: Something is missing on initially launching the program

probably want to ensure that you have a statically linked binary somewhere on your storage media – in other words the entire functionality of the toolkit will be embedded in the program executable.

Installation The installation procedure is quite simple: The xmtoolbar-1.2-full-static.gz file is available either from http://www.x.org/contrib/desktop_managers/ or on the subscription CD with this issue. Simply expand the archive, copy the resulting file to a directory in your search path, and make the file executable.

the buttons themselves. Any icons used by xmtoolbar must be in xpm format. From a technical point of view there is no need to define a uniform size, but doing so will certainly improve the visual impact. You should also note that although images with transparent areas will be displayed, the transparent area will be converted to black. You can use Babygimp [2] or a similar tool to change this to a more friendly color gradient.

Make it Official If you have decided on a suitable default icon (as in Figure 3), ensure that you install it permanently. To do so, right click the icon and select Save config. By default you are required to save your configuration whenever you change it – if you forget to do so, all that effort will be mere history after quitting xmtoolbar. Before you get too annoyed, you might

gunzip xmtoolbar-1.2-full-static.gz
mv xmtoolbar-1.2-full-static /usr/local/bin/xmtoolbar
chmod 755 /usr/local/bin/xmtoolbar

like to check the Save-on-exit option under Preferences in the drop-down menu (see Figure 4).

Your first encounter with xmtoolbar may turn out to be somewhat prosaic: When you type xmtoolbar & to initially launch the button bar, the tool will complain about a missing icon and prompt you to click the OK button to create a default icon in xpm format (Figure 1). In fact, there is nothing to display before you complete this step. xmtoolbar is a simple icon bar that may be able to display bubble help texts, but definitely cannot display text within

Figure 2: The toolbar in action…
Figure 3: …and minimized to display only the default icon



restart xmtoolbar after having done so. This function is not particularly stable, but as you can minimize the whole toolbar by clicking on the default icon, as shown in Figure 3, you can tidy up your desktop quite well without any help from auto-hide.

the default button. A group can contain buttons but no other groups. The buttons in a group will appear in the opposite direction to the main menu of your default icon. The direction is specified in the configuration file, although the main menu will open downwards and the group buttons to the right by default; note that Figure 2 shows an inverted configuration.

Figure 6: Two xmtoolbar instances running on a KDE desktop

xmtoolbar*background: #8090aa
xmtoolbar*foreground: #ffffee

In Your Corner!


xmtoolbar will automatically take up residence in a corner of your desktop, unless of course you have used a command-line option to overrule the preferences. As an example, the following syntax

To configure a button or a group, you simply right click on the required element. If you right click a group, you can use the drop-down menu to assign a different icon to it, add a button or a short description (popup help), or even delete the entire group. A similar procedure applies to buttons. You can right click a button in order to assign a pixmap, select the application to be launched, assign parameters used on launching the application, type a description for the button, or remove the button. However, you will not be able to sort your buttons and groups.

Figure 5: The drop-down menu with a pixmap background

Do not assume that these typical settings will complete your configuration tasks. As Motif is a classical GUI toolkit, it will of course respect any XResources you have defined. To customize the default colors for example, you will need to add the following to .Xdefaults in your home directory:

Figure 4: Preferences

xmtoolbar -x +500 -y 0 &

will attach the button bar to the top of your desktop and indent it by 500 pixels. In addition to specifying the position, and instead of using the default configuration file, ~/.toolbar, you can pass a file with alternative options to the program when you launch it. Thus,

xmtoolbar -c ~/.toolbar2

will conjure up a second instance of xmtoolbar on your desktop, as defined in ~/.toolbar2. The xmtoolbar behaves slightly differently from other start menus, allowing you to add a subfolder (Group) and applications (Button) to your default icon, for example. The whole branch expands or contracts when you click on

GLOSSARY GUI Toolkit: A collection of predefined classes used for creating Graphical User Interfaces that provide the developer with scrollboxes, buttons, and menu structures, removing the need to develop these components for each new program.

If you would prefer to use a background image rather than a color (Figure 5), you might like to try the following line: xmtoolbar*backgroundPixmap: U grafic.xpm

Be aware that large images will slow xmtoolbar down considerably. But you can change the frame width for 3D effects, or the gap between the display elements and the frame, without impacting your resources: xmtoolbar*ShadowThickness: 2 xmtoolbar*MarginWidth: 1 xmtoolbar*MarginHeight: 1

If the current font does not appeal to you, no problem:

xmtoolbar*fontList: -*-lucida-medium-r-*-*-*-90-*-*-p-*-iso8859-1

To apply entries of this type, you need: xrdb -merge .Xdefaults

and then relaunch xmtoolbar.

INFO [1] Jo Moskalewski:“All Together”, Linux Magazine Issue 25, p73 [2] Babygimp:




out of the box

out of the box: linklint

The Right Links Broken links on web sites tend to leave the user with a poor impression of the Internet site owner. Do you really want to go to the trouble of testing internal and external links manually? If not, you might like to take a look at linklint. BY CHRISTIAN PERLE


If you embed a horde of links in the HTML page structure of your Internet presence, the temporary nature of this medium may cause you some headaches. Links to external pages are often obsolete immediately after going online. That leaves you with very little alternative but to test all of your embedded links regularly. Many HTML editors contain link checkers for this task, but loading your pages in an HTML editor, even though you do not really want to change anything, is somewhat tedious. You can save a lot of work by using a small tool that follows the Unix philosophy of “one tool for one job” to perform the check for you. Like linklint by James B. Bowlin, for example, a stand-alone program that lends itself well to shell scripting, and also does a very thorough job.

No Fuss Installation Conveniently, you do not need to compile linklint in order to install the program, as it was implemented in the form of a Perl script. Of course, Perl must be pre-installed, but this is now the case for most major Linux distributions that you are likely to come across. The linklint-2.3.5.tar.gz archive is available from or on the subscription CD. To install linklint, simply expand the archive and copy the linklint script to a directory in your path:

OUT OF THE BOX There are thousands of tools and utilities for Linux.“Out of the box” takes a pick of the bunch and each month suggests a little program, which we feel is either absolutely indispensable or unduly ignored.



tar xzf linklint-2.3.5.tar.gz
cd linklint-2.3.5
su                      (enter the root password)
cp linklint-2.3.5 /usr/local/bin/linklint
exit

/usr/local/bin in the /usr/local branch of the file system is a suitable target, because this area is not controlled by the package manager that came with your distribution. The directories in this branch of the file system are explicitly designed for manually installing additional programs, which are not available in your distribution’s default package format.

Appearing Locally As our first example, let us test a page structure available on the local machine. To keep things simple we will be using the linklint documentation, which is available in HTML format. Before we start, let us also ensure that a required page is missing:

cd linklint-2.3.5

out of the box


mv doc/bugs.html doc/bugz.html

Now it’s time to launch linklint: linklint -doc report /doc/@

The -doc report parameter tells linklint to create exhaustive logfiles in the report directory. /doc/@ tells the program to check any files with links starting in the doc directory. We will be discussing how to use so-called linksets in more detail later on. The fact that linklint defaults to searching in the document root of your web server means that you have to prepend a slash, although doc is actually in the current directory (and not below root in your file system). As we are not

Figure 2: External Links

using a web server in this example, the program assumes the current directory as the document root. Now let’s look at a sample of the test results, as shown in errorF.txt (Listing 1). This file contains a list of broken links. As expected, linklint has noticed that bugs.html no longer exists, and displays a list of files that refer to the missing file. Linklint stores the same information in a HTML file with the same prefix, providing links that allow you to access the filenames in the list. Figure 1 shows the results as viewed in Netscape Navigator. You can also opt, if required, to restrict linklint‘s output file format to either -textonly or -htmlonly. Remote.html is another interesting candidate. This file lists all the links that refer to targets on web servers outside of the local page structure. In the case of the linklint doc files, these are the links listed in Figure 2.


Figure 1: Missing Links

As previously mentioned, you can use so-called linksets to tell linklint what files or directories you want it to search through on your local file system or web server. A linkset is a collection of files or

Figure 3: Remote URL errors

directories in which some characters are interpreted as wildcards. If the linkset contains only a single file, only this file will be checked. If the target is a directory, three characters can be used to terminate the set: If the last character is a simple slash, /, linklint will look at the page as it presents itself to a user requesting the directory from your web server – in this case, the program will check the index.html file on that page, but nothing else. If you append a hash sign, #, to the slash, linklint will inspect any HTML files found in the target directory, but ignore any subdirectories. If you add an “at” sign, @, to the target directory name, the tool will check any linked files and subdirectories. However, linklint will still ignore any links that refer to




out of the box

Listing 2: Results

Listing 1: errorF.txt

found 104 urls: ok
-----
    8 urls: moved permanently (301)
   11 urls: moved temporarily (302)
    2 urls: could not find ip address
    7 urls: not found (404)
    1 url:  timed out connecting to host
    1 url:  timed out getting host ip address
file: errorF.txt
date: Thu, 12 Sep 2002 16:32:17 (local)
Linklint version: 2.3.5
#-------------------------------------------------------#
ERROR

7 files had broken links


Linklint checked 115 urls: 104 were ok, 11 failed. 19 urls moved. 4 hosts failed: 4 urls could be retried.

[file: /doc/index.html] had 1 broken link
    /doc/bugs.html
/doc/doc_index.html had 1 broken link
    /doc/bugs.html
/doc/hints.html had 1 broken link
    /doc/bugs.html
/doc/howitworks.html had 1 broken link
    /doc/bugs.html

linklint -http -host /~perle/

This command will check the index page of the web site for the user perle on the host. If you additionally want to check all the files at the same level of the file system, the syntax is as follows:

linklint -http -host /~perle/#

And if you want to check any linked subdirectories on the site:

linklint -http -host /~perle/@

The last parameter in each of these three examples stipulates the linkset.

/doc/index.html had 1 broken link
    /doc/bugs.html
/doc/inputs.html had 1 broken link
    /doc/bugs.html
/doc/outputs.html had 1 broken link
    /doc/bugs.html

Incidentally, linklint can handle more than one linkset in a single command. If you are performing a more extensive check, you might like to create a full logfile using the -doc directory option. While checking large web sites with linklint you might exceed the program’s default threshold of 500 files. However, you can use the -limit X flag to raise the threshold to X files. You can also store a group of options in a text file and pass the file to the program using the @optionfile flag. The extensive HTML documentation available provides information on some more of the program’s interesting functions. ■

Away Game In our last example, we are going to allow linklint to leave the confines of the original server. This function is very useful if you want to test a linkfarm (such as your Netscape bookmarks) in order to bring it up to date:

linklint -net -http -host /~perle/Links.html

This tells linklint to check only the Links.html file. However, the -net option allows the links to point to external targets one level down. The output from this simple command is a status report containing the URLs of the links in alphabetical order. The actual results, as shown in Listing 2, would indicate that now it might be a good idea for me to tidy up my linkfarm.


targets external to your web server. You need the HTTP protocol to access external targets, and this is why the program obligingly supplies the -http option for this job. Let’s look at a practical example – in fact it is my own home page, located at URL http://www.

Christian Perle currently works as a developer at secunet Security Networks AG. Christian discovered Linux in 1996, after playing around with the Sinclair ZX 81, Atari ST and finally IBM PC. When not hacking Linux stuff he can often be found playing guitar and “Magic: The Gathering”.

GLOSSARY HTML: “Hypertext Markup Language”; the page markup language used for World Wide Web pages that originated at CERN. It uses so-called tags to define text passages as headings, lists, tables and similar. Shell Script: A text file containing shell commands that are launched automatically in quick succession. Simple flow control constructs, such as loops and conditions, are also possible.



Compile: A program’s source code, written in a high-level programming language, cannot be executed by the operating system. You first need to compile the program, that is, to translate it into machine code using a compiler to provide the processor with an executable version. Perl: The “Practical Extraction and Report Language” is an extremely powerful scripting language that is available for many platforms besides Linux.

URL: "Uniform Resource Locator"; the unique address of a resource on the Web. In addition to the host name and path, the URL also stipulates the transfer protocol. This would mean using the HTTP protocol to access, but the FTP protocol to access linux/mirrors/.
HTTP: The "Hypertext Transfer Protocol" is the transfer protocol for the World Wide Web.



RPM Database at the Console

Piped Packages

For many users the Red Hat Package Manager, or the easily remembered short form, RPM, is a familiar installation tool. You can use the rpm -i packagename command to install an RPM package to your hard disk. And if you decide not to keep the program, you can normally remove it without trace by typing rpm -e programname. But the Package Manager is capable of a lot more, as RPM maintains a database that can help you manage your programs. Let's take a look at an example:

Installation tools for RPM packages are available in all shapes and sizes – from Gnorpm, through Kpackages, to YaST. However, if you prefer to get straight to the point and get the job in hand done quickly, you might like to take a look at the rpm command-line tool. BY ANDREAS KNEIB

[akneib/akneib]$ rpm -qi xmms


This command displays a short description of the Xmms player. The -q option is derived from the word query. -i means information, but only in combination with the query option, as it will otherwise launch an installer. As you can see in Figure 1, the info for Xmms is organized in sections. Besides the description in the Description line, the version number and authors are also detailed. If you are interested only in a specific part of this output, you can use the grep tool in the following way:

[akneib/akneib]$ rpm -qi xmms | grep Summary
Summary : The Sound player with the WinAmp GUI

The standard output of the rpm -qi xmms command is sent to grep using the "|" character. The technical term for this is a pipe between the program instances. grep will filter and display the lines containing the "Summary" pattern. However, instead of using a general grep, you can address the sections directly:

[akneib/akneib]$ rpm -q --queryformat U
"%{Summary}\n" xmms
An MP3 player

--queryformat will provide you with specific package information. The "\n" in this example adds an end-of-line character to the output, and is not part of the actual package information. You can use a combination of --queryformat and grep to help you search for specific packages. In our next example the database supplies a list containing all the packages that belong to the Games group. We will deal with the -B option for grep later on in this article.

[akneib/akneib]$ rpm -qa U
--queryformat "\n Package U
%{NAME}\n Group %{Group}" | U
grep -B 1 Games
Package xkobo
Group X11/Games
--
Package nethack
Group Amusements/Games
--
[...]
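grep's -B option can be tried out without touching the RPM database by simulating the rpm output with printf (the package names below are just placeholders):

```shell
# two of the three simulated records match "Games"; grep -B 1 prints
# the line Before each match, i.e. the corresponding Package line
printf '%s\n' \
  'Package xkobo'   'Group X11/Games' \
  'Package glibc'   'Group System/Libraries' \
  'Package nethack' 'Group Amusements/Games' \
  | grep -B 1 Games
```

Note that grep separates non-adjacent context groups with a "--" line of its own, which is what the stray separator between the matching packages in the output above is.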

However, the rpm -qi command is not always successful in its answer:





information on Xterm, such as with the following:

[akneib/akneib]$ rpm -qd U
xf86-4.0.3-33 | grep xterm
/usr/X11R6/lib/X11/doc/html/U
xterm.1.html

Figure 1: Output for the "rpm -qi xmms" command

[akneib/akneib]$ rpm -qi xterm
Package xterm is not installed

Xterm is obviously part of another, as yet unknown package. So, where do you start? Perhaps you have already encountered the which command. which searches all the directories in your $PATH variable for the full path to a command, and then displays this path:

[akneib/akneib]$ which xterm
/usr/bin/xterm
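which simply scans the colon-separated directories in your $PATH from left to right; a minimal re-implementation (the function name find_in_path is invented for this sketch) makes the mechanism explicit:

```shell
# report the first executable file named $1 found in the $PATH directories;
# the body runs in a subshell so the IFS change does not leak out
find_in_path() (
    IFS=:
    for dir in $PATH; do
        if [ -x "$dir/$1" ] && [ ! -d "$dir/$1" ]; then
            printf '%s\n' "$dir/$1"
            exit 0
        fi
    done
    exit 1
)

find_in_path sh    # prints e.g. /bin/sh, depending on your $PATH
```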

Now you have the full path to the xterm command. Your next step is to use the RPM option -f to launch a query for the package containing the required file. Instead of typing rpm -qf /usr/bin/xterm, you could try a more elegant solution based on command substitution:

[akneib/akneib]$ rpm -qf `which xterm`
xf86-4.0.3-33

[akneib/akneib]$ rpm -qf $(which xterm)
xf86-4.0.3-33
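Both notations behave identically: the shell runs the enclosed command first and substitutes its output into the command line. The $() form has the advantage that it nests cleanly, as this small contrived check shows:

```shell
old=`echo substitution`      # traditional backtick form
new=$(echo substitution)     # POSIX $() form, easier to nest
[ "$old" = "$new" ] && echo "identical results"   # prints "identical results"

# nesting without temporary variables: the directory holding the sh binary
dirname "$(command -v sh)"
```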

So, Xterm belongs to the xf86-4.0.3-33 package. As the query rpm -qi xterm did not return a result, you may find some information on the terminal emulation in this package's documentation instead. The -d option will display a list of documentation files. You can again use the grep tool to search for any relevant


[akneib/akneib]$ lynx `rpm -qd U xf86-4.0.3-33 | grep xterm`

You can use a variety of combinations of the RPM command-line flags. In the following example we have added the -a flag. This allows you to query all of the packages:

[akneib/akneib]$ rpm -qia | less

This will display a short description of all the packages installed on your system. The pager less allows you to read and browse through the output. You can use the following command to create a text file containing the names of all the installed packages:

[akneib/akneib]$ rpm -qa > allpackages.txt

Now command substitution might sound somewhat hairy, but what it really means is telling the shell to launch a command within another command. The command in backticks is replaced by its results. Instead of the backticks you could also use the following notation:


Now all you need to do is start the browser. To save yourself a lot of typing, you can use the arrow keys to access the last command and create a command substitution. Specify your browser as the executable, by typing it at the start of the command-line, and point it at the xterm.1.html.

If this list is too untidy for your liking, you can use the sort tool to sort the output alphabetically:

[akneib/akneib]$ rpm -qa | U
sort -df -o allpackages.txt
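sort's -d flag selects dictionary order and -f folds upper and lower case together; -o lets sort write back to the very file it is reading, which a plain > redirect cannot do safely, because the shell would truncate the file before sort reads it. A quick demonstration with a throwaway file (the file name is arbitrary):

```shell
printf '%s\n' Zsh apple Bash > pkglist.txt   # unsorted sample list
sort -df -o pkglist.txt pkglist.txt          # sort the file in place
cat pkglist.txt                              # apple, Bash, Zsh
```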

Simply count the lines to find out how many packages you have installed. Of course, there is a command-line tool available for this task:

[akneib/akneib]$ rpm -qa | wc -l
561

-c and -l are two further useful options. -c displays the configuration files for a package, and -l a list of the files the package contains:

[akneib/akneib]$ rpm -qc syslogd
/etc/init.d/syslog
/etc/syslog.conf
[akneib/akneib]$ rpm -ql syslogd
/etc/init.d/syslog
/etc/syslog.conf
/sbin/klogd
/sbin/rcsyslog
[...]

Users will often want to take a look at a package's contents before installing the package. In this case, you can opt for -p, a flag that lets you look at RPM files you have not yet installed. The following command lists all the files in the mutt-1.3.99-1.i386.rpm package, which has not yet been installed:

[akneib/akneib]$ rpm -qpl U
mutt-1.3.99-1.i386.rpm
/usr/doc/mutt-1.3.99/ABOUT-NLS
/usr/doc/mutt-1.3.99/COPYRIGHT
[...]

Dependencies

The fact that the RPM Package Manager can recognize dependencies is a major advantage. It is often impossible to install a program because it requires you to pre-install another package. The --requires option tells you what libraries and packages are required to install the RPM file:

[akneib/akneib]$ rpm -qp --requires U
leafnode-1.9.21-1.i386.rpm
[...]

To find out what libraries and packages an RPM file you are thinking of installing provides, try the --provides flag:

[akneib/akneib]$ rpm -qp --provides U
xforms-0.89-3.i386.rpm
[...]

The --whatprovides option allows you to query libraries:


[akneib/akneib]$ rpm -q U
--whatprovides
e2fsprogs-1.19-56

This example shows what installed package the library belongs to. You can also attack this from the opposite direction: the following will list the installed packages that require the library:

[akneib/akneib]$ rpm -q U
--whatrequires e2fsprogs-1.19-56
dump-0.4b21-34
mc-4.5.51-46

What do you do if the installation of a package fails on account of unresolved dependencies? This problem not only applies to RPM packages, but also when you are installing from source code. Let's assume that you want to compile the LaTeX frontend LyX from the sources, but program execution terminates after launching the configure script, and displays the message Cannot find forms.h. Searching for the missing file leads you directly to your distribution CD, which contains a directory full of RPM files ready for access by tools such as YaST. Now change to this directory and let your computer do the walking:

~/cdrom > U
rpm -qpil --provides *.rpm | U
grep -B 30 forms\.h$ | less
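The backslash in forms\.h$ matters: an unescaped dot matches any single character. The difference is easy to verify with grep alone, using a made-up file name that differs only in that position:

```shell
# the unescaped pattern also matches the bogus name "formsxh" ...
printf 'formsxh\nforms.h\n' | grep -c 'forms.h$'     # counts 2 matches
# ... while the escaped dot matches the literal file name only
printf 'formsxh\nforms.h\n' | grep -c 'forms\.h$'    # counts 1 match
```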

The asterisk in “*.rpm” is a wildcard. The program will thus supply a full file list for any files with .rpm, and also tell

Figure 2: The file forms.h has been found

create an RPM package from the compiled sources. This allows you to benefit from a convenient package manager when installing programs you have compiled yourself.

Table 1: Options
-a              Query all packages
-e              Remove package
-i              Package information / Installation
-U              Update package
-qi             Detailed package information
-f file…        Which package contains the file?
-d              Display documentation files
-c              Display configuration files
-l              Complete file list for package
-p              Query a non-installed package
--requires      Dependencies for package
--provides      Support for package
--whatprovides  What package provides…
--whatrequires  Which packages require…
-h              Progress indicator
--force         Force installation
--nodeps        Ignore dependencies


you what programs each RPM package on the CD provides. The results of this search operation are piped to the grep tool, which will extract any lines in the stream containing the search pattern forms\.h$. The dot in the filename needs to be masked by a backslash in this case, as it would otherwise be interpreted as a single-character wildcard. The option -B 30 tells the tool to start the output 30 lines before the line containing the search pattern. This area will probably contain some notes helping you to identify the package to which forms.h belongs. To provide for easier reading, the grep output is piped to the less pager. In this case, forms.h is a component of the xformsd package (Figure 2). You can now use your distribution tool to install Xformsd from CD. After having done so, you should be able to compile LyX and run make install without any further errors. Incidentally, the checkinstall program [1] can be used to

Your distribution CDs may contain a whole bunch of applications, but by no means everything you might need. If you download a current program version off the Web, only a few steps are normally required to install it. Let's look at how you install a package first:

[akneib/akneib]$ rpm -ihv mutt-1.3.99-1.i386.rpm
mutt ######################

The -i flag (without -q!) will install the RPM package. The -v flag displays the package name and -h a progress indicator. To delete a package including any and all programs and documentation, simply type…

[akneib/akneib]$ rpm -e mutt

The -e means erase and allows you to zap the package completely. The --nodeps option will ride roughshod across your system and allow you to install or remove a package, even if dependencies on other packages exist:

[akneib/akneib]$ rpm -e --nodeps perl

You can use the -U flag to update a package – this will simply install the package if it was not installed previously:

[akneib/akneib]$ rpm -Uhv mutt-1.3.99-1.i386.rpm

The --force option is equivalent to force-feeding; you can use it to install a package, even if it already exists on your hard disk, by simply overwriting the existing files. Take care when using both --nodeps and --force. Table 1 contains an overview of the options you can use to extract information from your database. ■

INFO [1] CheckInstall: gabe/2002/05/062-ootb/checkinstall-3.html






Basic Programming

Programmers have been looking for a Visual Basic clone for Linux for quite a while now. Gambas is well on the way to closing this gap. BY FRANK WIEDUWILT


For many Linux converts, losing a simple programming language like Basic, which is extremely useful for straightforward programming tasks, was a hard blow. Gambas [1], which stands for "Gambas Almost Means BASic", offers a development environment that is similar to Visual Basic. Gambas' author, Benoît Minisini, has created a program that allows you to create graphical interfaces with just a few mouse clicks, and it comes complete with a convenient editor for writing source code.

Installation

You will find Gambas on this issue's subscription CD. To compile Gambas you will need both the sources and a Qt version > 3.0.2. Users of SuSE Linux version 8.0 should use Gambas 0.37, which is also included on the subscription CD, as Gambas 0.38 will not compile on that distribution. Type the following command in your favourite shell to expand the gambas-0.38.tar.gz archive:

tar -zxvf gambas-0.38.tar.gz

Now change to the directory you just created, gambas-0.38 (cd gambas-0.38), and type the following commands in the following order: ./configure, make, and, ensuring that you are root, make install. This completes the installation of the program, which you can now launch by typing gambas & in a shell.

Initial Launch

After launching the program, you are prompted to choose between creating a new project or opening an existing one. If you select New Project, Gambas will prompt you to enter the directory where you will be storing the project files before opening a number of windows where you can edit your project files (see Figure 1). The left panel then shows an overview of the current project in a tree structure. The top panel on the right contains a selection of GUI elements that you can add to the program interface – the properties of the selected element are displayed on the right. To start programming, simply draw one or multiple windows – Gambas refers to them as forms.

Figure 1: Gambas



Figure 2: Form Designer and Toolbox

A Question of Forms

A GUI tool is available for creating forms. You can use the Form Designer just like a drawing tool to position individual elements on the background of the programming environment (see Figure 2). A full selection of components typically required for GUI programming, such as buttons, text boxes, combo

Figure 3: The Component Explorer


Figure 4: The Menu Editor

boxes, tree and table view, and picture boxes, is available. Now select the required element in the toolbox, and drag it to the required size in the form window. You can use the Properties window to edit the element’s properties. A grid is available to ensure exact positioning of the elements. Use the Properties window to specify the name and appearance of the element. Double clicking an element will open the source code editor where you can type directives to define the various events that the element will react to. Each GUI element has a number of properties, such as color, size, and position, for example, and also a number of methods. To determine the options available for a specific element, simply launch the Component Explorer (see Figure 3). Select an object in the left panel to display its properties and methods in the right panel. A convenient Menu Editor is available to help you create menus – just right click in a form to open the editor (see Figure 4).

button in the editor window. The dropdown menu that appears contains a full selection of commands for source code editing. The editor also allows you to define breakpoints and select variables for debugger traces. More support for keyboard shortcuts would be useful – to avoid having to continually switch between keyboard and mouse while coding. However, the fact that the editor automatically capitalizes Basic keywords, highlights commands and remarks is quite useful. You can customize the editor to suit your requirements by choosing colors for the text background, and for syntax highlighting. Unfortunately, Gambas will not allow you to integrate your favorite editor, such as Kate or Emacs, for example. However, the forms and classes are stored in text format, which would allow you to use another editor for any source code authoring before you compile it with Gambas.


Figure 5: The Source Code Editor

Bug Hunting Debugging source code can often turn out to be quite a challenge. Gambas provides the developer with a debugger that allows you to trace variables at run time and define breakpoints (Figure 6).

Linking

Gambas provides a compiler that will translate your Basic source code into an executable. You press the [F7] key to launch the compilation process. If everything works out OK, the executable, which will be named after your project, appears in the project directory. You will need to have Qt installed on your system to run your program.

Editing Source Code


The source code editor may look somewhat spartan at first glance (Figure 5). But try clicking the right mouse

The Gambas home page and the subscription CD both include a substantial language directory, the


Figure 6: The Debugger

Gambas Language Encyclopaedia, which also contains a few sample programs. The file is available both as a PDF and in StarOffice 5.2 format.

Conclusion Gambas is an interesting project. Even at this early stage it will allow you to create small programs, and it was stable during our tests. Program development is quick and painless due to the various GUI tools, and the footprint of the executables Gambas creates is surprisingly small. Future versions promise an enhanced debugger and a database access component. ■

GLOSSARY
GUI: "Graphical User Interface"; the visual elements of a program, as the user sees them on screen.
Syntax Highlighting: The editor uses colors to highlight commands, remarks, and variables, allowing the developer to keep track more easily and recognize typos at a glance.
Compiler: A compiler is a program that converts the source code authored by a developer into a machine-readable and executable format.

INFO [1] Project home page: http://gambas.




dig and DNS

Peeking behind the drapes: dig and DNS

Digging the Data Forest

Normal users view the Domain Name Service as a kind of black box, as shown in Figure 1. The service is installed somehow – today this often occurs automatically by enabling a DNS transfer when opening an Internet connection via PPP. Subsequently you can type any host name and rely on your computer to respond with a known IP address, or at least find the address automatically. Of course this (normally) works in reverse order, too – your computer should be able to supply the symbolic name mapped to a given IP address if you desire. In a typical configuration like the one just described, the local computer will not need to know anything about how DNS works. It simply sends a request to a pre-defined address, and receives an answer from that address. Admittedly, this "pre-defined" address is normally hosted by your friendly neighborhood ISP, and normally means a Linux computer. So where do they get their data from? The Domain Name Service is a hierarchical system. Otherwise it would be impossible for an AOL host, or a computer belonging to the US Government, or anyone else for that matter, to store all the IP addresses of all the computers on the Internet. The size of the database would not be too difficult to handle, but how could you keep the database up to date? New PCs are continually being bought, and laptops change their IP addresses every time they are attached.

The Domain Name Service (DNS) translates between user-friendly host names and computer-friendly IP addresses. In this issue we will not be looking at how to install DNS but taking a peek behind the curtains, and letting a small tool do some spying for us. BY MARC ANDRÉ SELIG

DNS solves this problem based on the tried and trusted principle of “divide and conquer”. Anyone can set up a DNS server, which will be responsible for its

GLOSSARY PPP: The Point to Point Protocol is the most commonly used procedure for semi-automatic configuration of network connections. One of the communication partners – commonly referred to as the “server”, although the protocol does not differentiate between clients and servers – knows the required connection data, e.g. the IP addresses, and transfers them to the other computer.This allows for easy configuration of dial-up connections by modem, ISDN, or DSL. A Microsoft extension also transfers the DNS server address at link level, and although this clearly contravenes the protocol design, it does make life easier for the user. BIND: Short for “Berkeley Internet Name Domain”, the most common type of name server. Most Linux installations come without a name server of this type, if only for security reasons, however, the additional tools, such as dig, are normally installed.



own little corner of the Internet. The University of Manchester has its own DNS server, for example, and this server is responsible for the addresses in the domain. We refer to the server as being authoritative in this case. If a user requires the IP address for, they ask the DNS server responsible for (see Figure 2). An intermediate DNS server, located at the Internet provider's site, will receive the request and forward it to the authoritative server; subsequently the ISP's server will pick up the answer from


Figure 1: DNS as a black box

the authoritative server and forward it to the computer that originated the request. The intermediate server knows next to nothing about the network, being restricted to forwarding DNS requests and answers. A server of this type is thus referred to as a forwarding name server. Of course the server is redundant: If you like, you can install a DNS server of your own on a local machine and do without the forwarding name server hosted by the ISP, but we will not be looking into that approach.

Spies in Action

Instead, let's try out that spy I promised you. The dig tool sends a request to a Domain Name Server and displays the answer in detail. dig is actually a component belonging to the BIND distribution and was designed for testing name servers. It will additionally provide us with a lot of internal detail. dig is launched from the command line, in a terminal window, for example. (If the program has not yet been installed, you will normally find it in a package called bind-utils.) You simply tell dig the name of the host or IP address you are looking for. You can also optionally supply a name server – prepend an at sign, @, to do so – to query a name server other than the default. As a final argument, you can also specify the query type, which allows you to output both an address and various other details:

dig U any

Listing 1 contains an example. This shows an enquiry directed to a name server hosted by NTL ( for the host name The format of the results is based on that of a BIND configuration file; remarks are

Figure 2: DNS Server at the University of Manchester

indicated by semi-colons at the beginning of the line. dig repeats the command-line arguments first and then displays the answer from the server in a more-or-less intelligible format. Our answer comprises a header and four sections:

The query (QUESTION SECTION), the actual answer (ANSWER SECTION), the authority section (AUTHORITY SECTION) and various additional data (ADDITIONAL SECTION). This query was for, type A, class IN. The type indicates the result type, in this case a numeric IP

Listing 1: Example of a dig query

[john@black john]$ dig @
; <<>> DiG 9.2.1 <<>> @
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 35431
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 3, ADDITIONAL: 0
;; QUESTION SECTION:
;




55757 55757




84177 84177 84177



;; Query time: 31 msec
;; SERVER:
;; WHEN: Tue Nov 5 11:49:23 2002
;; MSG SIZE rcvd: 143

[john@black john]$
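Because dig writes plain text, its sections can be cut apart with standard tools. The sketch below pulls just the ANSWER SECTION records out of a saved transcript using awk; the host names and the 192.0.2.x address are invented stand-ins from the documentation range, not values from Listing 1:

```shell
# print only the records between the ANSWER SECTION heading and the
# next ";;" heading, skipping blank lines
awk '/^;; ANSWER SECTION:/ { grab = 1; next }
     /^;;/                 { grab = 0 }
     grab && NF' <<'EOF'
;; QUESTION SECTION:
;www.example.com.                IN      A

;; ANSWER SECTION:
www.example.com.        55757   IN      A       192.0.2.10

;; AUTHORITY SECTION:
example.com.            84177   IN      NS      ns1.example.com.
EOF
```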

Table 1: Important DNS data types
A       IP address. A host name can be mapped to several IP addresses!
CNAME   Alias: a host name refers to another host name first, and this host points to an IP address.
PTR     Pointer: an IP address pointing to a host name.
MX      Mail Exchange: which server will receive email for the domain in question?
NS      Indicates a name server with authoritative data for a specific address or an entire address block.
SOA     "Start Of Authority": the data shown here is valid for any subsequent addresses. Entries of this kind contain validity marks and use an email address to point to the host master responsible for the domain, that is, to the person responsible for maintaining the data.
ANY     A pseudo data type you can pass to dig to list any available data.





Figure 3: The DNS Tree (the root "." at the top, top-level domains such as "com." and "uk." below it, then domains and individual hosts)

address. Table 1 contains a list of important data types; all these types can be queried directly by passing the corresponding argument to dig. The IN class refers to the “Internet” and is the only class that will occur in the examples provided in this article. The ANSWER SECTION contains the requested data. This section is mandatory! If a name server cannot answer a request, but can point to another name server that may have the answer, the results it returns will not contain an Answer Section. Instead the AUTHORITY SECTION will contain details on the name server or servers that have authoritative data for the zone in question. In our example can answer our request, and returns the expected results: is a CNAME (canonical name) for, and this host has the IP address But if we require authoritative data, that

is an answer that is guaranteed to be correct, we will need to contact one of the name servers in the list, that is, etc. The data entries contain not only the host name, class, data type, and content (IP address or an additional host name), but also a time value, such as 55757. This is a kind of "best before" date for the supplied data – in other words, the content of this line will remain valid for at least this period, after which it will need to be updated.
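The TTL values are specified in seconds, so a line of shell arithmetic is enough to see how long the 55757 from the listing remains valid:

```shell
ttl=55757
printf '%d hours, %d minutes\n' $((ttl / 3600)) $((ttl % 3600 / 60))
# → 15 hours, 29 minutes
```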

Climbing Trees This still leaves one important question unanswered. has authoritative data for the domain. But how do we know that, or to be more precise, how does know that? To answer this question, we will need to climb the hierarchical DNS tree. Figure 3 shows the tree in the traditional

Box 1: dig and Your Firewall

As a responsible network user you will probably have installed a packet filter (a "firewall") on your computer. If you do not run a DNS server of your own, your filter should only allow DNS requests to be transmitted to your Internet service provider's DNS server, and should only accept DNS answers from this server. To try out the examples in this article, you will need to define the following exceptions, which will allow an almost arbitrary exchange of DNS data. Needless to say, this is not recommended for a production server and you should reset your firewall after completing the experiment! For iptables (Linux kernel version 2.4 or newer), you will need to enter the following commands as the root user:

iptables -I OUTPUT 1 -p udp --dport 53 -j ACCEPT
iptables -I INPUT 1 -p udp --sport 53 -j ACCEPT

If you still use ipchains (Linux 2.2), replace iptables by ipchains and use small letters for the INPUT and OUTPUT keywords:

ipchains -I output 1 -p udp --dport 53 -j ACCEPT
ipchains -I input 1 -p udp --sport 53 -j ACCEPT



way with the root at the top. The name of the root domain is a single period. Immediately below the root are the so-called top-level domains, such as uk, com, net, and org. The root servers have authoritative data for the root domain and thus know the servers that can answer requests for the top-level domain uk, for example. To prevent a chicken-and-egg scenario from occurring, the root servers are fixed. Their names are, and so on, and the servers are distributed, as needed, all over the globe. Their IP addresses have not changed for years, and every root server knows them. Additionally, these IPs are listed in a "hints" file that is distributed globally, and which every name server is originally based on. The root servers are subject to an incredible load and without them more or less nothing on the Internet would work. Of course, they only answer the

GLOSSARY
Octet: A traditional IP address comprises 32 bits, which are divided into bytes containing 8 bits each, in typical computer fashion. This leaves you with four bytes, or octets. In a DNS tree, the last part of the name is always closest to the root level; so the "uk" part of will be superordinate to "". A normal IP address is built in the reverse order. The figure 202 at the end of is fairly insignificant, and merely designates a specific computer on a LAN. In contrast, the 12 at the start of the address is extremely important, as it designates the network itself. So it makes sense to reverse the IP address order for DNS. The suffix indicates a "disguised" IP address and allows us to formulate a pseudo host name, which we can query using DNS, as previously described.
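The octet reversal described in the glossary is easy to script. The special reverse-lookup domain, which the text leaves unnamed, is in-addr.arpa in standard DNS; the function name and the example address (from the 192.0.2.0/24 documentation range) are my own:

```shell
# build the pseudo host name used for reverse (PTR) lookups:
# reverse the four octets and append the in-addr.arpa domain
reverse_ptr() {
    IFS=. read -r o1 o2 o3 o4 <<EOF
$1
EOF
    printf '%s.%s.%s.%s.in-addr.arpa\n' "$o4" "$o3" "$o2" "$o1"
}

reverse_ptr 192.0.2.10    # → 10.2.0.192.in-addr.arpa
```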


load, no less than five servers currently handle this task. And all of these servers will answer public queries for domains within the UK. After being informed by the root server of an authoritative data source for uk, the name server will ask that server for the next branch of the tree, such as If the host name is even longer, this process is simply continued until the last node, the address itself, is reached. Figure 4 shows you an example of this process from the viewpoint of a single name server.

Figure 4: A complete DNS query (the local server first asks a root server about ".", is referred to the authority server for "uk.", then to the one for the domain, and finally queries the authoritative server for the host itself)

same questions time and time again (“who is authoritative for uk?”), but at an amazing speed. We were so impertinent as to bother a root server

with dig to create Listing 2. As you can see, a top-level domain such as uk for United Kingdom needs more than one authoritative server. To distribute the

Listing 2: What servers are authoritative for uk?

[john@black john]$ dig uk
; <<>> DiG 9.2.1 <<>> uk
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 48832
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 4, ADDITIONAL: 5
;; QUESTION SECTION:
;uk.


You will not always need the IP address for a known host name. Sometimes the IP address is known, and you need to query the host name mapped to it. Admittedly, only computers and not users will tend to perform this kind of search, but the lookup direction is still quite common. The methods described earlier work quite well when used in combination with a fairly neat trick. Let us assume that the IP address occurs in a logfile. If your computer needs to learn the host name mapped to this address to tidy up some statistics, it will simply reverse the order of the octets in the address and add the special domain. ■


INFO
[1] BIND:


172800 172800 172800 172800



[2] P. Mockapetris: RFC1034 “Domain Names – Concepts and Facilities”(1987) [3] P. Mockapetris: RFC1035 “Domain Names – Implementation and Specification” (1987) [4] List of root servers: net/domain/named.root

172800 172800 172800 172800 172800



;; Query time: 92 msec
;; SERVER:
;; WHEN: Tue Nov 5 12:09:03 2002
;; MSG SIZE rcvd: 199

[john@black john]$ NS0.JA.NET. NS.UU.NET. SEC-NOM.DNS.UK.PSI.NET.


;; AUTHORITY SECTION: uk. uk. uk. uk.

Marc André Selig spends half of his time working as a scientific assistant at the University of Trier and the other half as a medical doctor in the Schramberg hospital. If he happens to find time for it, his current preoccupation is programming web-based databases on various Unix platforms.




Brave GNU World

The monthly GNU Column

Brave GNU World

In this monthly column we bring you the news from within the GNU project. In this issue we will look at localisation within KDE and email with Java. We also cover software patents and speech compression.

Welcome to another issue of Georg’s Brave GNU World, this time with a variety of topics from different areas. BY GEORG C. F. GREVE

KDE en_GB

John Knight pointed the “KDE en_GB” project out to me; he is its initiator and coordinator. The goal of the project is to provide a British English (en_GB) localization for the well-known K Desktop Environment (KDE). Many people value “their” English and, as in this case, do not feel comfortable with the far more widespread American English. This project allows them to choose their familiar brand of English. It also offers advantages in education, because in some countries British English is the authoritative form, and pupils who only ever see American English on their computers might develop problems with their own language.

Therefore John believes that, as one of the major results of his project, KDE will gain an advantage in the schools and universities of Great Britain, Australia and other English-speaking countries of the former British Empire. John himself happens to be Australian and started the project about one and a half years ago because – by his own account – he used to be overly pedantic and wanted to put that skill to good use. Together with Malcolm Hunter (England), Dwayne Bailey (South Africa), Aston Clulow (Australia) and Ken Knight, his twin brother, John is trying to keep the translation as up to date as possible, because the ongoing development provides a permanently changing basis. This is one of the main difficulties the project faces. Other problems arise from programmers writing mixtures of British and American English; sometimes Americanisms are also overlooked by the translators. And so additional pairs of eyes are always welcome.

Figure 1: Minimise the use of “minimize” with en_GB locale support

By the way: it appears that contact with Will Stephenson, another volunteer, was lost because his email address does not seem to work. If Will Stephenson reads this: John asks you to get in touch with him. John would also like to encourage the large distributions to support the project, since some of them require installing the package by hand. Within the project, a list of all English-speaking countries has been created, recording the preferred form of English in each country. Even if this list is not complete, it could help distributors choose the optimal automatic defaults. And finally the work of Dwayne should be mentioned. He is being financed by the South African government to create modules for all 11 languages spoken in South Africa – when John spoke to him last, he was busy with both the Xhosa and Zulu modules for KDE. The freedom to do such projects is truly invaluable and cannot be put into monetary terms; it is clearly a major argument for Free Software. It also shows that Free Software not only encourages cultural diversity between countries, it also strengthens cultural diversity within a country. As usual within the KDE project, the work of the translation teams is published under the terms of the GNU General Public License, and if you are interested in more information about translations of KDE, you should take a look at the KDE home page for translators and documentation writers, where you will be made welcome. [1] This is also where new translators can find information and new translation projects can be started.

JMail

JMail [2] is an email program written in Java by Yvan Norsa and published under the GNU General Public License (GPL). Originally started as a homework project for school, JMail has turned into a complete email client with LDAP support that can be used on all platforms supporting Java. This makes it particularly interesting for anyone who has to work on different platforms. Despite the GPL license, the project is of course notably encumbered as Free Software, since it requires a proprietary Java environment, making the user dependent on it. This well-known Java problem has still not been completely solved. For further development, Yvan plans to reimplement some parts of the code that he feels are unsatisfactory, as well as introducing local folders and thread support. The profile files will also be changed from plain text to XML.

Figure 2: JMail main window with Motif look and feel

At the moment JMail supports English and French; help from translators for other languages, as well as from proofreaders for the English version, would be very welcome. Even more important than translators are more users giving feedback and bug reports to help Yvan reach version 1.0, which he would like to release towards the end of this year.

Speex

One of the areas most encumbered by the existence of software patents is the digital compression of voice audio, which provides the basis for internet telephony (“Voice over IP”, VoIP), audio books, internet radio, voice mail and other future applications. Since Free Software is not compatible with a monopolizing system, a Free Software implementation of patented algorithms can only be done under very special circumstances, which are not met in this area. Users of Free Software are therefore left with few choices nowadays: either low quality and/or a low compression rate, or encodings that were optimized for music, like Ogg Vorbis. [3] With Speex, [4] a recent addition to the GNU Project, Jean-Marc Valin is working on a Free Software solution unencumbered by software patents. He is being supported in this task by David Rowe and Steve Underwood, as well as several people helping to investigate patents to ensure that Speex does not violate them.

Figure 3: Phone home using Speex on LinPhone

Started in February 2002, the project is being written entirely in ANSI C to keep it as portable as possible, and it is published under the GNU Lesser General Public License (LGPL) to maintain interoperability with proprietary software. As the project is still in a relatively young development phase, the file and stream format often changes from version to version – stabilizing this is one of the most important tasks at hand. Despite this difficulty, there are already first applications of Speex – for example Linphone [5] by Simon Morlat, an internet telephony program for GNU/Linux, which builds on the GNU oSIP library introduced in Linux Magazine issue #22. [6] The most severe problem for Speex development, though, is software patents. They require permanent attention to check whether patents are being violated and how they can be circumvented. This is a significant roadblock to innovation, and help here is very much welcome. Depending on the perspective, some could consider the unsatisfactory music-encoding capabilities of Speex a disadvantage; but for this purpose there is Ogg Vorbis, to which Speex seeks to be a supplement, not a replacement or competitor. Besides the patent problems, there are other areas in which you can support Speex development. Developers with a background in digital signal processing (DSP) are sought for quality improvement, and help would also be useful on the API and the encoder/decoder. Like many young projects, Speex is also lacking in documentation, as the developers readily admit. So there are many ways to participate.
As a side note, Jean-Marc would like it pointed out that being a member of the University of Sherbrooke does not put him in a conflict of interest, although the university is notorious for holding software patents on speech coding and compression. Although he earned his master’s degree in that group, he is now working on his Ph.D. in the mobile robotics group, which gives him freedom when working on speech coding. It is a rather sad statement about the future of science that such a disclaimer is necessary nowadays.

Software patents

As the previous feature shows, software patents have a very immediate effect on some projects, and we have to fear that this will spread further. By now, many people have heard about the software patent problem – thanks also to the untiring work of people like Hartmut Pilch and Jean-Paul Smets. Yet there are still a lot of wrong ideas and confusion around this topic – especially among decision makers and politicians, because otherwise some of their statements would be incomprehensible. It is time for Brave GNU World to point out the problem from a macroeconomic perspective. As examples in the United States show, [7] the actual effect of software patents is to introduce a mechanism which allows larger companies to raise or lower their thumb, deciding on the life or death of innovative ideas and companies. They provide a carte blanche to force anyone into legal struggles that are usually survived only by the larger, wealthier company. The creation of software patents and the legal struggles about software patents both require patent lawyers. And

in Europe the patent-approving body, the European Patent Office, is neither democratically controlled nor liable for the patents it approves. This makes software patents a golden goose for patent lawyers and the patent office. Software patents can be created in almost any number, do not require a connection to reality, and their only purpose is to start legal struggles. Software patents not only provide an efficient obstacle to innovation, they also force companies to spend large sums on patent lawyers and fees, and they make it necessary to maintain much larger “war chests” for legal struggles. Software patents therefore weaken innovation and the economy by introducing a kind of artificial friction loss which subsidizes part of the legal system itself. This is supported by practical experience as well as theoretical studies: as yet there is no proof that software patents are beneficial for society, but there are many facts showing their harmful effects. It is fair to ask whether the group of patent offices and patent lawyers needs such a subsidy, financed by the economy as a whole. For those who would like to go deeper into the topic, the material collected by the FFII [8] is recommended. I also ask everybody to support the petition for a software-patent-free Europe [9] and to write letters to the editor of the mainstream press, asking them to address this topic.

INFO
[1] KDE internationalization home page:
[2] JMail home page:
[3] Ogg Vorbis home page:
[4] Speex home page:
[5] Linphone home page:
[6] Linux Magazine issue #22, Brave GNU World, p97
[7] Gary L. Reback, “Patently Absurd”, Forbes Magazine: 044.html
[8] FFII home page:
[9] Petition for a Software Patent Free Europe:
[10] polyXmass home page:
[11] DotGNU Forum project page:
[12] DotGNU home page:
[13] Home page of Georg’s Brave GNU World:
Send ideas, comments and questions to



polyXmass

Filippo Rusconi of the “Centre National de la Recherche Scientifique” (CNRS) has published polyXmass, [10] a program for mass spectrometry simulation, as Free Software under the GNU General Public License (GPL) with the support of his university. The project aims to provide a modular framework which allows the user to define new polymer chemistries, build them into sequences and perform sophisticated computations on them that simulate chemical reactions, in order to create a simulated mass spectrogram reflecting all the previous steps. The program was written in C with the Gtk+ toolkit, and its target audience is users of mass spectrometers, especially chemists, biochemists and students. As far as the author knows, there are no comparable projects. In the eyes of Filippo Rusconi, polyXmass has many strengths. It is very versatile when defining polymers and incredibly flexible in displaying sequences, allowing users to draw the “letters” of the “alphabet” themselves, along with being very quick in the chemical computations. Since XML is used to save polymer definitions and sequences, all data exists as ASCII and can be edited by hand or imported into your own programs. The project was born out of the wish to move to GNU/Linux: originally Filippo had written a program called massXpert under Windows, which allowed calculation of proteins only. Instead of simply porting that program, he reimplemented it in a way that would allow working with all polymers, which would be defined by the user. PolyXmass is this reimplementation. Development on the program is not “closed.” When a colleague recently sketched some complicated formulas on a piece of paper, which required Filippo to compute masses in a rather tricky way, he wrote a sophisticated molecular calculator for polyXmass and called it polyXcalc. So it is a very lively project that is already very useful to many users, as the feedback shows. For the future it is planned to make the program more modular, possibly through CORBA/ORBit code, but these plans have not yet solidified. If you would like to contribute, you are surely welcome.

DotGNU Forum

The DotGNU Forum project [11] is part of the DotGNU project, [12] which aims at creating an “operating system for the internet” and a Free alternative to Microsoft’s .NET initiative, since the latter threatens the freedom of users. The goal of DotGNU Forum is to establish a platform which allows multiple users to work on data together simultaneously through communication channels such as Internet Relay Chat (IRC), File Transfer Protocol (FTP), Instant Messaging (IM), Bulletin Board Systems (BBS), USENET or HTTP. The DotGNU Forum server also provides “plazas,” virtual meeting points which may contain data or applications relevant to certain topics. Users can meet and work together on a project. For this the DotGNU Forum provides several means of communication, such as a documentation browser, a download server, message boards and an integrated chat system. DotGNU Forum was written in C#, and it is possible to write extensions in other languages supported by DotGNU. According to Peter Minten, author of DotGNU Forum, one of the major advantages of his project is its client-server based design philosophy, which tries to keep the server as small and stable as possible. Its extensibility is another reason why he believes people should consider using DotGNU Forum. The idea for the project evolved out of some first thoughts about virtual universities and classrooms, which were then generalized to virtual places. In reference to the ancient Romans, for whom the forum was a center of communication and activity, the project was dubbed DotGNU Forum. Right now the server still requires some work before the first applications can be written, and help is very welcome for writing code and documentation. For the not-so-near future, Peter envisions 3D forums in which people can see each other virtually and talk to each other through Voice over IP (VoIP). It will certainly be a while before this becomes possible. The next step planned is to support Emacs and other editors as input interfaces for the forum. Oh, and as a part of the GNU Project, DotGNU Forum is naturally released under the GNU General Public License.

Until next month

That should be enough for this month. As usual, please feel free to send questions, ideas, feedback, comments and news about interesting projects to the usual address. [13]■


Georg C. F. Greve, Dipl.-Phys., has been using free software for many years and was an early adopter of GNU/Linux. After becoming active in the GNU project, he founded the Free Software Foundation Europe, of which he is the current president. More information can be found at http://www.

Figure 4: PolyXmass – the user can make very finely customized polymer sequence cleavages
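As the polyXmass feature mentions, polymer definitions and sequences are stored as XML and can therefore be read by your own programs. The following Python sketch uses an invented schema – the element and attribute names are ours, not polyXmass’s actual format – purely to illustrate how easily such data can be processed with the standard library:

```python
import xml.etree.ElementTree as ET

# A made-up polymer definition; the real polyXmass schema may differ.
DEFINITION = """
<polymer name="protein">
  <monomer code="G" name="Glycine" mass="57.02146"/>
  <monomer code="A" name="Alanine" mass="71.03711"/>
</polymer>
"""

root = ET.fromstring(DEFINITION)
# Map one-letter codes to residue masses.
masses = {m.get("code"): float(m.get("mass")) for m in root.iter("monomer")}

def sequence_mass(sequence):
    """Sum the residue masses of a sequence such as 'GAG'."""
    return sum(masses[c] for c in sequence)

print(round(sequence_mass("GAG"), 5))
```

Because the data is plain ASCII, the same file could equally well be edited by hand in any text editor.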



Events / Advertiser Index / Call for Papers


Call for Papers

Linux Events

Event                              Location                   Date
UMeet2002                          Online                     Dec 09–20 2002
Chaos Communication Congress       Berlin, Germany            Dec 27–29 2002
Spam Conference                    Cambridge, MA, USA         Jan 17 2003
LinuxWorld Conference & Expo       New York, NY, USA          Jan 21–24 2003
                                   Perth, WA, Australia       Jan 22–25 2003
SAINT-2003                         Orlando, Florida, USA      Jan 27–31 2003
NordU/USENIX 2003                  Västerås, Sweden           Feb 10–14 2003
Desktop Linux Summit               San Diego, CA, USA         Feb 20–21 2003
CeBIT 2003                         Hanover, Germany           Mar 12–19 2003
Ruby Con                           Dearborn, MI, USA          Mar 28–30 2003


We are always looking for article submissions and new authors for the magazine. Although we will consider articles covering any Linux topic, the following themes are of special interest:
• System Administration
• Useful hints, tips and tricks
• Security, both news and techniques
• Product Reviews, especially from real-world experience
• Community news and projects

Advertiser Index

Advertiser                          Page
                                    11, Inside Back Cover
Apple Computer                      Outside Back Cover
Dedicated Servers
Digital Networks
LinuxPark CeBIT
Linux Magazine Back Issues
Linux Magazine Subscription         Bind-in 66–67
SmartCertify Direct
The Positive Internet Company Ltd   Inside Front Cover
If you have an idea for an article, please send a proposal to The proposal should contain an outline of the article idea, an estimate of the article length, a brief description of your background, and your complete contact information.

Articles are usually about 800 words per page, although code listings and images often reduce this amount. The technical level of the article should be consistent with our typical content.

Remember that Linux Magazine is read in many countries, and your article may be translated for use in our sister publications. It is therefore best to avoid slang and idioms that might not be understood by all readers. Be careful when referring to particular dates or events in the future: many weeks will pass between the submission of your manuscript and the final copy reaching the reader’s hands.

When submitting proposals or manuscripts, please use a subject line that helps us quickly identify your email as an article proposal for a particular topic. Screenshots and other supporting materials are always welcome. Don’t worry about the file format of the text and materials; we can work with almost anything. Please send all correspondence regarding articles to ■

Subscription CD




The CD-ROM with your subscription issue contains all the software listed below, saving you hours of searching and downloading time. On this month’s subscription CD-ROM we start with the latest distribution to hit the servers. Alongside the full distribution we include all the files that we mention in the magazine, in the most convenient formats.

LRs-Linux

LRs-Linux is a source distribution based on the LFS system (Linux From Scratch). LRs-Linux is compiled completely from source code but does not need a host operating system to do this. The installation of LRs-Linux is largely automatic. LRs-Linux is not a conventional distribution: the majority of other Linux distributions “pack” their compiled binaries with RPM, and these binaries are often compiled for a large variety of processors (i386, i586 and others). LRs-Linux does not do this. Because LRs-Linux is compiled completely from source code, the installed Linux system is ideally suited to your computer: your processor is automatically recognised and the system is optimised for it. This normally results in an improvement in the performance of your system. The GNU C Compiler version 3.2 (gcc-3.2) has been used. The system contains a Linux base system with the following:
• Kernel 2.4.19 – based on LFS-4.0
• ext3 and reiserfs support
• XFree86 4.2.1
• xfce 3.8.16
• WindowMaker 0.80.1
• Blackbox 0.65.0pre1
• KDE 3.0.3
• openssh-3.4p1
• Mozilla 1.1
• Apache 2.0.40
• Sendmail 8.12.6
• Samba 2.2.5
• Xchat 1.8.10
• AxyFTP
The system is saved as an ISO image on the CD-ROM, so you will need either a CD recorder to make your own bootable disc, or to mount the image via a loopback device. More information about this amazing distribution can be found on the website at
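Mounting the ISO over a loopback device lets you inspect it without burning a disc. The image path and mount point below are examples, and the mount and umount commands need root privileges:

```shell
# Create a mount point and attach the ISO image via loopback
mkdir -p /mnt/lrs
mount -o loop -t iso9660 lrs-linux.iso /mnt/lrs

# Browse the distribution tree, then detach when done
ls /mnt/lrs
umount /mnt/lrs
```

If you would rather burn the image, any CD recording tool that accepts an ISO image will do.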

Wine

Wine is an implementation of the Windows Win32 and Win16 APIs on top of X and Unix. Think of Wine as a Windows compatibility layer. Wine provides both a development toolkit (Winelib) for porting Windows sources to Unix and a program loader, allowing many unmodified Windows 3.x/95/98/ME/NT/W2K/XP binaries to run under Intel Unixes. Wine does not require Microsoft Windows, as it is a completely alternative implementation consisting of 100% Microsoft-free code, but it can optionally use native system DLLs if they are available.

Clam AntiVirus

Clam AntiVirus is an antivirus scanner written from scratch. It is licensed under the GNU GPL2 and uses the virus database from OpenAntiVirus, another free antivirus project. In contrast to OpenAntiVirus (which is written in Java), Clam AntiVirus is written entirely in C and is POSIX compliant.


Subscribe & Save

Save yourself hours of download time in the future with the Linux Magazine subscription CD! Each subscription copy of the magazine includes a CD like the one described here, free of charge. In addition, a subscription will save you over 16% compared to the cover price, and it ensures that you’ll get advanced Linux know-how delivered to your door every month.

Gambas

Almost BASIC: a programming environment similar to Visual Basic, a program that allows you to create graphic interfaces with just a few mouse clicks. Gambas is, above all, a Basic language with object extensions. A program written with Gambas is a set of files. Each file describes a class, in terms of object programming. The class files are compiled, then executed by an interpreter. From this point of view, it is very much inspired by Java. Gambas will be made up of the following programs: a compiler, an interpreter, an archiver, a graphical user interface component and a development environment.

Subscribe to Linux Magazine today! Order online: Or use the order form between p66 and p67 in this magazine.

Gnumeric

The Gnumeric spreadsheet is part of the GNOME desktop environment: a project to create a free, user-friendly desktop environment. Gnumeric is intended to be a drop-in replacement for proprietary spreadsheets. Gnumeric will import your existing Excel, 1-2-3, Applix, Sylk, XBase, Quattro Pro, Dif, Plan Perfect, and Oleo files.

Gimp

The GIMP is the GNU Image Manipulation Program. It is a freely distributed piece of software suitable for tasks such as photo retouching, image composition and image manipulation. GIMP is an extremely capable piece of software. It can be used as a simple paint program, an expert-quality photo retouching program, an online batch processing system, a mass production image renderer, an image format converter, and more. GIMP is extremely expandable and extensible: it is designed to be augmented with plugins and extensions to do just about anything. ■




Next month

February 2003: Issue 27

Next month highlights

Editorial
Editor: John Southern
Assistant Editor: Colin Murphy
International Editors: Hans-Georg Eßer, Heike Jurzik, Ulrich Wolf
Contributors: Rüdiger Berlich, Frederik Bijlsma, Frank Booth, Zack Brown, Daniel Cooper, Thomas Drilling, Steven Goodwin, Georg C. F. Greve, Peer Heinlein, Andrew Jones, Patricia Jung, Andreas Kneib, Charly Kühnast, Achim Leitner, Martin Loschwitz, Joachim Moskalewski, Andrea Müller, Christian Perle, Marc André Selig, Stefanie Teufel, Frank Wieduwilt, Dean Wilson, Thomas Zell
Production Coordinator: Hans-Jörg Ehren
Layout: Judith Erb, Elgin Grabe, Klaus Rehfeld
Cover Design: Pinball Werbeagentur

Advertising Sales
All countries (except Germany, Austria, Switzerland): Brian Osborn, phone +49 651 99 36 216, fax +49 651 99 36 217
Germany, Austria, Switzerland: Osmund Schmidt, phone +49 6335 9110, fax +49 6335 7779

Management (Vorstand): Hermann Plank, Rosie Schuster
Project Management: Hans-Jörg Ehren

Subscription
Subscription rate (12 issues including monthly CD):
United Kingdom: £39.90
Other Europe: Euro 64.90
Outside Europe – SAL (combined air / surface mail transport): Euro 74.90
Outside Europe – Airmail: Euro 84.90
phone +49 89 9934 1167, fax +49 89 9934 1199

Linux Magazine, Stefan-George-Ring 24, 81929 Munich, Germany, phone +49 89 9934 1167, fax +49 89 9934 1199
– Worldwide – Australia – Canada – United Kingdom

While every care has been taken in the content of the magazine, the publishers cannot be held responsible for the accuracy of the information contained within it or any consequences arising from the use of it. The use of the CD provided with the magazine or any material provided on it is at your own risk. The CD is thoroughly checked for any viruses or errors before reproduction.

Copyright and Trademarks
© 2002 Linux New Media Ltd. No material may be reproduced in any form whatsoever in whole or in part without the written permission of the publishers. It is assumed that all correspondence sent, for example, letters, e-mails, faxes, photographs, articles, drawings, are supplied for publication or license to third parties on a non-exclusive worldwide basis by Linux New Media unless otherwise stated in writing. Linux is a trademark of Linus Torvalds. ISSN 1471-5678. Printed in Germany. Linux Magazine is published monthly by Linux New Media AG, Munich, Germany, and Linux New Media Ltd, Macclesfield, England. Company registered in England. Distributed by COMAG Specialist, Tavistock Road, West Drayton, Middlesex, UB7 7QE, United Kingdom.



Security Enhanced Linux

Designed as a National Security Agency (NSA) project, this version of Linux has a strong, flexible mandatory access control architecture incorporated into the major subsystems of the kernel. The system provides a mechanism to enforce the separation of information based on confidentiality and integrity requirements. This allows threats of tampering and bypassing of application security mechanisms to be addressed, and enables the confinement of damage that can be caused by malicious or flawed applications. We examine the reasoning behind this version, along with the base system, installation and practical use.

Cache 5

The Cache 5 database from InterSystems lets you use your core transaction-processing system data without having to load it weekly into a data warehousing system. We take a look at what the system will mean for other post-relational databases under Linux.

User control

Controlling users on a Linux system is an important part of administration. We provide a guide to all the options and features available. We step through the commands line by line, as well as showing you the latest graphical tools so you do not forget any flags. We look at permissions and disk options, along with the KSysV init editor and its advanced scheme editor.

Diskless Clients

A guide to setting up and configuring diskless clients. We explain the protocols and technology used in producing an easy-to-administer network with a standardized setup and centralized resources. We take you through setting up network booting and BIOS modification, along with all the scripts for your office server. PXE and DHCP connections are covered, with NFS protocols.

GNU e-mail and News

E-mail has become a way of life. News groups are important in the quest for information. We need these quickly, without any bloated software. Emacs offers us the functionality and expandability we need, while XEmacs makes the system simpler and easier to use with an intuitive interface. Find out how to control all your tasks with just a few keystrokes or mouse clicks, and use your time making the computer work for you, not you working for the computer.

On Sale: 10 January

linux magazine uk 26  
