
CMS Redbook Overview

Jack Hennessy Feb 28 2008 09:04:36


Contents

This custom Redbook was created by Jack Hennessy on Feb 28 2008 from chapters from the following Redbooks. The original page numbering for each chapter has been preserved from the source Redbook.

Performance Tuning for Content Manager
Source: SG24-6949-01 - Jul 31 2006
URL: http://www.redbooks.ibm.com/abstracts/sg246949.html?Open
Chapter 1. Content Manager
Chapter 2. Content Manager base products
Chapter 3. Performance tuning basics
Chapter 4. Planning for performance
Chapter 5. Designing and configuring for performance
Chapter 6. Best practices for Content Manager system performance
Chapter 9. Tuning WebSphere for Content Manager
Chapter 10. Tuning TSM for Content Manager
Chapter 13. Case study
Appendix A. Performance tuning guidelines for a Document Manager system

Content Manager OnDemand Guide
Source: SG24-6915-01 - Jun 30 2006
URL: http://www.redbooks.ibm.com/abstracts/sg246915.html?Open
Chapter 1. Overview and concepts
Chapter 9. Data conversion
Chapter 10. Report Distribution
Chapter 15. Did you know?


Notices

This information was developed for products and services offered in the USA. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY, USA.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to IBM's application programming interfaces.


This is the trademark page from the Performance Tuning for Content Manager Redbook, SG24-6949-01 at http://www.redbooks.ibm.com/abstracts/sg246949.html?Open

Trademarks The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both: AIX®, AIX 5L™, AS/400®, DB2®, DB2 Connect™, DB2 Universal Database™, Domino®, IBM®, iSeries™, Lotus®, OS/2®, Redbooks™, Redbooks (logo)™, Tivoli®, TotalStorage®, VideoCharger™, WebSphere®, z/OS®, 1-2-3®

The following terms are trademarks of other companies: Enterprise JavaBeans, EJB, Java, Java Naming and Directory Interface, JavaBeans, JavaMail, JavaServer, JavaServer Pages, JDBC, JMX, JSP, JVM, J2EE, Solaris, Sun, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Active Directory, Excel, Internet Explorer, Microsoft, Windows NT, Windows Server, Windows, Win32, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Intel, Pentium, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States, other countries, or both. UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. Other terms that are trademarks or service marks of other companies are marked with a ™ or ® at the first occurrence within the original IBM Redbooks publication from which these chapters are selected. Although the first occurrence of a trademark or service mark may not be marked in the chapter you have chosen to include in My Redbooks™, the original publication is available at the URL and link set forth in the Table of Contents. Other company, product, or service names may be trademarks or service marks of others.


Content Manager OnDemand Guide Redbook, SG24-6915-01 at http://www.redbooks.ibm.com/abstracts/sg246915.html?Open

Trademarks The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both: Advanced Function Presentation™, AFP™, AIX®, AS/400®, Bar Code Object Content Architecture™, CICS®, DB2 Universal Database™, DB2®, DFSMSdfp™, Enterprise Storage Server®, eServer™, Infoprint®, IBM®, IPDS™, iSeries™, Lotus Notes®, Lotus®, Mixed Object Document Content Architecture™, MVS™, Notes®, OS/2®, OS/390®, OS/400®, Print Services Facility™, pSeries®, Redbooks™, Redbooks (logo)™, RACF®, S/390®, System Storage™, System/370™, Tivoli®, TotalStorage®, WebSphere®, Workplace™, z/OS®, zSeries®

The following terms are trademarks of other companies: Java, JSP, JVM, J2EE, Solaris, Sun, Sun Fire, Sun Java, Sun Microsystems, SunOS, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Excel, Microsoft, Windows Server, Windows NT, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Pentium, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States, other countries, or both. UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. Other terms that are trademarks or service marks of other companies are marked with a ™ or ® at the first occurrence within the original IBM Redbooks publication from which these chapters are selected. Although the first occurrence of a trademark or service mark may not be marked in the chapter you have chosen to include in My Redbooks™, the original publication is available at the URL and link set forth in the Table of Contents. Other company, product, or service names may be trademarks or service marks of others.


Abstract Redbook overview of CMS tuning


Preface Demonstrate Redbook customization.


Become a published author

Join us for a two-week to six-week residency program! Help write an IBM Redbook dealing with specific products or solutions, while getting hands-on experience with leading-edge technologies. You'll team with IBM technical professionals, Business Partners, and/or customers. Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you'll develop a network of contacts in IBM development labs, and increase your productivity and marketability. Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html

Comments welcome

Your comments are important to us! We want IBM Redbooks to be as helpful as possible. Send us your comments about this or other IBM Redbooks in one of the following ways:
- Use the online Contact us review redbook form found at: ibm.com/redbooks
- Send your comments in an email to: redbook@us.ibm.com
- Mail your comments to: IBM Corporation, International Technical Support Organization, Dept. HYTD Mail Station P, South Road, Poughkeepsie, NY


Chapter 1. Content Manager

In this chapter we introduce the IBM DB2 Content Manager (Content Manager) architecture. We provide an overview of the various Content Manager components, how they interact with each other, and some of the key concepts that you should know about a Content Manager system.

© Copyright IBM Corp. 2003, 2006. All rights reserved.



1.1 Introduction IBM DB2 Content Manager Enterprise Edition provides a flexible system that can be used to manage unstructured content according to business needs. Installable on Microsoft Windows, IBM AIX, Sun Solaris™, Linux, and IBM z/OS platforms, Content Manager provides a complete solution for storing and retrieving documents in almost any format. A companion product, Content Manager Standard Edition, provides similar functions for IBM iSeries™ users. Content Manager features include:
- Support for multiple operating systems
- A Java-based tool for managing the system
- Client options including browser access
- Support for business documents of almost every kind
- Administration tools for defining users and user privileges, defining data models, and providing ways to manage your servers and data
- Efficient methods for keeping the system secure
- Document routing to manage the flow of work through the system

1.2 Content Manager system components The Content Manager system is scalable to fit the needs of small, medium, and large-scale businesses. The main system components that make up a Content Manager system are:
- Library Server
- Resource Manager
- Clients
Content Manager clients include the system administration client, Client for Windows, Information Integrator for Content system administration client, and eClient. The clients, running either in user desktops or in mid-tier application servers (such as WebSphere Application Server), invoke Content Manager services that are divided between a Library Server and one or more Resource Managers. The Library Server manages the content metadata and is responsible for access control to all of the content and interfaces to one or more Resource Managers. The Resource Managers, using WebSphere Application Server and Tivoli Storage Manager, manage the content objects. Content can be stored on any device supported by Tivoli Storage Manager, including DASD, optical, and tape.


Performance Tuning for Content Manager


Figure 1-1 shows the Content Manager system with its components.

[Figure 1-1 Content Manager system: clients (system administration client, Client for Windows, Information Integrator for Content system administration client, and the browser-based eClient) connect through Web application servers to the Library Server and Resource Manager; Tivoli Storage Manager manages DASD, optical, and tape storage.]

All access to Library Server is through the database query language SQL. Results from the queries are passed back to the client with object tokens, locators for requested content that the user is authorized to access. The client then directly communicates with the Resource Manager using the Internet protocols HTTP, FTP, and FILE. In the following sections, we explore the Content Manager system components, discuss the communication among the system components, and introduce the APIs used by Content Manager.

1.2.1 Library Server The Library Server is the key component of a Content Manager system. It is where you define the information that you store in Content Manager. The Library Server has the following functions:
- Storing, managing, and providing access control for objects stored on one or more Resource Managers. (For Resource Manager information, see 1.2.2, “Resource Manager” on page 7.)
- Processing requests (such as update or delete) from one or more clients.



- Maintaining data integrity between all of the components in a Content Manager system.
- Directly controlling user access to objects stored on any Resource Manager in the system.
- Managing the content and performing parametric searches, text searches, and combined (parametric and text) searches, using a relational database management system such as DB2 Universal Database™.
Figure 1-2 shows a Library Server component.

[Figure 1-2 Library Server component: a Library Server database accessed through stored procedures.]

You can directly access the Library Server database using SQL (Structured Query Language) or a relational database client. A Library Server contains a Library Server database and is implemented as a set of stored procedures: all tasks carried out by the Library Server are performed via stored procedures on the Library Server database. Each Content Manager system can have only one Library Server, which can run on Windows, AIX, Linux, Solaris, or z/OS. Note that if you use Oracle instead of IBM DB2, you cannot run the Library Server on Linux, Linux for S/390, or z/OS. The Library Server requires the following software:
- IBM DB2 Universal Database or Oracle Database
- IBM DB2 Runtime Client or Oracle Client
- IBM DB2 NetSearch Extender or Oracle Text (optional)
- C++ Compiler (optional)

IBM DB2 Universal Database or Oracle Database The IBM DB2 Universal Database (provided in the Content Manager package) or Oracle Database is required to run with the Library Server and must be installed on the same system or workstation as the Library Server. The Library Server needs the database software to store document model definitions, document metadata, access control, and document routing definitions.

IBM DB2 Runtime Client or Oracle Client The IBM DB2 Runtime Client, or Oracle Client, provides communication between the client workstation and the Library Server database. After the database client (DB2 or Oracle) is installed, the Library Server database must be cataloged on the client workstation. Cataloging establishes communication between the Library Server and the Content Manager Client for Windows application and the system administration client. Without this component, the Client for Windows and the system administration client cannot communicate with the Library Server (through stored procedure calls).

IBM DB2 NetSearch Extender or Oracle Text (optional) Content Manager offers full-text searching of documents in the Content Manager database. This is an optional feature. To use it, you must install one of the following products, depending on the database you are using:  DB2 Net Search Extender if you are using DB2 Universal Database  Oracle Text if you are using the Oracle Database If you do not require text search capabilities, you do not need to install DB2 Net Search Extender. If you currently do not need it, but anticipate using this feature in the future, it is best to install DB2 Net Search Extender before installing Content Manager. This is because the Content Manager installation program automatically enables the Library Server database to search for text if DB2 Net Search Extender is installed earlier.
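As an illustration of what a full-text query looks like, the sketch below assembles a SELECT statement using the CONTAINS() predicate that DB2 Net Search Extender adds to SQL. The table and column names used here (DOCS, DOCTEXT, ITEMID) are hypothetical, not actual Content Manager schema names:

```python
def build_text_search_query(table: str, text_column: str, term: str) -> str:
    """Build a DB2 Net Search Extender style full-text query string.

    NSE exposes full-text search through the CONTAINS() SQL function,
    which returns 1 for rows whose indexed column matches the term.
    The table/column names passed in are assumed, illustrative names.
    """
    # Double any embedded quotes so the term is safe inside the literal.
    safe_term = term.replace('"', '""').replace("'", "''")
    return (
        f"SELECT ITEMID FROM {table} "
        f"WHERE CONTAINS({text_column}, '\"{safe_term}\"') = 1"
    )

query = build_text_search_query("DOCS", "DOCTEXT", "invoice")
print(query)
# → SELECT ITEMID FROM DOCS WHERE CONTAINS(DOCTEXT, '"invoice"') = 1
```

In a real installation the statement would be executed against the Library Server database through a DB2 client; only the CONTAINS() form is NSE-specific.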

C++ Compiler (optional) The C++ Compiler is an optional component that can be used to define user exit routines.

1.2.2 Resource Manager The Resource Manager stores objects for Content Manager. You can store and retrieve objects on the Resource Manager. When you request data, the request is first routed through the Library Server. A single Library Server can support multiple Resource Managers and content can be stored on any of these Resource Managers. Figure 1-3 on page 8 shows the Resource Manager and its relationship to the Library Server.



[Figure 1-3 Resource Manager component and its relationship to the Library Server: both run under a Web application server.]

The Resource Manager consists of a Resource Manager database and Resource Manager applications. Similar to the Library Server, the Resource Manager relies on a relational database to manage the location of stored objects and storage devices. The Resource Manager works closely with Tivoli Storage Manager (TSM) to define storage classes and migration policies. The Resource Manager can run on Windows, AIX, Linux, Solaris, or z/OS. If you use Oracle instead of IBM DB2, you cannot run the Resource Manager on Linux, Linux for S/390, or z/OS. The Resource Manager requires the following software:
- IBM DB2 Universal Database or Oracle Database
- WebSphere Application Server
- IBM HTTP Server
- Tivoli Storage Manager
- Tivoli Storage Manager Client

IBM DB2 Universal Database or Oracle Database The IBM DB2 Universal Database (provided in the Content Manager package) or Oracle Database is required to run the Resource Manager database. Depending on the speed and storage requirements of your Content Manager system, you can install the Resource Manager database on the same machine as the Library Server database. The Resource Manager database server stores information pertaining to the objects being managed by the Resource Manager server.

WebSphere Application Server The IBM WebSphere Application Server software provided in this package is required to run with the Resource Manager and must be installed on the same machine as the Resource Manager. The IBM WebSphere Application Server provides the Java-enabled platform that enables Web clients to interact with Content Manager. Users and processes on a wide variety of operating systems can interact by using the facilities provided by WebSphere Application Server.



The Resource Manager is implemented as a Java 2 Enterprise Edition (J2EE™) Web application, and requires the WebSphere Application Server as its J2EE-compliant application server. WebSphere Application Server must be installed and configured before you begin the installation of the Content Manager Resource Manager component. You can find more about WebSphere Application Server and about how it interacts with Content Manager by browsing through the documentation that is available through the WebSphere Application Server information center, which is located at: http://publib.boulder.ibm.com/infocenter/ws51help/index.jsp

IBM HTTP Server Client requests to store or retrieve documents are communicated by HTTP to the Web server. The Web server then packages the request to be sent to the Resource Manager. Optionally, client configuration files (*.ini and *.property) may be published on the Web server so that all clients in the enterprise share a common and centrally maintained configuration.

Tivoli Storage Manager Tivoli Storage Manager is a client/server product that provides storage management and data access services in a heterogeneous environment. It is provided so that you can store objects, long term, on a storage device other than the fixed disks that are attached to the Resource Manager. Tivoli Storage Manager supports a variety of communication methods and provides administrative facilities to schedule backup and storage of files.
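Conceptually, the storage classes and migration policies mentioned above move an object down a hierarchy of devices (for example, DASD to optical to tape) as it ages. The following toy model illustrates the concept only; the tier names and age thresholds are invented for this sketch and are not TSM configuration:

```python
from dataclasses import dataclass

# Toy storage hierarchy: objects occupy faster, costlier media while
# young and migrate to cheaper media once past each age threshold (days).
HIERARCHY = [
    ("DASD", 30),      # fixed disk: objects up to 30 days old
    ("OPTICAL", 365),  # optical: up to a year
    ("TAPE", None),    # tape: final tier, no further migration
]

@dataclass
class StoredObject:
    name: str
    age_days: int

def storage_class(obj: StoredObject) -> str:
    """Return the tier an object of this age would occupy."""
    for tier, limit in HIERARCHY:
        if limit is None or obj.age_days <= limit:
            return tier
    return HIERARCHY[-1][0]

print(storage_class(StoredObject("claim.tif", 5)))     # → DASD
print(storage_class(StoredObject("claim.tif", 90)))    # → OPTICAL
print(storage_class(StoredObject("claim.tif", 4000)))  # → TAPE
```

In a real system, TSM evaluates its configured policies server-side; the Resource Manager only records which storage class an object belongs to.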

Tivoli Storage Manager Client The Resource Manager server uses the Tivoli Storage Manager Client APIs, which are installed with the Tivoli Storage Manager Client, to access the Tivoli Storage Manager server. These APIs provide connectivity between the Resource Manager server and the Tivoli Storage Manager server and can be programmed to centrally administer storage.

1.2.3 Content Manager clients The Content Manager clients are the interface that users use to manage and work in the Content Manager system. Content Manager offers the following out-of-box clients:
- System administration client
- Client for Windows, a dedicated Windows client that runs on Windows 2000, Windows Server® 2003, or Windows XP
- eClient, a browser-based client that runs on Netscape or Internet Explorer®
Your choice might be influenced by the portability of the Web client versus the functionality of the Windows client. In addition to the out-of-box clients, Content Manager also offers the following possible choices of clients:
- Portal client
- Custom client

System administration client The system administration client is one of the Content Manager clients. It oversees and manages the entire Content Manager system. Through the system administration client, you can:
- Define your data model.
- Define users and their access to the system.
- Manage storage and storage objects in the system.
Figure 1-4 shows the system administration client and its relationship to the Library Server and the Resource Manager.

[Figure 1-4 System administration client: the system administration client connects to the Library Server and the Resource Manager.]

The system administration client component can be installed on Windows, AIX, Solaris, or Linux. It can be installed where other components are installed, or on a separate workstation. As a security measure, the system administration client communicates with the Resource Manager through the Secure Sockets Layer (SSL) interface.

Client for Windows The Client for Windows is installed only on a Windows system. It enables users to import, view, store, and retrieve documents. The Client for Windows can also be run in a Terminal Server Edition (TSE) environment. The number of users who can be supported on any one TSE server depends on the memory, processing power, and other factors on the server, as well as on the amount of activity for each client user. All client actions are supported in this environment, except for scanning documents into the system (which must be done on a local workstation). Figure 1-5 shows the relationship between the Content Manager Client for Windows and the other Content Manager components (Library Server, Resource Manager, and system administration client).

[Figure 1-5 Client for Windows: the Client for Windows and the system administration client connect to the Library Server (and its database) and the Resource Manager.]

eClient eClient can be installed on a Windows, AIX, or Solaris system. eClient is a browser-based client that provides out-of-the-box capabilities for Content Manager systems, similar to those of the Windows client. Import and export can be done using the eClient wizards. Document and folder organization functions are available through eClient. The eClient viewer provides page navigation functions such as next, prev, last, goto, zoom in, and zoom out. The application displays the first page of a document as soon as it is available, without waiting for the entire document to download. This improves the response time when working with large documents. eClient also supports the same type of search capabilities as a Windows client.
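The behavior of displaying the first page as soon as it is available amounts to streaming pages rather than buffering the whole document. A minimal sketch of the idea in Python (none of this is eClient code; the page contents are made up):

```python
def fetch_pages(num_pages):
    """Simulate a page-at-a-time download from a content server.

    A generator yields each page as it 'arrives', so consumers never
    have to wait for the whole document to be materialized.
    """
    for n in range(1, num_pages + 1):
        yield f"page-{n}"  # in reality: one page image per HTTP response

def first_page(pages):
    """Render the first page without waiting for the rest to arrive."""
    return next(iter(pages))

print(first_page(fetch_pages(500)))  # → page-1, before pages 2-500 are fetched
```

Because the generator is lazy, requesting the first page of a 500-page document does no work for the remaining 499 pages, which is the essence of the response-time improvement described above.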

Portal client Portal client is an extensive and customizable front end to Content Manager. It is composed of Content Manager Portlets, a Web-based application, running on the WebSphere Portal environment. Content Manager Portlets consists of two portlets, the Main portlet and the Viewer portlet. The Main portlet provides a single portlet-based functional replacement for the current Content Manager eClient. When deployed alone, the Main portlet will display documents in new browser windows for viewing. Optionally, the Viewer portlet can be deployed on the same portal page as the Main portlet to provide document-viewing capability with a unified layout in an integrated fashion, instead of in separate browser windows. Click-2-Action (C2A) is also supported by the Viewer portlet. Other portlets, including the Main portlet, can communicate to the Viewer portlet using C2A to have documents displayed in the Viewer portlet. Portlets are free. They are available from the WebSphere Portlet Catalog: http://catalog.lotus.com/wps/portal/workplace?NavCode=1WP1000O6

Creating your own client You can create custom Content Manager client applications using client APIs and user exit routines that are part of the Content Manager connectors. These connectors come with Content Manager Information Integrator for Content. You can use these APIs to:
- Access information in the Library Server and Resource Manager.
- Customize document processing.
- Design your own data model.

1.2.4 Content Manager component communication When the Content Manager system is installed and running, the client application accesses the Library Server through an SQL request (such as create, retrieve, update, or delete). The Library Server, which contains the metadata, processes the request by looking up, in its content index, where the data resides, and returns to the client a security token and a locator identifying where within the Resource Manager the information can be found. The client then uses the security token and locator to access the Resource Manager, which accepts the security token and returns the information to the client through HTTP.
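The request flow described above can be sketched as two hops: a metadata lookup against the Library Server that returns a token and locator, followed by a fetch from the Resource Manager that honors the token. The following toy model illustrates the idea only; the class names, token scheme, and item identifiers are invented for this sketch and are not Content Manager APIs:

```python
import secrets

class LibraryServer:
    """Holds metadata and issues access tokens; content lives elsewhere."""
    def __init__(self):
        self.index = {}    # item id -> (resource manager, object key)
        self.tokens = set()

    def lookup(self, item_id):
        rm, key = self.index[item_id]   # "where does this item live?"
        token = secrets.token_hex(8)    # short-lived access token
        self.tokens.add(token)
        return token, rm, key

class ResourceManager:
    """Stores the objects; serves them only to valid token holders."""
    def __init__(self, library_server):
        self.library_server = library_server
        self.store = {}

    def get(self, token, key):
        if token not in self.library_server.tokens:
            raise PermissionError("invalid token")
        return self.store[key]

# Wire up one Library Server and one Resource Manager.
ls = LibraryServer()
rm = ResourceManager(ls)
rm.store["obj-42"] = b"scanned invoice"
ls.index["INV-2006-001"] = (rm, "obj-42")

# Client: hop 1 (Library Server lookup), hop 2 (Resource Manager fetch).
token, target_rm, key = ls.lookup("INV-2006-001")
print(target_rm.get(token, key))  # → b'scanned invoice'
```

The design point the sketch captures is that the Library Server never touches the bytes of the content; it only authorizes and locates them, so the bulk data path goes directly between client and Resource Manager.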



1.2.5 Content Manager APIs The Content Manager Application Programming Interface (API) provides communication through Content Manager and Information Integrator for Content connectors. The Content Manager V8 connector supports C++ APIs (on Windows and AIX) as well as Java. The Content Manager V8 connector also supports JavaBeans™, XML, and Portlets. Communication with the Library Server (through SQL queries) is through JDBC™ (Java Database Connectivity) or CLI (Call Level Interface). Communication with the Resource Manager is through HTTP request, FTP, and RTPS (Real-Time Publish Subscribe protocol).

1.3 Content Manager system configuration Content Manager architecture is open, portable, and extensible. In earlier sections, we introduced the main Content Manager components: the Library Server, Resource Manager, and Content Manager clients. Depending on your business requirements, a Content Manager system can be built with a two-tier or a three-tier configuration. The two-tier configuration consists of the Client for Windows or custom clients at the top tier, and the Library Server and the Resource Managers at the bottom tier. See Figure 1-6.

[Figure 1-6 Content Manager two-tier configuration: a Client for Windows or custom client connects directly to the Library Server and Resource Manager.]

Three-tier configuration (Figure 1-7 on page 14) consists of a browser or other types of clients that talk to the mid-tier server at the top tier, eClient at the mid-tier, and the Library Server and the Resource Managers at the bottom tier.



[Figure 1-7 Content Manager three-tier configuration: a browser talks to a mid-tier WebSphere Application Server running the eClient (JavaBeans over the ICM Java API), which in turn talks to the Library Server and Resource Manager.]

Different Content Manager system configurations enable you to achieve different business requirements. System configuration plays an important role in overall system performance. In this section, we introduce several common Content Manager system configurations:
- All in one machine
- Separate servers on different machines
- Multiple Resource Managers
- Mixed platforms
- Multiple Content Manager systems with mixed platforms


All in one machine Install all components (Library Server, Resource Manager, system administration client, Client for Windows, eClient) on a single machine as shown in Figure 1-8. This is a good option for the first prototype Content Manager system.

[Figure 1-8 System configuration: all in one machine — Library Server, Resource Manager, system administration client, Client for Windows, and eClient on a single machine.]

Separate servers on different machines Split the Content Manager's main components across different machines as shown in Figure 1-9. Install the Library Server on one machine, and the Resource Manager on another machine. Access the system using either the Client for Windows or the eClient. The system administration client can be installed on the same machine as the client machine.

[Figure 1-9 System configuration: separate servers on different machines — the Library Server and the Resource Manager each on their own machine, accessed from a client workstation.]



Multiple Resource Managers Install multiple Resource Managers with a single Library Server running on the same platform. Figure 1-10 shows an example. Access the system using either the Client for Windows or the eClient from a workstation. The system administration client can be installed on the same machine as the client machine.

[Figure 1-10 System configuration: multiple Resource Managers — an AIX Library Server with multiple AIX Resource Managers, accessed from a client workstation.]



Mixed platforms Set up a Content Manager system with a single Library Server and multiple Resource Managers running on different platforms. Figure 1-11 shows an example. Access the system using either the Client for Windows or the eClient on a workstation. System administration client can be installed on the same machine as the client machine.

[Figure 1-11 System configuration: mixed platforms — a single Library Server with Resource Managers on AIX and Windows, accessed from a client workstation.]



Multiple Content Manager systems with mixed platforms Set up multiple Content Manager systems with multiple Library Servers and Resource Managers running on different platforms. Figure 1-12 shows an example. Access each system using one unique client, either a Client for Windows or an eClient. Note that each Content Manager system can have one and only one Library Server.

[Figure 1-12 System configuration: multiple CM systems, mixed platforms — two Content Manager systems, with Library Servers on Sun and AIX and Resource Managers on Windows, accessed from a client workstation.]

Summary In summary, there are many ways to configure a Content Manager system. You can set up everything on one machine, or separate the components across multiple machines and multiple platforms, based on your business requirements. Remember, system configuration directly affects system performance. We discuss performance impact and performance tuning later in this book.



Chapter 2. Content Manager base products

Content Manager Version 8 is built on the products IBM DB2 UDB, WebSphere Application Server, and Tivoli Storage Manager. Performance tuning a Content Manager system includes understanding, configuring, and tuning this underlying software for performance. In this chapter we provide an overview of these three base software components. For convenience, we use the term "Content Manager base product" (or "base product" for short) to denote any of these products in this context. Note: Oracle is another database option that you can use with Content Manager. We do not discuss it in this chapter because it is beyond the scope of this book; we focus mainly on DB2 in this redbook.

© Copyright IBM Corp. 2003, 2006. All rights reserved.



2.1 DB2 Universal Database overview Both the Library Server and the Resource Manager use relational databases. IBM DB2 Universal Database (DB2) is a multimedia, Web-ready, relational database management system (RDBMS). It is available on the z/OS, UNIX®, Windows, and AS/400® platforms. Understanding DB2 is critical to success in performance tuning Content Manager systems. There are many well-written publications on DB2 that contain very useful information. Most of the material presented in the following sections is extracted from these publications:
- IBM DB2 Universal Database Version 8.2 Administration Guide: Planning, SC09-4822
- IBM DB2 Universal Database Version 8.2 Administration Guide: Performance, SC09-4821
- IBM DB2 UDB V8 and WebSphere V5 Performance Tuning and Operation Guide, SG24-7068
We include what we think is important to give you a quick start in the basic concepts of DB2, aspects of performance tuning for the DB2 database system, and the routine maintenance that should be performed. For a more in-depth understanding of DB2, we recommend that you read these publications; they proved extremely helpful to us.

2.1.1 Basic DB2 database concepts In this section, we introduce some of the basic relational database concepts. This includes an overview of the various database objects and the database configuration files.

Database objects Database objects are the components that make up a DB2 relational database. Some of the more important database objects are: instances, databases, database partition groups, tables, views, indexes, schemas, system catalog tables, buffer pools, table spaces, containers, and stored procedures. Figure 2-1 on page 21 displays the database objects relationship. The individual components of the database objects will be discussed throughout this section.



[Figure 2-1 Database objects relationship. The figure shows the hierarchy: system > instance > database > database partition group > table spaces, which contain tables, indexes, and long data.]

Instances (database managers) An instance (also known as a database manager) is DB2 code that manages data. It controls what can be done to the data, and manages system resources that are assigned to it. Each instance is a complete environment. If the database system is parallel, it contains all of the database partitions. An instance has its own databases (which other instances cannot access), and all its database partitions share the same system directories. It also has separate security from other instances on the same machine or system.

Databases A relational database presents data as a collection of tables. A table consists of a defined number of columns and any number of rows. Each database includes a set of system catalog tables that describe the logical and physical structure of the data, a configuration file containing the parameter values allocated for the database, and a recovery log with ongoing transactions and transactions to be archived.



Database partition groups A database partition group is a set of one or more database partitions. When you want to create tables for the database, you first create the database partition group where the table spaces will be stored, then you create the table space where the tables will be stored. In earlier versions of DB2, database partition groups were known as nodegroups. Note: For a Content Manager system, we do not recommend database partitioning.

Tables A relational database presents data as a collection of tables. A table consists of data that is arranged logically in columns and rows. All database and table data is assigned to table spaces. The data in the table is logically related, and relationships can be defined between tables. Data can be viewed and manipulated based on mathematical principles and operations called relations. Table data is accessed through Structured Query Language (SQL), a standardized language for defining and manipulating data in a relational database. A query is used in applications or by users to retrieve data from a database. The query uses SQL to create a statement in the form of: SELECT <data_name> FROM <table_name>
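The SELECT form above can be made concrete with a small example (the table and column names here are hypothetical, purely for illustration):

```sql
-- List the last names of employees in a hypothetical department D11
SELECT lastname
FROM employee
WHERE workdept = 'D11';
```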

Views A view is an efficient way of representing data without needing to maintain it. A view is not an actual table and requires no permanent storage. A "virtual table" is created and used. A view can include all or some of the columns or rows contained in the tables on which it is based. For example, you can combine a department table and an employee table in a view, so that you can list all employees in a particular department.
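The department/employee example can be sketched as a view like the following (all table and column names are assumed for illustration, not taken from Content Manager):

```sql
-- Define a virtual table joining the two base tables
CREATE VIEW deptemp AS
  SELECT d.deptname, e.lastname
  FROM department d
  JOIN employee e ON e.workdept = d.deptno;

-- Query the view exactly as if it were a table
SELECT lastname FROM deptemp WHERE deptname = 'OPERATIONS';
```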

Indexes An index is a set of pointers to rows in a table. An index allows more efficient access to rows in a table by creating a direct path to the data through pointers. The SQL optimizer automatically chooses the most efficient way to access data in tables. The optimizer takes indexes into consideration when determining the fastest access path to data. Unique indexes can be created to ensure uniqueness of the index key. An index key is a column or an ordered collection of columns on which an index is defined.



Using a unique index will ensure that the value of each index key in the indexed column or columns is unique.
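For example, a unique index on a hypothetical employee-number column might be defined as:

```sql
-- Rejects any insert or update that would duplicate an empno value,
-- and gives the optimizer a direct path to rows by that key
CREATE UNIQUE INDEX xempno ON employee (empno);
```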

Schemas A schema is an identifier, such as a user ID, that helps group tables and other database objects. A schema can be owned by an individual, and the owner can control access to the data and the objects within it. A schema is also an object in the database. It may be created automatically when the first object in a schema is created. Such an object can be anything that can be qualified by a schema name, such as a table, index, view, package, distinct type, function, or trigger. A schema name is used as the first part of a two-part object name. When an object is created, you can assign it to a specific schema. If you do not specify a schema, it is assigned to the default schema, which is usually the user ID of the person who created the object. The second part of the name is the name of the object. For example, a user named Smith might have a table named SMITH.PUBLICATIONS.
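The two-part naming can be sketched as follows (schema and table names are illustrative):

```sql
-- Create the schema explicitly, owned by user smith
CREATE SCHEMA smith AUTHORIZATION smith;

-- Qualify the table with the schema name
CREATE TABLE smith.publications (
  pubid INTEGER NOT NULL PRIMARY KEY,
  title VARCHAR(200)
);
```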

System catalog tables Each database includes a set of system catalog tables, which describe the logical and physical structure of the data. DB2 creates and maintains an extensive set of system catalog tables for each database. These tables contain information about the definitions of database objects such as user tables, views, and indexes, as well as security information about the authority that users have on these objects. The system catalog tables are created when the database is created, and are updated during the course of normal operation. You cannot explicitly create or drop them, but you can query and view their contents using the catalog views.

Buffer pools A buffer pool is an area of reserved main memory into which user table data, index data, and catalog data from disk storage are temporarily stored or cached. The buffer pool plays a key role in overall database performance, because data accessed from memory is much faster than data accessed from disk. The fewer times the database manager needs to read from or write to disk, the better the performance. Configuring the buffer pools is one of the most important aspects of performance tuning for a Content Manager system because it lets you reduce the delay caused by slow I/O. Buffer pools can be defined with varying page sizes, including 4 KB, 8 KB, 16 KB, and 32 KB. By default, the Content Manager Library Server uses 4 KB and 32 KB page sizes for different parts of the database.
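Creating a buffer pool with a non-default page size might look like this sketch (the name and size are illustrative; actual sizes should come from your own tuning):

```sql
-- 1000 pages x 32 KB gives roughly 32 MB of cache for 32 KB table spaces
CREATE BUFFERPOOL bigpool SIZE 1000 PAGESIZE 32 K;
```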



Table spaces A database is organized into parts called table spaces. A table space is a place to store tables. When creating a table, you can decide to have certain objects, such as indexes and large object (LOB) data, kept separately from the rest of the table data. A table space can also be spread over one or more physical storage devices. Table spaces reside in database partition groups. Table space definitions and attributes are recorded in the database system catalog. Containers are assigned to table spaces. A container is an allocation of physical storage (such as a file or a device). A table space can be either system managed space (SMS) or database managed space (DMS). For an SMS table space, each container is a directory in the file space of the operating system, and the operating system's file manager controls the storage space. For a DMS table space, each container is either a fixed-size pre-allocated file or a physical device such as a disk, and the database manager controls the storage space. The page size of the table space must match the buffer pool page size. The default user table space is called USERSPACE1. Any user-created tables that are not specifically defined to an existing table space default to this table space. Always specify a user-defined table space when creating user tables and indexes to avoid having them created in USERSPACE1. The system catalog tables exist in a regular table space called SYSCATSPACE. There is also a default system temporary table space called TEMPSPACE1, which is used to store internal temporary data required during SQL operations such as sorting, reorganizing tables, creating indexes, and joining tables. Tables containing long field data or large object data, such as multimedia objects, exist in large table spaces or in regular table spaces. The base column data for these columns is stored in a regular table space, and the long field or large object data can be stored in the same regular table space or in a specified large table space. Figure 2-2 displays the different types of table spaces that might exist within a database.



[Figure 2-2 Table space types. Within a database: regular table spaces hold user data; an optional large table space holds multimedia objects or other large object data; system temporary table spaces and user temporary table spaces hold temporary data.]
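The SMS and DMS alternatives described above can be sketched as follows (paths, names, and sizes are illustrative):

```sql
-- SMS: each container is a directory; the operating system manages the space
CREATE TABLESPACE app_sms
  MANAGED BY SYSTEM
  USING ('/db2/sms/cont0');

-- DMS: each container is a pre-allocated file or device; DB2 manages the space
CREATE TABLESPACE app_dms
  MANAGED BY DATABASE
  USING (FILE '/db2/dms/cont0' 5000);
```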

The Content Manager product has predefined table spaces that are created when the product is installed. Table 2-1 lists the default table spaces created for the Library Server. Table 2-2 lists the default table spaces created for the Resource Manager.

Table 2-1 Library Server default table space setup

- ICMLSMAINBP32 buffer pool, ICMLFQ32 table space: Holds all large, persistent, frequently used tables, most notably all item-type tables. When a new item type is defined, the system administration client defines the new tables in this table space. If you customize the DDL, this table space should be defined across multiple containers for performance and scalability.
- ICMLSMAINBP32 buffer pool, ICMLNF32 table space: Holds the large but less frequently used tables, such as event logs, versions, and replicas. If you do not use any of these features, this table space will not need much space.
- ICMLSVOLATILEBP4 buffer pool, ICMVFQ04 table space: Holds the volatile tables, whose size is usually small but can vary widely, and for which both updates and deletes are very common (for example, the tables associated with document routing "in-progress" items and checked-out items). Putting this table space on a separate physical disk should help performance.
- ICMLSFREQBP4 buffer pool, ICMSFQ04 table space: Holds all the small, frequently used, but seldom updated tables, such as system information, user definitions, and ACLs, that do not take up much space but are always needed in memory for good performance.
- CMBMAIN4 buffer pool, CMBINV04 table space: Holds the tables used for Information Integrator for Content federated administration.

Table 2-2 Resource Manager default table space setup

- OBJECTPOOL buffer pool, OBJECTS table space: Primary table space that holds the tables that grow very large, with information about all the objects stored on this Resource Manager.
- SMSPOOL buffer pool, SMS table space: Holds small but frequently read administrative information.
- TRACKINGPOOL buffer pool, TRACKING table space: Holds volatile tables related to transactions.
- PARTSPOOL buffer pool, PARTS table space: Holds tables used by the validation utility, which is infrequently used but can create a very large volume of data.
- BLOBPOOL buffer pool, BLOBS table space: Reserved for future use.

Containers A container is a physical storage device. It can be identified by a directory name, a device name, or a file name. A container is assigned to a table space. A single table space can span many containers, but each container can belong to only one table space. We strongly recommend using multiple containers in order to distribute data across many disks and increase input/output (I/O) parallelism. DB2 has a limit of 512 GB per table space using a 32 KB page size.
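A multi-container DMS table space along the lines of Figure 2-3 might be defined as follows (drive paths and sizes are illustrative):

```sql
-- Striping one table space across containers on separate disks
-- increases I/O parallelism
CREATE TABLESPACE humanres
  MANAGED BY DATABASE
  USING (FILE 'D:\DBASE1' 5000,
         FILE 'E:\DBASE1' 5000,
         FILE 'F:\DBASE1' 5000);
```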



Figure 2-3 shows how the HUMANRES table space is spread across five containers, with each container on a different disk.

[Figure 2-3 Table space spread over multiple containers. The HUMANRES table space, holding the EMPLOYEE, DEPARTMENT, and PROJECT tables, spans containers 0 through 4 at D:\DBASE1, E:\DBASE1, F:\DBASE1, G:\DBASE1, and H:\DBASE1.]

Stored procedures Stored procedures permit one call to a remote database to perform a pre-programmed procedure. They suit database application environments in which many operations are repetitive; for example, retrieving a fixed set of data may require performing the same set of multiple requests against the database. Processing a single SQL statement for a remote database requires sending two transmissions: one request and one receive. An application might require many SQL statements to complete its work, which could result in many transmissions. A stored procedure that encapsulates those SQL statements enables the work to be done in one transmission. A stored procedure is a dynamically linked library (DLL) or program that runs on the server and is called by the database engine in response to a call from a client.



Content Manager uses stored procedures for such functions as checking for access control privileges and executing user functions such as searching for an item. A stored procedure can call other DLLs as required. Stored procedures usually run in a fenced mode where they are performed in processes separate from the database agents.
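As a sketch of the idea, a SQL stored procedure that bundles a repetitive lookup into one server-side call might look like this (the procedure and table names are hypothetical, not Content Manager's own stored procedures):

```sql
-- One client/server round trip instead of several
CREATE PROCEDURE count_items (IN  p_type  VARCHAR(64),
                              OUT p_count INTEGER)
LANGUAGE SQL
BEGIN
  SELECT COUNT(*) INTO p_count
  FROM items
  WHERE item_type = p_type;
END;
```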

Configuration files and parameters When a DB2 instance or a database is created, a corresponding configuration file is created with default parameter values. You can and should modify these parameter values to adapt the configuration to your environment and potentially improve performance. Configuration files contain parameters that define values such as the resources allocated to the DB2 products and to individual databases, and the diagnostic level. There are two types of configuration files:
- The database manager configuration file, one for each DB2 instance
- The database configuration file, one for each individual database

Database manager configuration file The database manager configuration file is created when a DB2 instance is created. The parameters it contains affect system resources at the instance level, independent of any one database that is part of that instance. Values for many of these parameters can be changed from the system default values to improve performance or increase capacity, depending on your system's configuration. There is also one database manager configuration file for each client installation. This file contains information about the client enabler for a specific workstation. A subset of the parameters available for a server are applicable to the client. Database manager configuration parameters are stored in a file named db2systm. This file is created when the instance of the database manager is created. It is a binary file that can only be administered using DB2 tools and commands. In UNIX-based environments, this file can be found in the sqllib subdirectory for the instance of the database manager. In Windows environments, the default location of this file is the instance subdirectory of the sqllib directory. If the DB2INSTPROF variable is set, the file is in the instance subdirectory of the directory specified by the DB2INSTPROF variable. In a partitioned database environment, this file resides on a shared file system so that all database partition servers have access to the same file. The configuration of the database manager is the same on all database partition servers.



Most of the parameters affect either the amount of system resources that will be allocated to a single instance of the database manager, or the setup of the database manager and the different communications subsystems based on environmental considerations. In addition, there are other parameters that serve informative purposes only and cannot be changed. All of these parameters have global applicability independent of any single database stored under that instance of the database manager.

Database configuration file A database configuration file is created when a database is created, and resides where that database resides. There is one configuration file per database. Its parameters specify the amount of resources to be allocated to that database. Values for many of the parameters can be changed to improve performance or increase capacity. Different changes might be required, depending on the type of activity in a specific database. Parameters for an individual database are stored in a binary configuration file named SQLDBCON. This file is stored along with other control files for the database in the SQLnnnnn directory, where nnnnn is a number assigned when the database was created. Each database has its own configuration file, and most of the parameters in the file specify the amount of resources allocated to that database. The file also contains descriptive information, as well as flags that indicate the status of the database. In a partitioned database environment, a separate SQLDBCON file exists for each database partition. The values in the SQLDBCON file might be the same or different at each database partition, but we recommend that the database configuration parameter values be the same on all partitions. As we mentioned earlier, for a Content Manager system, we do not recommend partitioning databases.
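Both configuration levels can be viewed and changed from the DB2 command line processor. A sketch (ICMNLSDB is the usual Content Manager Library Server database name; substitute your own database and values, which should come from your own tuning):

```
db2 get dbm cfg
db2 update dbm cfg using MAXAGENTS 200

db2 get db cfg for ICMNLSDB
db2 update db cfg for ICMNLSDB using NUM_IOCLEANERS 4 NUM_IOSERVERS 4
```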

2.1.2 DB2 architecture overview The previous section provided a brief overview of some basic relational database concepts. Now we examine the DB2 architecture and processes as shown in Figure 2-4. Each client application, local or remote, is linked with the DB2 client library. Local clients communicate using shared memory and semaphores. Remote clients communicate using communication protocols such as TCP/IP, Named Pipes, NetBIOS, or IPX/SPX. Activities on the DB2 server are controlled by threads in a single process on Windows, or by processes on UNIX. In the figure, they are denoted with circles or elliptical shapes, and processes with thick borders can be launched with multiple instances. For example, there might be one or more subagents, loggers, prefetchers, and page cleaners running, but



there is only one coordinator agent assigned to each client application. In the following sections, we go over the key DB2 components and processes.

[Figure 2-4 DB2 architecture and processes overview. Client applications link with the DB2 client library and communicate with the server through shared memory and semaphores (local) or TCP/IP, Named Pipes, NetBIOS, SNA, or IPX/SPX (remote). On the server, a coordinator agent and subagents work for each application against the buffer pools; the logger writes the log buffer to the log on disk; prefetchers service a common prefetch request queue with parallel big-block and scatter/gather I/O against the disks; page cleaners write changed pages out; a deadlock detector monitors locks.]

DB2 agents (coordinator agent and subagents) DB2 agents are the DB2 processes that carry out the bulk of SQL processing requests from applications. They include coordinator agents and subagents. DB2 assigns a coordinator agent for each client application. This agent coordinates the communication and processing for this application. If intra-partition parallelism is disabled (this is the default), then the coordinator agent performs all of the application's requests. If intra-partition parallelism is enabled, then DB2 assigns a set of subagents to the application to process requests from the application. If the machine where the DB2 server resides has multiple processors, multiple subagents can also be assigned to an application.



The maximum number of agents that can be connected to a database instance can be controlled by database manager configuration parameters such as MAXAGENTS, MAXCAGENTS, and MAX_COORDAGENTS. The maximum number of applications that can be connected to a database can be controlled by the database configuration parameter MAXAPPLS and the database manager configuration parameter MAXAGENTS. All agents and subagents are managed using a pooling algorithm that minimizes their creation and destruction.

Prefetchers Prefetchers retrieve data from disk and move it into the buffer pool before applications need the data. For example, applications needing to scan through large item-type tables would have to wait for data to be moved from disk into the buffer pool if there were no prefetchers. With prefetch, DB2 agents send asynchronous read-ahead requests to a common prefetch request queue. As prefetchers become available, they implement those requests by using big-block or scatter-read input operations to bring the requested pages from disk into the buffer pool. Having multiple disks for database data storage means that the data can be striped across the disks. This striping enables the prefetchers to use multiple disks at the same time to retrieve data. Prefetchers improve the read performance of applications as well as utilities such as backup and restore, because they prefetch index and data pages into the buffer pool, thereby reducing the time spent waiting for I/O to complete. The number of prefetchers can be controlled by the database configuration parameter NUM_IOSERVERS.

Page cleaners Page cleaners (also known as asynchronous page cleaners) make room in the buffer pool so that agents and prefetchers can read pages from disk storage and move them into the buffer pool. They look for updated pages in the buffer pool and write them to disk so that these pages can be used for other data if needed. For example, if an application has updated a large amount of data in a table, many of the updated data pages in the buffer pool might not have been written to disk storage yet; such pages are called dirty pages. Because prefetchers cannot place pages fetched from disk storage onto dirty pages, these dirty pages must first be flushed to disk storage, becoming clean pages. (The clean pages remain in the buffer pool until a prefetcher or a DB2 agent "steals" them.)

31


Page cleaners are independent of the application agents that read and write to pages in the buffer pool. Page cleaners write changed pages from the buffer pools to disk before a database agent requires the space in the buffer pool. This process eliminates the need for the database agent to write modified pages to disk, thereby improving performance. Without the availability of independent prefetchers and page cleaners, database agents would have to do all of the reading and writing of data between the buffer pool and disk storage. The configuration of the buffer pool, along with prefetchers and page cleaners, controls the availability of the data needed by the applications. Note: Performance can improve if more page-cleaner agents are available to write dirty pages to disk. This is particularly true if snapshots reveal that there are a significant number of synchronous data-page or index-page writes in relation to the number of asynchronous data-page or index-page writes. The number of page cleaners can be controlled by the database configuration parameter NUM_IOCLEANERS.

Proactive page cleaning Starting with DB2 Version 8.1.4, there is an alternative method of configuring page cleaning. It differs from the default behavior in that page cleaners behave more proactively in choosing which dirty pages get written out at any given time. The new method differs from the default page cleaning method in two major ways:
- Page cleaners no longer react to the value of the CHNGPGS_THRESH configuration parameter. Instead, the alternate method of page cleaning provides a mechanism whereby the agents are informed of the location of good victim pages that have just been written out, so that agents do not have to search the buffer pool for a victim. When the number of good victim pages drops below an acceptable value, the page cleaners are triggered and proceed to search the entire buffer pool, writing out potential victim pages and informing the agents of their location.
- Page cleaners no longer respond to LSN gap triggers issued by the logger. When the amount of log space between the log record that updated the oldest page in the buffer pool and the current log position exceeds what the SOFTMAX parameter allows, the database is said to be in an LSN gap situation. Under the default method of page cleaning, when the logger detects that an LSN gap has occurred, it triggers the page cleaners to write out all of the pages that are contributing to the LSN gap situation. That is, it writes out those pages that are older than what is allowed by the



SOFTMAX parameter. Page cleaners will be idle for some period of time while no LSN gap is occurring. Then, when an LSN gap occurs, the page cleaners are activated to write a large number of pages before going back to sleep. This can result in the saturation of the I/O subsystem, which then affects other agents that are reading or writing pages. Furthermore, by the time an LSN gap is triggered, it is possible that the page cleaners will not be able to clean fast enough and DB2 might run out of log space. The alternate method of page cleaning modulates this behavior by spreading out the same number of writes over a greater period of time. The cleaners do this by proactively cleaning not only the pages that are currently in an LSN gap situation, but also the pages that will come into an LSN gap situation soon, based on the current level of activity. To use the new method of page cleaning, set the DB2_USE_ALTERNATIVE_PAGE_CLEANING registry variable to ON.
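Switching to the alternate method is a registry-variable change followed by an instance restart; for example:

```
db2set DB2_USE_ALTERNATIVE_PAGE_CLEANING=ON
db2stop
db2start
```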

Log processing and loggers All databases maintain log files that keep records of database changes. All changes to regular data and index pages are written as a log record into a log buffer. The data in the log buffer is written to disk by the logger process. To optimize performance, neither the updated data pages in the buffer pool nor the log records in the log buffer are written to disk immediately. They are asynchronously written to disk by page cleaners and the logger, respectively. The logger and the buffer pool manager cooperate and ensure that an updated data page is not written to disk storage before its associated log record is written to the log. This ensures database recovery to a consistent state from the log in the event of a crash such as a power failure. There are two logging strategy choices:  Circular logging, in which the log records fill the log files and then overwrite the initial log records in the initial log file. The overwritten log records are not recoverable. For example, assume that a database has a circular logging strategy using five log files. When the fifth log file becomes full, logging would continue against the first log file, overwriting the fileâ&#x20AC;&#x2122;s original contents.  Retain log records, in which a log file is archived when it fills with log records. New log files are made available for log records. Retaining log files enables roll-forward recovery. Roll-forward recovery reapplies changes to the database based on completed units of work (transactions) that are recorded in the log. You can specify that roll-forward recovery is to the end of the logs, or to a particular point in time before the end of the logs.



Regardless of the logging strategy, log buffers are written to disk under the following conditions:
- Before the corresponding data pages are written to disk. This is called write-ahead logging.
- On a COMMIT, or after the number of COMMITs specified by the MINCOMMIT database configuration parameter is reached.
- When the log buffer is full. Double buffering is used to prevent I/O waits.
Note: The log retain option can severely affect the functionality of your system if not properly maintained. With this strategy, log files are continually created as needed, filling up disk space in the process. When the file system or disk is full, DB2 will be unable to create log records and will start writing error messages continually into the diagnostics log file (db2diag.log) until the problem is resolved. Regular pruning of old log files must be done; this is discussed in the subsequent section about routine maintenance.
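As a sketch, switching a database to the log retain strategy is a configuration change (the database name is illustrative; note that in DB2 V8 the database then requires a full backup before it can be used):

```
db2 update db cfg for MYDB using LOGRETAIN RECOVERY
db2 backup database MYDB
```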

Deadlock detector A deadlock occurs when one application is waiting for another application to release a lock on data. Each of the waiting applications is locking data needed by another application, and this mutual waiting leads to a deadlock: the applications can wait forever until one of them releases its lock on the held data. Because applications do not voluntarily release locks on data that they need, DB2 uses a background process called the deadlock detector to identify and resolve deadlocks. The deadlock detector becomes active periodically, as determined by the DLCHKTIME configuration parameter. When the deadlock detector encounters a deadlock situation, one of the deadlocked applications receives an error code, and the current unit of work for that application is rolled back automatically by DB2. When the rollback is complete, the locks held by this chosen application are released, enabling the other applications to continue. Note: Selecting the proper interval for the deadlock detector ensures good performance. If the interval is too short, the deadlock detector becomes active too frequently, causing unnecessary overhead. If the interval is too long, deadlock situations take longer to resolve, delaying processes unnecessarily and negatively affecting performance.
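The interval is set in milliseconds through the DLCHKTIME database configuration parameter; for example, to check for deadlocks every 10 seconds (the database name is illustrative):

```
db2 update db cfg for MYDB using DLCHKTIME 10000
```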

Memory usage A primary performance tuning task is deciding how to divide the available memory among the areas in the database. An understanding of how DB2



organizes memory and the many configuration parameters that affect memory usage is essential.

Instance shared memory (also known as database-manager shared memory) is allocated when the database manager is started. All other memory is attached or allocated from the instance shared memory. When the first application connects or attaches to a database, database shared, application shared, and agent private memory areas are allocated. Instance shared memory can be controlled by the INSTANCE_MEMORY configuration parameter of the database manager. By default, this parameter is set to AUTOMATIC so that DB2 calculates the amount of memory allocated for the instance.

Database shared memory (also known as database global memory) is allocated when a database is activated or connected to for the first time. This memory is used across all applications that might connect to the database. Database shared memory can be controlled by the DATABASE_MEMORY configuration parameter at the database level. By default, this parameter is set to automatic so that DB2 calculates the amount of memory allocated for the database. The following memory areas contained in the database shared memory are extremely important in performance tuning:

 Buffer pools
 Lock list
 Database heap (includes the log buffer)
 Utility heap
 Package cache
 Catalog cache

Although the total amount of database global memory cannot be increased or decreased while the database is active, all of the memory areas listed above can be changed dynamically while the database is running. Refer to Chapter 8, “Tuning DB2 for Content Manager” on page 171 for an in-depth look at the configuration parameter values and recommendations. A system administrator needs to consider the overall balance of memory usage on the system, because different applications running on the operating system might use memory in different ways. For the Content Manager application on AIX, the majority of system memory should not be given to DB2: only about 20% of available memory should be assigned to DB2, and the rest should be used for disk cache. As you will see often in this book, tuning is mandatory and should be performed on a regular basis to maintain optimal system performance.
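As an illustrative sketch (the database name CMDB and the values shown are assumptions, not recommendations), these areas are controlled through the database manager and database configurations, and the areas listed above can be resized while the database is running:

```sql
-- Let DB2 size instance and database memory automatically (the default)
UPDATE DBM CFG USING INSTANCE_MEMORY AUTOMATIC;
UPDATE DB CFG FOR CMDB USING DATABASE_MEMORY AUTOMATIC;

-- Dynamically resize two of the performance-critical areas, for example
-- the lock list and the package cache (values are in 4 KB pages)
UPDATE DB CFG FOR CMDB USING LOCKLIST 200 PCKCACHESZ 1024;
```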

Chapter 2. Content Manager base products

35


Figure 2-5 shows different portions of memory that the database manager allocates for various uses.

[Figure 2-5 shows nested memory areas: database manager shared memory (one per instance) contains database global memory (one per database, up to NUMDB databases); each database global memory contains application global memory (one per connected application, up to MAXAPPLS); agent private memory is allocated per agent, up to MAXAGENTS agents.]

Figure 2-5 DB2 memory model

When an application connects to a database in a partitioned database environment, in a non-partitioned database with the database manager intra-partition parallelism configuration parameter (INTRA_PARALLEL) enabled, or in an environment in which the connection concentrator is enabled, multiple applications can be assigned to application groups to share memory. Each application group has its own allocation of shared memory. In the application-group shared memory, each application has its own application control heap but uses the shared heap of the application group. The following three database configuration parameters determine the size of the application group memory:

 The APPGROUP_MEM_SZ parameter, which specifies the size of the shared memory for the application group


 The GROUPHEAP_RATIO parameter, which specifies the percentage of the application-group shared memory allowed for the shared heap

 The APP_CTL_HEAP_SZ parameter, which specifies the size of the control heap for each application in the group

The performance advantage of grouping application memory use is improved cache and memory-use efficiency. The figure also lists the following configuration parameter settings, which limit the amount of memory that is allocated for each specific purpose. In a partitioned database environment, this memory is allocated on each database partition:

 NUMDB
This parameter specifies the maximum number of concurrent active databases that different applications can use. Because each database has its own global memory area, the amount of memory that might be allocated increases if you increase the value of this parameter. When you set NUMDB to automatic, you can create any number of databases and memory consumption grows accordingly. Set this parameter to the number of databases within the instance.

 MAXAPPLS
This parameter specifies the maximum number of applications, both local and remote, that can be connected simultaneously to a database. It affects the amount of memory that might be allocated for agent private memory and application global memory for that database. Note that this parameter can be set differently for every database. The default is automatic, which allows any number of connected applications. We do not suggest using the default; set this parameter to the expected number of connections to the database.

 MAXAGENTS and MAX_COORDAGENTS for parallel processing
These parameters limit the number of database manager agents that can exist simultaneously across all active databases in an instance. (MAX_COORDAGENTS is not shown in the figure.) Together with MAXAPPLS, these parameters limit the amount of memory allocated for agent private memory and application global memory.
The value of MAXAGENTS should be at least the sum of the MAXAPPLS values for all databases that are allowed to be accessed concurrently.
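A minimal sketch of setting these limits from the command line processor (the database name and the values are hypothetical):

```sql
-- Instance-level limits (database manager configuration)
UPDATE DBM CFG USING NUMDB 3;
UPDATE DBM CFG USING MAXAGENTS 400;

-- Per-database limit on simultaneous connections
UPDATE DB CFG FOR CMDB USING MAXAPPLS 200;
```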

Database manager shared memory
Memory space is required for the database manager to run. This space can be very large, especially in intra-partition and inter-partition parallelism environments.


Figure 2-6 shows how memory is used to support applications. The configuration parameters shown enable you to control the size of this memory by limiting the number and size of memory segments, which are portions of logical memory.

[Figure 2-6 shows the memory areas and the configuration parameters that size them:
 Database manager shared memory (including FCM): monitor heap (MON_HEAP_SZ), audit buffer (AUDIT_BUF_SZ)
 Database global memory: utility heap (UTIL_HEAP_SZ), buffer pools, backup buffer, extended memory cache, restore buffer (RESTBUFSZ), lock list (LOCKLIST), database heap (DBHEAP) with log buffer (LOGBUFSZ) and catalog cache (CATALOGCACHE_SZ), package cache (PCKCACHESZ)
 Application global memory (APP_CTL_HEAP_SZ)
 Agent private memory: application heap (APPHEAPSZ), agent stack (AGENT_STACK_SZ), statistics heap (STAT_HEAP_SZ), sort heap (SORTHEAP), DRDA heap, UDF memory, statement heap (STMTHEAP), query heap (QUERY_HEAP_SZ), Java heap (JAVA_HEAP_SZ), client I/O block (RQRIOBLK) (remote)
 Agent/application shared memory: application support layer heap (ASLHEAPSZ), client I/O block (RQRIOBLK) (local)]

Figure 2-6 Memory usage

Note: Box size does not indicate relative size of memory.


You can predict and control the size of this space by reviewing information about database agents. Agents running on behalf of applications require substantial memory space, especially if the value of MAXAGENTS is not appropriate. For partitioned database systems, the fast communications manager (FCM) requires substantial memory space, especially if the value of FCM_NUM_BUFFERS is large. In addition, the FCM memory requirements are either allocated from the FCM buffer pool, or from both the database manager shared memory and the FCM buffer pool, depending on whether the partitioned database system uses multiple logical nodes.

2.1.3 Performance-related concepts
The DB2 database management system provides a set of features and concepts that are closely related to the performance of a DB2 database system. Some of them reflect the architectural concepts that we discussed in 2.1.2, “DB2 architecture overview” on page 29. Depending on your specific database application, each of them can cause a performance bottleneck in your database environment if the database is not set up according to the requirements of the application. Thus it is essential to know and understand these concepts for effective DB2 performance tuning. In this section, we give an overview of these concepts and describe how they affect database performance. In Chapter 8, “Tuning DB2 for Content Manager” on page 171, we go through the individual configuration steps to adjust the setup of your database environment. In 11.4, “Monitoring and analysis tools for DB2” on page 295, we provide an overview of the tools and techniques that enable you, on each platform, to verify that your current database setup meets your requirements. This section requires you to be familiar with the basic concepts and architecture as outlined in 2.1.1, “Basic DB2 database concepts” on page 20 and 2.1.2, “DB2 architecture overview” on page 29. Among others, we specifically discuss the following performance-related concepts:

 Query execution and access plan
 Indexes
 Data placement
 Clustering

Query execution and access plan
Any SQL query sent to the database is first passed to the SQL compiler. As part of this compilation process, the query optimizer analyzes the query. The compiler develops alternative strategies, called access plans, for processing the query. DB2 UDB evaluates the access plans primarily on the basis of information


about the data source capabilities (such as disk I/O capabilities) and about the data itself. In general, many access plans and variations are examined before the optimal access plan is chosen based on the available information. Depending on the type of query, compilation and access plan evaluation happen at different points in time:

 For static SQL, the SQL query is embedded in an application program and explicitly bound to the database during the process of making this application program executable. In this case, compilation of the SQL query and evaluation of access plans is done once, during the process of binding the application. For performance reasons, the access plan is then stored so that it can be reused whenever the query is performed. Nevertheless, the user might (and should!) explicitly request to recompile a query whenever any of the facts that led the compiler to its choice of access plan has changed, especially whenever the system catalog statistics have been updated as described in “System catalog statistics” on page 46.

 For dynamic SQL, including all SQL queries that are entered in an interactive query tool, the SQL query is compiled whenever it is submitted to the database for execution. DB2 might apply some intelligent query caching to avoid recompiling identical queries, but this is not under full control of the user.

The access plan is the key when it comes to the performance of an individual query. Although an access plan is internal and not automatically exposed to the user, DB2 provides means to visualize and analyze it. (In Chapter 11, “Performance monitoring, analysis, and tracing” on page 281, we show how to use these tools.) An access plan might be poor or good. When a specific query shows poor performance, analysis of the access plan of this query typically provides the clue to the resolution of the performance problem.
Key points:
 An access plan of a query represents the database manager’s query execution model.
 The query optimizer selects the optimal access plan based on various information.
 The access plan is the key element to look at when analyzing performance problems of individual queries.
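One common way to inspect an access plan is DB2's explain facility. The sketch below assumes the explain tables have already been created (for example, with the EXPLAIN.DDL script shipped in sqllib/misc) and uses a hypothetical SALES table and database name:

```sql
-- Capture the access plan for a query into the explain tables
EXPLAIN PLAN FOR
  SELECT name, region FROM sales WHERE region = 'EAST';

-- Then format the most recently explained statement from the shell:
--   db2exfmt -d CMDB -1 -o plan.txt
```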

Indexes
An index can be viewed as a means to support efficient access to data in a database table. An index always covers a set of columns of the underlying table, and after it is defined it can be used by the query optimizer to choose an access


plan that makes efficient use of an index access instead of a costly table scan. An index implicitly defines an order of the entries in a table based on the values of the index columns. An index does not automatically speed up every query on the underlying table: an index can only speed up access to those columns of a table that are included in the index, and the query optimizer takes a variety of information into account to determine the optimal access plan. An index is just one among many aspects that the optimizer considers.

Technically, an index is an ordered set of pointers to rows in a base table. Each index is based on the values of data in one or more table columns. When an index is created, the database manager builds this object and maintains it automatically. Although it can speed up query execution, an index can slow down data manipulation such as INSERT, UPDATE, and DELETE on the table, due to synchronous maintenance of the index. Furthermore, an index can reduce the degree of parallel access by multiple users and applications, because DB2 might place locks on index entries when accessing data, similar to locks on table entries. These locks enable the database manager to isolate and protect parallel applications and users from interfering with each other in undesirable ways. Depending on the access pattern, this might even result in deadlocks that would not occur without the index. Thus, an index should be introduced to speed up a query only after carefully evaluating the potential trade-off: a negative impact on performance for data manipulation. In 11.4, “Monitoring and analysis tools for DB2” on page 295, we go into detail about how to determine whether an index should be introduced to speed up a query. In addition to the performance-related aspects of an index discussed here, an index can also be used for database design purposes.
If an index is characterized as UNIQUE, it does not allow multiple entries in the table with identical values of the index columns. Although the database manager automatically maintains the index so that its pointers to table entries are kept consistent and correct, an index might need some explicit maintenance. This is the case when, after many updates, the index becomes fragmented. In this situation, we might want to reorganize the index. (In 8.3, “Regular routine maintenance” on page 181, we describe how to do this.)


Key points:
 An index can be created to speed up a query (or a set of queries).
 An index creates an extra load on the database system when updating data in a table.
 An index might need some explicit maintenance for reorganization.
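A brief sketch of the statements involved (the table and index names are hypothetical):

```sql
-- Support efficient access on the REGION column; the optimizer can now
-- choose index access instead of a table scan where it is cheaper
CREATE INDEX ix_sales_region ON sales (region);

-- A UNIQUE index doubles as a design constraint: no duplicate SALES_ID
CREATE UNIQUE INDEX ix_sales_id ON sales (sales_id);

-- Explicit maintenance: rebuild fragmented indexes in a maintenance window
REORG INDEXES ALL FOR TABLE sales;
```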

Data placement
Data placement and the types of disks can play a significant role in the performance and availability of a database application. The overall objective is to spread the database effectively across as many disks as possible to try to minimize I/O wait. DB2 supports a variety of data placement strategies including mirroring, striping, and RAID devices. The administrator needs to understand the advantages and disadvantages of disk capabilities to arrive at the most appropriate strategy for an existing environment. Four aspects of disk-storage management affect performance:

 Division of storage
How you divide a limited amount of storage between indexes and data and among table spaces determines to a large degree how each will perform in different situations.

 Wasted storage
Wasted storage in itself might not affect the performance of the system that is using it, but wasted storage is a resource that can be used to improve performance elsewhere.

 Distribution of disk I/O
How well you balance the demand for disk I/O across several disk storage devices and controllers can affect how fast the database manager can retrieve information from disks.

 Lack of available storage
Reaching the limit of available storage can degrade overall performance.

You should ensure that the log subdirectory is mapped to different disks than those used for your data. This can be a substantial performance benefit, because the log files and database containers do not compete for the movement of the same disk heads. Also, a disk problem would then be restricted to your data or the logs, but not both.


Key points:
 The physical placement of data on disks and distribution over multiple disks has major impact on performance.
 A general goal should be to distribute data over multiple disks and to separate the database log from the data.
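Separating the log from the data can be sketched as follows (the path and database name are hypothetical; NEWLOGPATH takes effect the next time the database is activated):

```sql
-- Place the transaction logs on disks separate from the data containers
UPDATE DB CFG FOR CMDB USING NEWLOGPATH /db2logs/cmdb;
```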

Clustering
When evaluating a query and executing an access plan, a typical operation of the database manager retrieves records from a table with a given value for one or more of its columns. This directly reflects the “SELECT ... FROM ... WHERE ...” structure of SQL queries. If multiple entries match the given SELECT criterion, it is desirable to find as many of these entries on as few physical data pages as possible in order to minimize disk I/O. Thus, the concept of clustering deals with physically arranging data that is supposed to be retrieved sequentially such that this logically sequential data resides on sequential physical data pages. However, as data pages fill up over time, clustering cannot be guaranteed and data reorganization might be required. In a database system such as DB2, the concept of clustering is closely linked to the concept of indexes: the sequential order implicitly defined by an index is used as the clustering order. This implies that there is a maximum of one clustering index for each table. Physical arrangement of data in tables is completely within the responsibility of the database manager, so maintenance of the clustering order is done automatically. In an environment with heavy updates on data, this might result in measurable performance overhead, analogous to automatic index maintenance. Figure 2-7 on page 44 illustrates how a table with at least the two columns Region and Year may be clustered using an index on Region.


Figure 2-7 Table with clustering index

Key points:
 Clustering can be used to minimize I/O of query processing, assuming the query demands sequential access to logically sequential data (range queries).
 Clustering causes some performance overhead for data updates.
 Clustering might require some explicit maintenance for reorganization.
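A clustering index is declared with the CLUSTER keyword, and a table can have at most one. A sketch with hypothetical names:

```sql
-- Keep rows physically arranged in REGION order
CREATE INDEX ix_sales_region ON sales (region) CLUSTER;

-- If heavy updates degrade the cluster ratio, re-cluster during maintenance
REORG TABLE sales INDEX ix_sales_region;
```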

Multi-dimensional clustering
Multi-dimensional clustering extends the idea of physically locating data contiguously when this data is logically coherent. Despite what the name might suggest, this is not simply clustering with respect to more than one clustering dimension. It introduces a new and different organization of pointers to table entries that can improve performance through improved index efficiency and faster data retrieval. Whereas clustering maintains pointers to table data at the record level, multi-dimensional clustering uses pointers to contiguous data blocks. For each of these referenced data blocks, DB2 ensures that all records in the block have the same value with respect to every clustering dimension. Blocks can be extended so that clustering can be maintained as data is updated.


Figure 2-8 illustrates this concept based on the table known from Figure 2-7 on page 44, using columns Region and Year each as clustering dimensions.

Figure 2-8 Table with multi-dimensional clustering index

In this example both indexes, on Region as well as on Year, use pointers to contiguous blocks of data, making them small and very efficient.

Key points:
 Multi-dimensional clustering can significantly speed up certain types of queries.
 Multi-dimensional clustering does not require explicit maintenance because clustering is guaranteed by the database manager.
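An MDC table is declared with an ORGANIZE BY DIMENSIONS clause; the block indexes are created automatically by the database manager. A sketch matching the Region/Year example (the column types are assumptions):

```sql
CREATE TABLE sales (
  sales_id INTEGER NOT NULL,
  region   VARCHAR(10),
  year     INTEGER
)
ORGANIZE BY DIMENSIONS (region, year);
```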

Materialized query tables
A materialized query table (MQT) is a table whose definition is based on the result of a query, and whose data is in the form of precomputed results taken from one or more tables on which the MQT definition is based. An MQT can be viewed as a table that caches the result of the query; the MQT thus holds precomputed data redundantly. When you create an MQT, you can select whether you want to be responsible for updating the MQT and keeping it consistent with the query that forms the MQT, or whether you want to leave this up to the database manager. Furthermore, if you leave the database manager responsible for maintaining consistency of the MQT, you can select whether you want any update to the underlying tables reflected immediately in the MQT, or whether you want to explicitly request each update of the MQT. The decision about who is


made responsible for maintaining consistency, and when updates are propagated, is primarily guided by the functional requirements of your database application environment. From a performance point of view, it might be better to update the MQT in some maintenance period at night, but your application scenario might require that updates be reflected immediately.

When you have created an MQT, it can implicitly be reused when executing further queries. This is not restricted to exactly the same query: an MQT can be reused in the execution of any query that subsumes the query that forms the MQT. The decision whether to use the MQT or the query itself is made by the query optimizer as part of the general query execution process. The optimizer bases its decision on the estimated costs of access plans with and without the MQT. In addition to the performance improvement that comes from reusing query results, there is another option for a further performance boost: indexes can be added to MQTs to improve performance for certain queries. Automatic maintenance of the MQT during updates implies some performance overhead that you should consider when creating an MQT. In a situation where updates are rare and similar expensive queries are repeatedly run against the database, MQTs can be the technique of choice to resolve performance bottlenecks.

Key points:
 MQTs provide for storing precomputed query results redundantly in tables.
 The query optimizer makes use of MQTs whenever this results in a less expensive access plan.
 The database manager supports automatic maintenance of consistency of the MQT with the query that forms the MQT.
 Maintenance of MQT consistency, either user-controlled or automatic, implies some performance overhead when updating data.
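A sketch of a deferred-refresh MQT over a hypothetical SALES table; with these options, the database manager populates the MQT only when REFRESH TABLE is requested, for example in a nightly maintenance window:

```sql
CREATE TABLE sales_by_region AS
  (SELECT region, SUM(amount) AS total
     FROM sales
    GROUP BY region)
  DATA INITIALLY DEFERRED REFRESH DEFERRED;

-- Populate the MQT, or bring it up to date, explicitly
REFRESH TABLE sales_by_region;
```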

System catalog statistics
As discussed in “Query execution and access plan” on page 39, the query optimizer’s decision about the optimal access plan for query execution is guided by its knowledge of the database’s environment (such as disk speeds) and of the data in the database itself. Other than changes in configuration parameters, the database’s environment typically does not change much over time, and most changes are exposed to the query optimizer. But as data is loaded and updated in the database, the characteristics of this data can change significantly over time. This can cause the optimizer to select a completely inappropriate access


plan, which can result in severe performance problems. Thus it is very important to keep the optimizer’s knowledge about the data in the database up to date.

Important: Maintaining up-to-date statistics in a database is crucial for performance of the database system. It has been shown that many performance bottlenecks, especially in Content Manager environments, are due to outdated statistics.

Knowledge about the data is internally represented as statistics; for example, about the distribution of values in columns of a table, the selectivity and cluster ratio of indexes, the numbers of rows in tables, and the fragmentation of data. Collecting statistics is done by the database manager itself, but because this requires some locks on the data and consumes performance-related resources, it can interfere with users and applications simultaneously accessing the database. Thus, updating statistics in a database is considered an administrative task that must be controlled by a database administrator. The database administrator can either request this manually or specify an administrative time window during which the update of statistics is done automatically. In doing so, the administrator can also control the level of detail of the statistics that are collected. In 8.3, “Regular routine maintenance” on page 181, we describe how to do this. Whenever the nature of the data in the database has changed significantly (for example, after some initial load or after a series of updates), it is important to have the database manager update statistics to ensure that the query optimizer has the most accurate knowledge to make well-founded decisions about optimal access plans. This is also true if new objects such as indexes and tables are added to the database system, because otherwise the query optimizer will not have any statistics about these objects.
Key points:
 Statistics about the data in the database are the most important source of knowledge that the query optimizer uses to decide which access plan to select.
 Statistics must be maintained (updated) regularly, especially when data is updated in the database.
 The database manager supports manual as well as automatic update of statistics.
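Manually updating statistics is typically done with RUNSTATS (the schema and table name below are hypothetical); static packages should be rebound afterwards so that their stored access plans are recompiled against the new statistics:

```sql
RUNSTATS ON TABLE myschema.sales
  WITH DISTRIBUTION AND DETAILED INDEXES ALL;

-- From the shell, recompile stored access plans of static packages:
--   db2rbind CMDB -l rbind.log all
```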

Table space types: SMS versus DMS
As described in 2.1.2, “DB2 architecture overview” on page 29, DB2 supports two different types of table spaces: system managed space (SMS) and database


managed space (DMS). The actual storage of a table space is provided by containers, which can be either files in the file system or devices (such as a UNIX raw disk); SMS table spaces may use only file containers, whereas DMS table spaces can use file containers or device containers. The type of table space is determined when it is created. Both types come with certain advantages that you should consider before the table space is created. We give a brief overview of the trade-off between these two options. For completeness, and because of the importance of this topic, we do not restrict the discussion to performance-related aspects.

 File container advantage: The operating system might cache pages in the file system cache. Thus, even if some pages are not in a buffer pool, it does not require disk I/O to load and access them. That is, file system caching can eliminate I/O that would otherwise have been required. This can allow for a smaller buffer pool, and it might be advisable to increase the file system buffer in turn. Especially in the case of table spaces for long data (using LOB or LONG data types), the database manager does not cache the data at all in its buffers. Depending on the access pattern of an application, it might be an advantage for performance if such a table space is kept in file containers in order to make use of file system caching. Because system catalogs contain some LOB columns, you should keep them in SMS table spaces or in DMS file table spaces.

 Device container advantage: The location of the data on the disk can be controlled, if this is allowed by the operating system.

 SMS table space advantages:
– If file containers are used, space is not allocated by the system until it is required.
– Creating a table space requires less initial work, because you do not have to predefine the containers.

 DMS table space advantages:
– The size of a table space can be increased by adding or extending containers.
Existing data can be rebalanced automatically across the new set of containers to retain optimal I/O efficiency.
– If all table data is in a single table space, a table space can be dropped and redefined with less overhead than dropping and redefining a table.

As a general rule, a well-tuned set of DMS table spaces will outperform SMS table spaces (but it may require hard work to get there).


In 11.4, “Monitoring and analysis tools for DB2” on page 295, we demonstrate how to monitor and analyze whether indicators in your environment show that a different table space type might improve performance. In 8.4, “SMS to DMS table space conversion” on page 188, we discuss how to convert SMS table spaces into DMS table spaces.

Key points:
 The database manager supports different types of containers, which come with different characteristics with respect to performance and administrative overhead.
 The database manager supports different types of table spaces that correspond to the container types and come with different characteristics with respect to performance and administrative overhead.
 The type of a table space is determined when it is created; any later change of the type requires a migration.
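The table space type is fixed in the CREATE TABLESPACE statement; a sketch of both variants (the paths and sizes are hypothetical):

```sql
-- SMS: containers are directories; space is allocated only as needed
CREATE TABLESPACE app_sms
  MANAGED BY SYSTEM
  USING ('/db2/cmdb/sms1');

-- DMS: containers are preallocated files or raw devices (size in pages)
CREATE TABLESPACE app_dms
  MANAGED BY DATABASE
  USING (FILE '/db2/cmdb/dms1' 25600);
```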

Prefetching and I/O servers
As described in 2.1.2, “DB2 architecture overview” on page 29, prefetching deals with how the database manager proactively retrieves one or more pages from disk into a buffer pool in the expectation that they will be required by an application. Prefetching index and data pages into the buffer pool can help improve performance by reducing I/O wait time. In addition, parallel I/O enhances prefetching efficiency, for example when data is spread over multiple disks. The database management system uses dedicated threads (on Windows) or processes (on UNIX-based operating systems) to perform prefetching. While these prefetchers are used for asynchronous fetching of pages into buffer pools, the page cleaners are analogously used for asynchronous writing of pages from the buffer pool to disk when they are no longer needed. Prefetchers and page cleaners are also called I/O servers. The number of I/O servers used by the database management system can be adjusted to reflect the given database environment and the requirements of applications.

Key points:
 Prefetching is a built-in technique to minimize disk wait time for data reads.
 I/O between buffer pool and disk is generally done asynchronously.
 I/O can be parallelized using multiple I/O servers.
 Parallelizing I/O might improve performance.


Coordinator agents
As described in 2.1.2, “DB2 architecture overview” on page 29, each connection to a database corresponds to a coordinator agent on the server side that performs all the application requests. These agents can be created as needed, or created in advance and kept in a pool. Coordinator agents are owned by the database manager and shared among all databases it manages. The maximum number of applications that can be connected to the database simultaneously can be adapted, as can the maximum number of agents that the database manager can create. In fact, the maximum number of concurrent connections might even be higher than the maximum number of coordinator agents. This is known as database connection pooling and can be useful if concurrent connections tend to be idle on a regular basis (that is, when the application does not really interact with the database). There are certain performance-related aspects regarding the maximum number of connections and the maximum number of coordinator agents:

 Each coordinator agent consumes a certain amount of resources in the database environment (such as shared memory for various heaps) as well as in the underlying operating system. Especially in a UNIX environment, where each agent is implemented as a process, you should consider the consumption of system resources that comes with each process. There should not be more coordinator agents than are actually needed.

 Dynamic creation of a coordinator agent on demand consumes operating system resources and time. Again, especially in a UNIX environment, you should consider the overhead that comes with creating a process. On Windows-based platforms, agents are implemented as threads, which come with less overhead in terms of operating system resources. With respect to database resources, there is no difference between UNIX and Windows.

In 8.6, “Parameter tuning” on page 202, we show how to control the number of agents and connections.
Key points:
 Agent processes (or threads on Windows) are the entities in a database manager environment that perform the application’s requests.
 The number of pooled agents can be controlled, as can the maximum number of agents.
 The configuration of the number of agents in your database environment has an impact on the performance of the database.
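A sketch of the relevant database manager parameters (the values are hypothetical); setting MAX_CONNECTIONS higher than MAX_COORDAGENTS is what enables connection pooling through the connection concentrator:

```sql
UPDATE DBM CFG USING MAXAGENTS 400;
UPDATE DBM CFG USING NUM_POOLAGENTS 200;
UPDATE DBM CFG USING MAX_CONNECTIONS 1000 MAX_COORDAGENTS 200;
```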


Performance Tuning for Content Manager


Locking
In a database system, many applications typically work concurrently on the same data. Done in an uncontrolled way, this can lead to anomalies known as:

- Lost update
- Uncommitted read
- Non-repeatable read
- Phantom read

It is the responsibility of the database manager to protect applications against these anomalies. It does so by maintaining locks on individual database objects such as rows in a table, index entries, or even whole tables. If an application holds a lock on an object, the database manager ensures that no other application accesses that object in a way that would produce one of the above anomalies. It does this by suspending one of the applications from further execution, which in turn affects transaction throughput and visible performance. To minimize this impact, applications can specify the degree of protection that they require against interference from concurrent applications, using the concept of isolation levels.

Key points:
- Locking protects concurrent applications from interfering with each other and protects the consistency of data.
- The number and nature of objects locked by concurrent applications generally affects performance in terms of transaction throughput.
- Applications have some influence on the number and nature of locks that the database manager holds for them, but otherwise locking is done implicitly under the control of the database manager.
- The database manager uses certain configurable memory pools to store locks. Configuration of these memory pools affects the general locking behavior.
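The lost update anomaly, and how a lock prevents it, can be sketched with a deterministic interleaving of two simulated transactions. This is a conceptual illustration of the anomaly, not DB2's locking implementation; all names are illustrative.

```python
def transfer_interleaved(use_lock):
    """Two 'transactions' each add 10 to a shared balance.
    Without locking, a read-modify-write interleaving loses one update."""
    db = {"balance": 100}
    locked_by = None

    def read(tx):
        nonlocal locked_by
        if use_lock:
            assert locked_by in (None, tx), "lock conflict"
            locked_by = tx                  # acquire lock on the row
        return db["balance"]

    def write(tx, value):
        nonlocal locked_by
        db["balance"] = value
        if use_lock:
            locked_by = None                # release at 'commit'

    if use_lock:
        # With locking, T2 cannot read until T1 commits: serial order.
        v1 = read("T1"); write("T1", v1 + 10)
        v2 = read("T2"); write("T2", v2 + 10)
    else:
        # Interleaved: both read 100; T2's write overwrites T1's.
        v1 = read("T1")
        v2 = read("T2")
        write("T1", v1 + 10)
        write("T2", v2 + 10)                # lost update: T1's +10 disappears
    return db["balance"]

assert transfer_interleaved(use_lock=False) == 110   # one update lost
assert transfer_interleaved(use_lock=True) == 120    # both applied
```

The cost of the protection is visible in the sketch: the locked variant forces one transaction to wait, which is exactly the throughput impact described above.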

2.1.4 Routine maintenance Routine maintenance of your system is essential in maintaining optimum performance. In this section, we provide an overview of what should be included as part of routine maintenance (ideally performed during a scheduled maintenance window during which there is little to no user access to the database) as well as how often it should be scheduled. Your schedule can be more or less frequent depending on the amount of changes within your system.

Chapter 2. Content Manager base products



See Chapter 8, “Tuning DB2 for Content Manager” on page 171 for details about routine maintenance, including the applicable commands. The following routine maintenance tasks are important:

- Keeping up with the latest software fix packs
- Monitoring the system
- Cataloging statistics (runstats and rebind)
- Reorganizing tables
- Pruning the log file
- Pruning the diagnostics file

Keeping up with the latest software fix packs
Software bugs are continually found and reported by customers, and new enhancements are continually developed; DB2 is no exception. It is extremely important to keep your software at the latest fix pack level so that you benefit from these changes. When you call external support, one of the first things you will be asked is whether your software is at the latest fix pack level. If not, you will be asked to update it before troubleshooting continues, so that support does not try to solve a problem that has already been fixed.

Monitoring the system
Monitoring your system on a regular basis is vital to ensuring that it runs in a healthy, performance-oriented way. The database monitor switches must be turned on long enough for DB2 to collect the data. We recommend regularly taking a DB2 snapshot of the database manager (instance) and of each database, and keeping the snapshot data in a file or spreadsheet, or loading it into a DB2 table, for historical and trend analysis. Analyze the results for immediate issues such as a high number of package cache overflows or database files closed. Perform a trend analysis to determine whether a problem is building: Is the buffer pool hit ratio decreasing? Are sort overflows steadily increasing? If so, take appropriate corrective action.
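The trend analysis described above — for example, tracking the buffer pool hit ratio across snapshots — can be sketched as follows. The hit ratio formula is the classic one for DB2 buffer pools; the counter values are illustrative data, not real snapshot output.

```python
def bufferpool_hit_ratio(logical_reads, physical_reads):
    """Buffer pool hit ratio:
    (logical reads - physical reads) / logical reads * 100."""
    if logical_reads == 0:
        return 100.0
    return (logical_reads - physical_reads) / logical_reads * 100.0

# Counters collected from successive snapshots (illustrative data).
history = [
    {"logical": 100_000, "physical": 4_000},
    {"logical": 120_000, "physical": 9_000},
    {"logical": 150_000, "physical": 21_000},
]
ratios = [bufferpool_hit_ratio(s["logical"], s["physical"]) for s in history]
# A steady decline suggests the buffer pool may be too small.
declining = all(b < a for a, b in zip(ratios, ratios[1:]))
assert declining
```

Keeping the snapshot counters in a table, as recommended above, makes this kind of comparison across maintenance windows straightforward.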

Cataloging statistics (runstats and rebind)
Statistical data stored in the system catalogs helps the optimizer choose the best access plan for queries. Executing the runstats command to update this statistical data has a tremendous impact on performance and should be one of the first things done (in conjunction with a rebind command) whenever system performance is perceived to be poor. We recommend executing the runstats and rebind commands:

- At frequent regular intervals for tables whose content changes continually. This might be daily or weekly.



- After creating a table or an index.
- After any operation that adds or changes data in a significant number of table rows. Such operations include batch updates and data loading.

These utilities could have a negative impact on performance and should be run during low user activity times, preferably during scheduled routine maintenance.

Reorganizing tables
After many changes to table data, logically sequential data might reside on nonsequential physical data pages, which forces the database manager to perform additional read operations to access the data. This increased I/O has a negative impact on performance. Reorganizing a table re-clusters the data to match the primary index for efficient access and reclaims space. Reorganizing a table takes more time than running statistics. A table reorganization might be necessary when either of the following is true:

- The table has a high volume of insert, update, and delete activity.
- Executing reorgchk indicates a need to reorganize the table.

We recommend executing the reorgchk command at least weekly. If the amount of change to your data is high, you might want to run it daily. Reorganize any table that has been flagged.
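A reorgchk-style decision can be sketched as follows. The thresholds and field names here are illustrative assumptions, not DB2's actual reorgchk formulas — they only show the shape of the check: flag a table when its data no longer matches the index order or when too many rows have overflowed.

```python
def needs_reorg(cluster_ratio, overflow_rows, total_rows):
    """Simplified reorg check, loosely modeled on what reorgchk reports.
    Thresholds are illustrative only."""
    if cluster_ratio < 80:          # data no longer clusters with the index
        return True
    if total_rows and overflow_rows / total_rows > 0.05:
        return True                 # too many rows moved to overflow pages
    return False

assert needs_reorg(cluster_ratio=62, overflow_rows=0, total_rows=10_000)
assert needs_reorg(cluster_ratio=95, overflow_rows=900, total_rows=10_000)
assert not needs_reorg(cluster_ratio=95, overflow_rows=100, total_rows=10_000)
```

In practice you would read these indicators directly from reorgchk output rather than compute them yourself; the sketch only captures the flagging logic.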

Pruning the log file
When you use the Log Retain strategy, log files are created continually as needed, filling up the disk space of the log directory in the process. When the file system or disk is full, DB2 is unable to create log records and starts writing error messages continually into the diagnostics log file (db2diag.log) until the problem is resolved. For this reason, old log files must be pruned or removed regularly before the file system fills up. The frequency of this action depends on how quickly the space in the log directory is consumed. Log files should not be deleted until you are absolutely sure that they will not be needed for any type of recovery. We recommend moving non-active log files from the log directory to a “logs to be deleted” directory located on a different disk. Depending on your recovery strategy, when you are sure that the logs are no longer necessary, they can be deleted from this directory. TSM can also be used as a destination for archiving log files.
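The staging step described above — moving non-active log files to a "logs to be deleted" directory — can be sketched as a small script. The file names and the first-active-log marker are illustrative; in a real system the first active log is reported by the database configuration, and you must confirm logs are not needed for recovery before deleting anything.

```python
import shutil
import tempfile
from pathlib import Path

def stage_inactive_logs(log_dir, staging_dir, first_active):
    """Move log files that sort strictly before the first active log
    (e.g. S0000041.LOG) to a 'logs to be deleted' directory."""
    staging = Path(staging_dir)
    staging.mkdir(parents=True, exist_ok=True)
    moved = []
    for log in sorted(Path(log_dir).glob("S*.LOG")):
        if log.name < first_active:          # strictly before: non-active
            shutil.move(str(log), staging / log.name)
            moved.append(log.name)
    return moved

# Example run against a temporary directory layout:
with tempfile.TemporaryDirectory() as tmp:
    logs = Path(tmp, "logs"); logs.mkdir()
    for n in (39, 40, 41, 42):
        Path(logs, f"S00000{n}.LOG").touch()
    moved = stage_inactive_logs(logs, Path(tmp, "to_delete"), "S0000041.LOG")
    assert moved == ["S0000039.LOG", "S0000040.LOG"]
```

Because DB2 log names are zero-padded to a fixed width, plain string comparison orders them correctly, which is what the sketch relies on.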

Pruning the diagnostics file
DB2 continually writes all diagnostic information to the db2diag.log file, which is used by DB2 customer support for troubleshooting purposes. You control the level (and amount) of detailed information that is written to the file by using the DIAGLEVEL configuration parameter. Valid values are from one (minimal information) to four (maximum information), with three being the default. Increasing the amount of diagnostic output can result in performance degradation and insufficient storage conditions in your database instance file system. The file grows indefinitely and consumes space, so it must be maintained. Instead of manually editing the file to trim it, or simply deleting it, we recommend that you periodically move the file to another disk. (DB2 automatically creates a new db2diag.log file.) Each moved file overlays the previously moved file, keeping one version of historical data.

Attention: DB2 is key underlying software of Content Manager, and it plays a critical role in the performance of a Content Manager system. In this section, we merely touched the very basics of the DB2 components and architecture. In 11.4, “Monitoring and analysis tools for DB2” on page 295, we introduce some of the performance monitoring tools for DB2. Chapter 6, “Best practices for Content Manager system performance” on page 143, and especially Chapter 8, “Tuning DB2 for Content Manager” on page 171, discuss DB2-specific tuning for a Content Manager system. To better understand DB2 and DB2 performance tuning, we recommend reading the publications mentioned at the beginning of this DB2 Universal Database overview section.
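The db2diag.log rotation described above — move the file aside, overlaying the previous copy, and let DB2 create a fresh one — can be sketched as follows. Paths are illustrative assumptions.

```python
import shutil
import tempfile
from pathlib import Path

def rotate_diag_log(diag_path, archive_dir):
    """Move db2diag.log to an archive directory, overlaying the previously
    moved copy so that one historical version is kept. DB2 creates a fresh
    db2diag.log on its next write."""
    diag = Path(diag_path)
    archive = Path(archive_dir)
    archive.mkdir(parents=True, exist_ok=True)
    target = archive / diag.name        # same name: overlays the old copy
    shutil.move(str(diag), target)
    return target

with tempfile.TemporaryDirectory() as tmp:
    diag = Path(tmp, "db2dump", "db2diag.log")
    diag.parent.mkdir()
    diag.write_text("diagnostic records\n")
    target = rotate_diag_log(diag, Path(tmp, "diag_archive"))
    assert not diag.exists()
    assert target.read_text() == "diagnostic records\n"
```

Keeping exactly one archived copy, as the text recommends, bounds disk usage while still preserving recent history for support.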

2.2 WebSphere Application Server overview
IBM WebSphere Application Server Version 5.1 is a comprehensive Java 2 Platform, Enterprise Edition (J2EE) 1.4 and Web services technology-based application server. It integrates enterprise data and transactions, and provides the core software to deploy, integrate, and manage Web applications. Through a rich application deployment environment, you can build, manage, and deploy dynamic Web applications, handle high transaction volumes, and extend back-end business data and applications to the Web. WebSphere Application Server Network Deployment offers advanced Web services that can operate across disparate application frameworks and business-to-business (B2B) applications, and provides virtually any-to-any connectivity with transaction management and application adaptability. With multiple configuration options, WebSphere Application Server supports a wide range of scenarios, from simple administration of a single server to a clustered, highly available, high-volume environment with edge-of-network services. These specialized configuration options give you the flexibility to respond to an ever-changing marketplace without the costs of migrating to a different technology base.

In the following sections, we provide a general overview of J2EE, the WebSphere Application Server architecture, and the architectural components. The material is extracted from existing WebSphere publications. For more detailed information, refer to:

- IBM WebSphere Application Server, Version 5.1 Information Center: http://publib.boulder.ibm.com/infocenter/wasinfo/v5r1/index.jsp
- IBM WebSphere Application Server, Version 5.1: Getting started, SC31-6323: http://www.ibm.com/support/docview.wss?uid=pub1sc31632301
- WebSphere Application Server Information Center: http://www.ibm.com/software/webservers/appserv/was/library

2.2.1 J2EE overview
Java 2 Platform, Enterprise Edition (J2EE) is the standard for developing, deploying, and running enterprise applications. WebSphere Application Server supports all of the J2EE APIs. The J2EE standard applies to all aspects of designing, developing, and deploying multi-tier, server-based applications. The J2EE standard architecture consists of:

- A standard application model for developing multi-tier applications
- A standard platform for hosting applications
- A compatibility test suite for verifying that J2EE platform products comply with the J2EE platform standard
- A reference implementation providing an operational definition of the J2EE platform

The J2EE platform specification describes the runtime environment for a J2EE application. This environment includes application components, containers, and Resource Manager drivers. The elements of this environment communicate with a set of standard services that are also specified.

J2EE platform roles
The J2EE platform also defines a number of distinct roles performed during the application development and deployment life cycle:

- Product provider designs and makes available for purchase the J2EE platform, APIs, and other features defined in the J2EE specification.



- Tool provider provides tools used for the development and packaging of application components.
- Application component provider creates Web components, enterprise beans, applets, or application clients for use in J2EE applications.
- Application assembler takes a set of components developed by component providers and assembles them in the form of an enterprise archive (EAR) file.
- Deployer is responsible for deploying an enterprise application into a specific operational environment.
- System administrator is responsible for the operational environment in which the application runs.

Product providers and tool providers have a product focus. Application component providers and application assemblers focus on the application. Deployers and system administrators focus on the runtime. These roles help identify the tasks to be performed and the parties involved. Understanding this separation of roles is important for explaining the approach that should be taken when developing and deploying J2EE applications.

J2EE benefits
The J2EE standard empowers customers. Customers can compare J2EE offerings from vendors and know that they are comparing parallel products. Comprehensive, independent compatibility test suites ensure vendor compliance with J2EE standards. Some benefits of deploying to a J2EE-compliant architecture are:

- A simplified architecture based on standard components, services, and clients, that takes advantage of write-once, run-anywhere Java technology.
- Services providing integration with existing systems, including Java Database Connectivity (JDBC); Java Message Service (JMS); Java Interface Definition Language (Java IDL); JavaMail™; and Java Transaction API (JTA and JTS) for reliable business transactions.
- Scalability to meet demand, by distributing containers across multiple systems and using database connection pooling, for example.
- A better choice of application development tools, and components from vendors providing off-the-shelf solutions.
- A flexible security model that provides single sign-on support, integration with legacy security systems, and a unified approach to securing application components.



The J2EE specifications are the result of an industry-wide effort that has involved, and still involves, a large number of contributors. IBM alone has contributed to defining more than 80% of the J2EE APIs.

Application components and their containers
The J2EE programming model has four types of application components, which reside in four types of containers in the application server:

- Enterprise JavaBeans™ components, residing in the EJB™ container
- Servlets and JavaServer™ Pages™ files, residing in the Web container
- Application clients, residing in the application client container
- Applets, residing in the applet container

See 2.2.3, “Architectural components” on page 60 for a description of the components and containers. J2EE containers provide runtime support for the application components. There must be one container for each application component type in a J2EE application. By having a container between the application components and the set of services, J2EE can provide a federated view of the APIs for the application components. A container provides the APIs to the application components used for accessing the services. It also might handle security, resource pooling, state management, naming, and transaction issues.

Standard services
The J2EE platform provides components with a set of standard services that they can use to interact with each other:

- HTTP and HTTPS
- Remote Method Invocation/Internet Inter-ORB Protocol (RMI/IIOP)
- Java Interface Definition Language (Java IDL)
- Java Database Connectivity (JDBC)
- Java Message Service (JMS)
- Java Naming and Directory Interface™ (JNDI)
- JavaMail and JavaBeans Activation Framework (JAF)
- Java Transaction API (JTA and JTS)
- XML
- J2EE Connector Architecture
- Resource Managers

For a description of each standard service, see the Sun Web page: http://java.sun.com/products



J2EE packaging One of the most significant changes introduced by the J2EE specification is how application components are packaged for deployment. During a process called assembly, J2EE components are packaged into modules. Modules are then packaged into applications. Applications can be deployed on the application server. Each module and application contains a J2EE deployment descriptor. The deployment descriptor is an XML file providing instructions for deploying the application.
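The packaging step — bundling components plus an XML deployment descriptor into a deployable archive — can be sketched with a minimal in-memory WAR-like archive. This is a sketch of the packaging idea only; real assembly tools also validate the descriptor against the J2EE schema, and the file contents here are illustrative.

```python
import io
import zipfile

def package_web_module(descriptor_xml, files):
    """Assemble a minimal WAR-like archive in memory: a deployment
    descriptor at WEB-INF/web.xml plus the module's other files."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as war:
        war.writestr("WEB-INF/web.xml", descriptor_xml)
        for name, content in files.items():
            war.writestr(name, content)
    return buf.getvalue()

descriptor = "<web-app><servlet-name>snoop</servlet-name></web-app>"
war_bytes = package_web_module(descriptor, {"index.html": "<html></html>"})

# A WAR is a standard zip archive, so it can be inspected the same way.
with zipfile.ZipFile(io.BytesIO(war_bytes)) as war:
    assert sorted(war.namelist()) == ["WEB-INF/web.xml", "index.html"]
```

The same pattern repeats one level up: modules (WAR, EJB JAR) are themselves packaged, with an application-level descriptor, into an EAR for deployment.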

2.2.2 Three-tier architecture
WebSphere Application Server provides the application logic layer in a three-tier architecture, enabling client components to interact with data resources and legacy applications. Collectively, three-tier architectures (see Figure 2-9) are programming models that enable the distribution of application functionality across three independent systems, typically:

- Client components running on local workstations (tier 1)
- Processes running on remote servers (tier 2)
- A discrete collection of databases, Resource Managers, and mainframe applications (tier 3)

Figure 2-9 Typical WebSphere three-tier architecture



Tier 1 (presentation) Responsibility for presentation and user interaction resides with the first-tier components. These client components enable the user to interact with the second-tier processes in a secure and intuitive manner. WebSphere Application Server supports several client types. Clients do not access the third-tier services directly. For example, a client component provides a form on which a customer orders products. The client component submits this order to the second-tier processes, which check the product databases and perform tasks needed for billing and shipping.

Tier 2 (application logic layer)
The second-tier processes are commonly referred to as the application logic layer. These processes manage the business logic of the application, and are permitted access to the third-tier services. The application logic layer is where the bulk of the processing work occurs. Multiple client components can access the second-tier processes simultaneously, so this application logic layer must manage its own transactions. Continuing with the previous example, if several customers attempt to place an order for the same item, of which only one remains, the application logic layer must determine who has the right to that item, update the database to reflect the purchase, and inform the other customers that the item is no longer available. Without an application logic layer, client components access the product database directly. The database is then required to manage its own connections, typically locking out a record that is being accessed. A lock can occur simply when an item is placed into a shopping cart, preventing other customers from even considering it for purchase. Separating the second and third tiers reduces the load on the third-tier services, can improve overall network performance, and enables more elegant connection management.

Tier 3 (data/resource) The third-tier services are protected from direct access by the client components by residing within a secure network. Interaction must occur through the second-tier processes.

Communication among tiers All three tiers must be able to communicate with each other. Open, standard protocols, and exposed APIs simplify this communication. Client components can be written in any programming language, such as Java or C++, and can run on any operating system, as long as they can speak with the application logic layer. Likewise, the databases in the third tier can be of any design, as long as the application layer can query and manipulate them. The key to this architecture is the application logic layer.



2.2.3 Architectural components In this section, we cover major components in IBM WebSphere Application Server, including the application server and its containers, the HTTP server plug-in and embedded HTTP handling, and the virtual host configuration.

HTTP server and HTTP plug-in IBM WebSphere Application Server works with an HTTP server to handle requests for servlets and other dynamic content from Web applications. The terms HTTP server and Web server are used interchangeably in this section. The HTTP server and application server communicate using the WebSphere HTTP plug-in for the HTTP server. The HTTP plug-in uses an easy-to-read XML configuration file to determine whether a request should be handled by the Web server or the application server. It uses the standard HTTP protocol to communicate with the application server. It can also be configured to use secure HTTPS, if required. The HTTP plug-in is available for popular Web servers.

Application server
An application server in WebSphere is the process that is used to run servlet-based and EJB-based applications, providing both a Web container and an EJB container. The application server collaborates with the Web server to return customized responses to client requests. Application code, including servlets, JavaServer Pages (JSP™) files, Enterprise JavaBeans (EJB) components, and their supporting classes, runs in an application server. In keeping with the J2EE component architecture, servlets and JSP files run in a Web container, and enterprise beans run in an EJB container. You can define multiple application servers, each running in its own Java virtual machine (JVM).

- EJB container
The EJB container provides the runtime services needed to deploy and manage EJB components, which are known as enterprise beans. It is a server process that handles requests for both session and entity beans. The enterprise beans (inside EJB modules) installed in an application server do not communicate directly with the server; instead, an EJB container provides an interface between the enterprise beans and the server. Together, the container and the server provide the bean runtime environment. The container provides many low-level services, including threading and transaction support. From an administrative viewpoint, the container manages data storage and retrieval for the contained beans. A single container can hold more than one EJB JAR file.



An EJB module is used to package one or more enterprise beans into a single deployable unit. An EJB module is represented by a JAR file that contains the enterprise bean classes and interfaces and the bean deployment descriptors. An EJB module can be used as a stand-alone application, or it can be combined with other EJB modules, or with Web modules, to create a J2EE application. An EJB module is installed and run in an enterprise bean container.

- Web container
Servlets and JavaServer Pages (JSP) files are server-side components that are used to process requests from HTTP clients, such as Web browsers. They handle presentation and control of the user interaction with the underlying application data and business logic. They can also generate formatted data, such as XML, for use by other application components. The Web container processes servlets, JSP files, and other types of server-side files. Each Web container automatically contains a single session manager. When handling servlets, the Web container creates a request object and a response object, then invokes the servlet service method. The Web container invokes the servlet destroy() method when appropriate and unloads the servlet, after which the JVM performs garbage collection. A Web container handles requests for servlets, JSP files, and other types of files that include server-side code. It creates servlet instances, loads and unloads servlets, creates and manages request and response objects, and performs other servlet management tasks. WebSphere Application Server provides Web server plug-ins for supported Web servers; these plug-ins pass servlet requests to Web containers. A Web module represents a Web application. It is used to assemble servlets and JSP files, as well as static content such as HTML pages, into a single deployable unit. Web modules are stored in Web archive (WAR) files (.war), which are standard Java archive files.
A Web module contains one or more servlets, JSP files, and other files. It also contains a deployment descriptor that declares the content of the module, stored in an XML file named web.xml. The deployment descriptor contains information about the structure and external dependencies of Web components in the module, and describes how the components are to be used at runtime. A Web module can be used as a stand-alone application, or it can be combined with other modules (other Web modules, EJB modules, or both) to create a J2EE application. A Web module is installed and run in a Web container.



- Application client container
Application clients are Java programs that typically run on a desktop computer with a graphical user interface (GUI). They have access to the full range of J2EE server-side components and services. The application client container handles Java application programs that access enterprise beans, Java Database Connectivity (JDBC), and Java Message Service message queues. The J2EE application client program runs on client machines. This program follows the same Java programming model as other Java programs; however, the J2EE application client depends on the application client runtime to configure its execution environment, and uses the Java Naming and Directory Interface (JNDI) name space to access resources.

- Applet container
An applet is a client Java class that typically runs in a Web browser, but can also run in a variety of other client applications or devices. Applets are often used in combination with HTML pages to enhance the user experience provided by a Web browser. They can also be used to shift some of the processing workload from the server to the client. The applet container handles Java applets embedded in an HTML document that resides on a client machine remote from the application server. With this type of client, the user accesses an enterprise bean in the application server through the Java applet in the HTML document.

Embedded HTTP server
A useful product feature is the HTTP handling capability embedded in the application server, which enables an HTTP client to connect directly to the application server. Alternatively, as described earlier, an HTTP client can connect to a Web server, and the HTTP plug-in forwards the request to the application server. The embedded HTTP transport is a service of the Web container: the HTTP client connects to the Web server, and the HTTP plug-in forwards the request to the embedded HTTP transport, using either HTTP or HTTPS between the plug-in and the embedded HTTP transport.

Administrative Console The WebSphere Administrative Console provides an easy-to-use GUI to a WebSphere cell, runs in a browser, and connects directly to a WebSphere server or to a Deployment Manager node that manages all nodes in a cell.



Virtual host
A virtual host is a configuration that enables a single host machine to resemble multiple host machines. Resources that are associated with one virtual host cannot share data with resources associated with another virtual host, even if the virtual hosts share the same physical machine. Each virtual host has a logical name and a list of one or more DNS aliases by which it is known. A DNS alias is the TCP/IP host name and port number that is used to request the servlet (for example, yourHostName:80). The default ports and aliases are:

- The default alias is *:80, using an external HTTP port that is not secure.
- Aliases of the form *:9080 use the embedded HTTP port that is not secure.
- Aliases of the form *:443 use the secure external HTTPS port.
- Aliases of the form *:9443 use the secure embedded HTTPS port.

When a servlet request is made, the server name and port number entered into the browser are compared to a list of all known aliases in an effort to locate the correct virtual host and serve the servlet. If no match is found, an error (404) is returned to the browser. WebSphere Application Server provides a default virtual host, aptly named “default_host,” with some common aliases, such as the machine's IP address, short host name, and fully qualified host name. The alias comprises the first part of the path for accessing a resource such as a servlet. For example, it is “localhost:80” in the request http://localhost:80/servlet/snoop. Virtual hosts enable the administrator to isolate, and independently manage, multiple sets of resources on the same physical machine.
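The alias-matching step described above can be sketched as follows, with '*' in an alias matching any host name and an unmatched request producing the 404 case. The alias data is illustrative, modeled on the default aliases listed in the text.

```python
def resolve_virtual_host(virtual_hosts, host, port):
    """Match a request's host:port against each virtual host's alias list.
    Returns the virtual host name, or None (the server answers 404)."""
    for vhost, aliases in virtual_hosts.items():
        for alias in aliases:
            alias_host, _, alias_port = alias.partition(":")
            if alias_port == str(port) and alias_host in ("*", host):
                return vhost
    return None

vhosts = {"default_host": ["*:80", "*:9080", "localhost:9443"]}
assert resolve_virtual_host(vhosts, "www.example.com", 80) == "default_host"
assert resolve_virtual_host(vhosts, "localhost", 9443) == "default_host"
assert resolve_virtual_host(vhosts, "www.example.com", 8081) is None
```

The last case illustrates the 404 behavior: a request on a port with no configured alias never reaches any virtual host.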

2.2.4 Performance-related tools and concepts
WebSphere Application Server provides a set of features and tools that are closely related to the performance of the system. In this section, we give an overview of the performance-related concepts:

- Performance Monitoring Infrastructure
- Performance Advisors
- Dynamic caching
- Java Virtual Machine Profiler Interface
- Parameter hot list



Performance Monitoring Infrastructure
For the WebSphere environment, the Performance Monitoring Infrastructure (PMI) captures performance data with minimal performance impact, so that the data can be incorporated into an overall monitoring solution. PMI uses a client-server architecture. The server collects performance data in memory within WebSphere Application Server. This data consists of counters such as servlet response time and data connection pool usage. A client can then retrieve that data using a Web client, a Java client, or a Java Management Extensions (JMX™) client. A client is an application that retrieves performance data from one or more servers and processes the data. Clients can include:

- Graphical user interfaces (GUIs) that display performance data in real time
- Applications that monitor performance data and trigger different events according to the current values of the data
- Any other application that needs to receive and process performance data

The PMI components and infrastructure have been extended and updated in WebSphere Application Server V5.1 to support the new management structure and to comply with the Performance Data Framework of the J2EE Management Specification. WebSphere Application Server contains Tivoli Performance Viewer, a Java client that displays and monitors performance data. PMI is composed of components for collecting performance data on the application server side and components for communicating between runtime components and between the clients and servers. Figure 2-10 shows the overall PMI architecture. On the right side, the server updates and keeps PMI data in memory. On the left side, a Web client, a Java client, and a JMX client retrieve the performance data.



Figure 2-10 General PMI architecture

Performance Advisors The Performance Advisors use the PMI data to suggest configuration changes to ORB service thread pools, Web container thread pools, connection pool size, persisted session size and time, prepared statement cache size, and session cache size. The Runtime Performance Advisor runs in the application server process, and the other advisor runs in the Tivoli Performance Viewer (TPV).

Dynamic caching
Server-side caching techniques have long been used to improve performance of Web applications. In general, caching improves response time and reduces system load. Until recently, caching has been limited to static content, which is content that rarely changes. Performance improvements are greatest when dynamic content is also cached. Dynamic content is content that changes frequently, or data that is personalized. Caching dynamic content requires proactive and effective invalidation mechanisms, such as event-based invalidation, to ensure the freshness of the content. Implementing a cost-effective caching technology for dynamic content is essential for the scalability of today's dynamic, data-intensive Web infrastructures.

Chapter 2. Content Manager base products

65


There are two basic types of content from the caching point of view:

- Static content does not change over long periods of time. The content type can be HTML, JSP rendering less dynamic data, GIF, and JPG.
- Dynamic content includes frequently updated content (such as stock exchange rates), as well as personalized and customized content. Although it is changing content, it must also be stable over a long enough time for meaningful reuse to occur. However, if some content is very frequently accessed, such as requests for pricing information of a popular stock, then even a short time of stability might be long enough to benefit from caching.
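Event-based invalidation can be sketched in a few lines: each cached entry registers the events it depends on, and an incoming event evicts exactly those entries. This is a minimal illustration with invented class and method names, not the WebSphere dynamic cache API:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/** Minimal sketch of a dynamic-content cache with event-based
 *  invalidation. Names are illustrative only. */
public class DynamicCache {
    private final Map<String, String> entries = new HashMap<>();
    // Maps an invalidation event (e.g. "price-update") to the cache
    // keys whose content depends on that event.
    private final Map<String, Set<String>> dependencies = new HashMap<>();

    /** Cache content under a key, recording the events it depends on. */
    public void put(String key, String content, String... events) {
        entries.put(key, content);
        for (String event : events) {
            dependencies.computeIfAbsent(event, e -> new HashSet<>()).add(key);
        }
    }

    /** Returns the cached content, or null on a cache miss. */
    public String get(String key) {
        return entries.get(key);
    }

    /** Event-based invalidation: evict every entry that depends on the event. */
    public void onEvent(String event) {
        Set<String> stale = dependencies.remove(event);
        if (stale != null) {
            for (String key : stale) {
                entries.remove(key);
            }
        }
    }
}
```

The point of the sketch is the freshness guarantee: a price-update event removes exactly the pages built from the old price, so a later request rebuilds them from current data.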

Java Virtual Machine Profiler Interface

The Java Virtual Machine Profiler Interface (JVMPI) enables the collection of information, such as data about garbage collection, from the Java virtual machine (JVM) that runs the application server. The Tivoli Performance Viewer leverages JVMPI to enable more comprehensive performance analysis. JVMPI is a two-way function call interface between the JVM and an in-process profiler agent. The JVM notifies the profiler agent of various events, such as heap allocations and thread starts. The profiler agent can activate or deactivate specific event notifications based on the needs of the profiler. JVMPI supports partial profiling by enabling the user to choose which types of profiling information to collect and to select certain subsets of the time during which the JVM is active. JVMPI moderately increases the performance impact.
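JVMPI itself is a native interface, but the kind of data it surfaces, such as heap usage and garbage collection counts, can be read in ordinary Java through the standard java.lang.management API available in later JVMs. A minimal sketch:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

/** Reads heap and GC statistics from the running JVM via the
 *  standard java.lang.management API (not JVMPI itself). */
public class JvmStats {

    /** Returns the currently used heap, in bytes. */
    public static long usedHeapBytes() {
        MemoryUsage heap =
                ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        return heap.getUsed();
    }

    /** Sums collection counts across all garbage collectors. */
    public static long totalGcCount() {
        long count = 0;
        for (GarbageCollectorMXBean gc :
                ManagementFactory.getGarbageCollectorMXBeans()) {
            long c = gc.getCollectionCount();
            if (c > 0) {
                count += c; // a value of -1 means the count is undefined
            }
        }
        return count;
    }
}
```

Sampling these values periodically gives the same style of trend data that a profiler-based monitor plots, without attaching a native agent.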

Parameter hot list

Each WebSphere Application Server process has several parameters that influence application performance. You can use the WebSphere Application Server Administrative Console to configure and tune applications, Web containers, EJB containers, application servers, and nodes in the administrative domain. Always review the tuning parameter hot list, which is a subset of the tuning parameter index. These parameters have an important impact on performance. Because these parameters are application dependent, the parameter settings for specific applications and environments can vary.

2.2.5 Routine maintenance

WebSphere Application Server collects data about runtime and applications through the Performance Monitoring Infrastructure (PMI). Performance data can then be monitored and analyzed with a variety of tools.



Routine maintenance of WebSphere can be performed with the following tools:

- Tivoli Performance Viewer (TPV) is a GUI performance monitor for WebSphere Application Server. TPV can connect to a local or a remote host. Connecting to a remote host minimizes the performance impact to the application server environment. The Performance Advisor in TPV provides advice to help tune systems for optimal performance and gives recommendations about inefficient settings by using collected PMI data. The Performance Advisor in TPV provides more extensive advice than the Runtime Performance Advisor. For example, TPV provides advice about setting the dynamic cache size, setting the JVM heap size, and using the DB2 Performance Configuration Wizard.
- IBM Tivoli Monitoring for Web Infrastructure: Provides best-practice monitoring of the key elements of WebSphere Application Server. This is the inside-out view, which enables administrators to quickly address problems before they affect users.
- IBM Tivoli Monitoring for Transaction Performance: Provides a monitoring perspective unique from that of the users. This approach is the outside-in view, which verifies that end-to-end components provide a positive user experience. This tool monitors performance of actual and synthetic transactions, and verifies that the delivered content meets predefined guidelines.

Attention: WebSphere Application Server is key underlying software of Content Manager, and it plays a critical role in the performance of a Content Manager system. In this section, we merely touched the very basics of J2EE, the WebSphere architecture, and its architectural components. In 11.5, “Monitoring tools for WebSphere Application Server” on page 339, we introduce the performance tuning monitoring tools for WebSphere.
Chapter 6, “Best practices for Content Manager system performance” on page 143, and especially Chapter 9, “Tuning WebSphere for Content Manager” on page 255, discuss WebSphere-specific tuning for a Content Manager system. To better understand WebSphere and WebSphere performance tuning, we recommend visiting the WebSphere Application Server Information Center and the publications we mentioned earlier in this section.

2.3 Tivoli Storage Manager overview

IBM Tivoli Storage Manager (TSM), now a part of the IBM TotalStorage® Open Software Family, ensures availability of business applications by providing data protection and resource utilization that scales with business needs. TSM is an enterprise-class recovery solution, protecting business-critical data from laptops to enterprise systems, regardless of where it resides. TSM supports business continuity by helping to automate disaster recovery planning and recovery execution based on business priorities. TSM integrates the power of applications-aware technology in the recovery of leading database, content management, and workflow applications to ensure that the entire business process is protected.

In the following sections, we provide a very general overview of TSM capabilities, advantages, and architecture. Our materials are extracted from existing TSM publications. For more detailed information, see:

- Tivoli Storage Management Concepts, SG24-4877
- IBM Tivoli Storage Manager: A Technical Introduction, REDP0044
- Tivoli Storage Manager for AIX - Administrator's Guide, GC32-0768
- Tivoli Storage Manager home page: http://www.ibm.com/software/tivoli/products/storage-mgr/

2.3.1 TSM capabilities overview

TSM provides an interface to a range of storage media, including disks, tapes, and optical disks. It enables Content Manager systems to support different storage devices, drives, and libraries. TSM provides the following capabilities:

- Backup and restore
- Disaster preparation and recovery
- Archive and retrieve
- Hierarchical Storage Management
- Application protection, 24 hours a day, 365 days a year

Backup and restore

Complete data protection starts with data backups. Backups are a copy of the active online data stored on offline storage. If an online storage device fails, a data error occurs, or someone accidentally deletes a file, the offline copy of that data can be restored to online storage. TSM provides backup and restore functionality, using multiple techniques to reduce data transfer sizes to the minimum possible. These techniques reduce the total time required for both data backups and, more importantly, data restores.
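The core of such data-reduction techniques is incremental transfer: only what changed since the last backup crosses the network. The following sketch (our own class and method names, not a TSM interface) shows the selection step in its simplest form:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

/** Sketch of the idea behind reducing data transfer sizes: send only
 *  files whose modification time is newer than the copy already on
 *  the server. TSM's actual progressive backup methodology is far
 *  more sophisticated than this. */
public class IncrementalBackup {

    /** Given the online files (name -> modification time) and the
     *  times recorded at the last backup, return the names that must
     *  be sent to the server again. */
    public static List<String> filesToSend(Map<String, Long> online,
                                           Map<String, Long> lastBackup) {
        List<String> toSend = new ArrayList<>();
        for (Map.Entry<String, Long> file : online.entrySet()) {
            Long backedUp = lastBackup.get(file.getKey());
            if (backedUp == null || file.getValue() > backedUp) {
                toSend.add(file.getKey()); // new, or changed since last backup
            }
        }
        return toSend;
    }
}
```

On a typical file server where only a small fraction of data changes daily, this selection step is what turns a full-volume copy into a short nightly transfer.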

Disaster preparation and recovery

Complete data protection also involves disaster preparation and recovery. Local copies of data protect against discrete failures or errors in equipment, storage, or people. But disasters tend to happen to entire facilities, not just a portion of the equipment inside those facilities. Using Tivoli Storage Manager, you can prepare an additional copy of the active data for safekeeping at an off-site location to provide extra insurance against disasters. If a disaster strikes and destroys online storage and computers, the off-site copy of the active data can be restored to new computers to get the business up and running quickly.

Archive and retrieve

Tivoli Storage Manager goes beyond data backups to include data archiving. Archiving inactive data is an effective way to reduce your online storage costs. The archive process creates a copy of a file or a set of files representing an end point of a process for long-term storage. The files can remain on the local storage media or can be deleted. The customer determines the retention period (how long an archive copy is to be retained). TSM also provides retrieval functionality to fetch the archived data for use. The retrieval process locates the copies in the archival storage and places them back into a customer-designated system or workstation.

Hierarchical Storage Management

Tivoli Storage Manager includes an automated version of archive called Hierarchical Storage Management (HSM), which provides the automatic and transparent movement of operational data from the user system disk space to a central storage repository. If the user accesses this data, it is dynamically and transparently restored to the client storage. As with archiving, HSM removes data from online storage and puts it on less expensive offline storage. Unlike archiving, HSM leaves a data stub on online storage that shows the name of the removed file. With this stub, you can easily access the offline data, albeit much more slowly than if it were online. HSM capabilities are automated. HSM watches online data files to see how often they are used. If a file is not opened for an administrator-specified length of time, it is moved to offline storage, leaving only the data stub behind. For a business with large amounts of data that do not need to be online at all times, HSM is the best way to save on storage costs.
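The stub-and-recall behavior described above can be modeled in a few lines. In this toy sketch all names are invented for illustration (real HSM works transparently at the file system level); migrated data lives in an "offline" map and is recalled on read:

```java
import java.util.HashMap;
import java.util.Map;

/** Toy model of HSM migration: migrated files leave a stub online,
 *  and reading a stub triggers a (slow) transparent recall from
 *  offline storage. */
public class HsmStore {
    private static final String STUB = "STUB";
    private final Map<String, String> online = new HashMap<>();
    private final Map<String, String> offline = new HashMap<>();

    public void write(String name, String data) {
        online.put(name, data);
    }

    /** Migrate: move the data offline, keep only a stub online. */
    public void migrate(String name) {
        String data = online.remove(name);
        if (data != null) {
            offline.put(name, data);
            online.put(name, STUB); // placeholder that preserves the name
        }
    }

    public boolean isStub(String name) {
        return STUB.equals(online.get(name));
    }

    /** Read: if only the stub is present, recall the data transparently. */
    public String read(String name) {
        if (isStub(name)) {
            online.put(name, offline.remove(name)); // transparent recall
        }
        return online.get(name);
    }
}
```

The caller never sees the migration: a read of a migrated file returns the same data as before, just more slowly, which is exactly the user-visible contract of HSM.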

Application protection - 24x365

By directly controlling the time and method by which a data transfer to offline storage occurs, the application can continue operating with no interruption, 24 hours a day, 365 days a year.



2.3.2 TSM architecture

TSM is implemented as a client/server software application, consisting of a server software component, a backup/archive client, a storage agent, and other complementary Tivoli and vendor software products. Figure 2-11 shows the main components of Tivoli Storage Manager.

Figure 2-11 Tivoli Storage Manager architecture

The TSM server provides a secure environment, including automation, reporting, and monitoring functions, for the storage of client data. It also provides storage management policies and maintains all object inventory information. The TSM client software and complementary products implement data management functions such as data backup and recovery, archival and hierarchical storage management, and disaster recovery. The client software can run on different systems, including laptop computers, PCs, workstations, and server systems. The client and server software can also be installed on the same system for a local backup solution. The storage agent software in conjunction with the server software enables the implementation of LAN-free backup solutions exploiting the SAN infrastructure. It is also possible to define server hierarchies or multiple peer-to-peer servers in order to provide a multi-layer storage management solution or an electronic vaulting solution.



TSM server

One of the principal architectural components of the TSM server is its built-in relational database. The TSM database was designed especially for the task of managing data, and it implements zero-touch administration. All policy information, logging, authentication and security, media management, and object inventory are managed through this database. Most of the fields are externalized through the TSM high-level administration commands, SQL SELECT statements, or, for reporting purposes, by using an ODBC driver. This database is fully protected with software mirroring, roll-forward capability, and its own management and online backup and restore functions.

For storing the managed data, the TSM server uses a storage repository. The storage repository can be implemented using any combination of supported media, including magnetic and optical disk, tape, and robotic storage devices, which are locally connected to the server system or are accessible through a SAN. To exploit SAN technology, the TSM server has features to dynamically share SAN-connected automated tape library systems among multiple TSM servers.

The TSM server provides built-in device drivers for more than 300 different device types from every major manufacturer. It can utilize operating system device drivers and external library manager software. Within the storage repository, the devices can operate stand-alone or can be linked to form one or more storage hierarchies. The storage hierarchy is not limited in the number of levels and can span multiple servers using so-called virtual volumes.

Backup and archive client

Data is sent to the TSM server using the TSM backup/archive client and complementary Tivoli and non-Tivoli products. These products work with the TSM server base product to ensure that the data you need to store is managed as defined. The TSM backup and archive client, included with the server, provides the operational backup and archival function. The client implements the patented progressive backup methodology, adaptive sub-file backup technology, and unique record-retention methods.

The backup and archive clients are implemented as multi-session clients, which means that they can exploit the multi-threading capabilities of modern operating systems. This enables backup and archive operations to run in parallel to maximize the throughput to the server system.

Depending on the client platform, the backup and archive client may provide a graphical, command line, or Web user interface. Many platforms provide all three interfaces. The command line interface is useful for experienced users and



enables generation of backup or restore scripts for scheduled execution. The graphical interface is designed for ease of use for the end user for ad hoc backups and restores. The Web client is especially useful for those clients, such as NetWare, where no native GUI is available, or for performing remote backup and restore operations (for example, in a help desk environment). Some clients (including some UNIX variants and Microsoft platforms) use a new plug-in architecture to implement an image backup feature for raw device backup. This allows you to back up and recover data stored in raw (that is, not a file system) volumes. It also provides an additional method to make point-in-time backups of entire file systems as single objects (image backup) and recover them in conjunction with data backed up by using the progressive backup methodology.

Storage agent

The TSM storage agent supports LAN-free backup solutions using a SAN infrastructure. The storage agent dynamically shares SAN-connected tape libraries and disks with the TSM server, and it can write and read client data directly to and from server-owned storage media. The storage agent receives data objects via the TSM API and communicates with the server over the LAN using TCP/IP to exchange control information and metadata about the objects being backed up. The data movement itself utilizes the LAN-free path over the SAN to write directly to the storage media. Thus, the data movement is removed from both the LAN and the server processor, for potentially greater scalability. The storage agent is available for selected backup and archive clients as well as for backing up popular databases and applications.

Administration

For the central administration of one or more TSM server instances, TSM provides command line and Java-based administration interfaces, also called administration clients. Using the unique enterprise administration feature, it is possible to configure, monitor, and manage all server and client instances from one administrative interface, known as the enterprise console. It includes:

- Enterprise configuration
- Administrative command routing
- Central event-logging functions

The enterprise configuration allows TSM server configurations to be defined centrally by an administrator and then propagated to other servers.



Administrative command routing enables administrators to issue commands from one TSM server and route them to other target servers. The commands are run on the target servers, and the command output is returned and formatted on the server where the command was issued. In an enterprise environment with multiple TSM servers, client and server events can be logged to a central management server through server-to-server communications, thereby enabling centralized event management and automation.

Externalized interfaces

TSM provides a data management API, which can be used to implement application clients that integrate popular business applications, such as databases or groupware applications, with TSM. The API also adheres to an open standard (XBSA) and is published to enable customers and vendors to implement specialized or custom clients for particular data management needs or non-standard computing environments.

Tivoli Storage Manager is key underlying software of Content Manager, and it plays an important role in the performance of a Content Manager system. In this section, we merely touched on the very basics of TSM. In Chapter 10, “Tuning TSM for Content Manager” on page 275, we discuss TSM-specific tuning for a Content Manager system.



Chapter 3. Performance tuning basics

In this chapter we introduce performance tuning basics. We define what performance is, how it is measured, some of the reasons for performance tuning, and when to plan and tune your system for performance. In addition, we describe the performance methodology that you should follow for the entire life cycle of a Content Manager system, and the performance improvement process you should use to maintain and improve an existing system. Finally, we include general performance tuning guidelines that you should keep in mind while doing performance tuning. We recommend that you complete this chapter before you read the remaining chapters of this book.

© Copyright IBM Corp. 2003, 2006. All rights reserved.

77


3.1 Introduction

To perform means to begin a task and to carry the task through to completion. Performance refers to the way in which a system performs or functions given a particular workload. In the Content Manager world, a Content Manager system must run efficiently and effectively, and it must be able to perform the expected workload within a business-required time frame. Designing, configuring, and tuning a Content Manager system to achieve the desired results is essential for your business to succeed.

Performance measurement

Performance is typically measured in terms of response time, throughput, and availability.

Response time is the elapsed time between when a request is submitted and when the response from that request is returned. Examples include how long a database query takes and how long it takes to display a retrieved document.

Throughput is a measure of the amount of work over a period of time. In other words, it is the number of workload operations that can be accomplished per unit of time. Examples include database transactions per minute, kilobytes of a file transferred per second, and total number of legacy documents imported per hour.
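These two measurements reduce to simple arithmetic. As a sketch (the class and method names are ours, for illustration only):

```java
/** Worked example of the two metrics: response time is elapsed time
 *  per request; throughput is completed work per unit of time. */
public class Metrics {

    /** Average response time in milliseconds, given the elapsed time
     *  of each individual request. */
    public static double avgResponseMs(long[] elapsedMs) {
        long sum = 0;
        for (long t : elapsedMs) {
            sum += t;
        }
        return (double) sum / elapsedMs.length;
    }

    /** Throughput in operations per second: operations completed
     *  divided by the elapsed wall-clock time. */
    public static double throughputPerSec(long operations, long elapsedMs) {
        return operations * 1000.0 / elapsedMs;
    }
}
```

For example, 500 database transactions completed in 10 seconds is a throughput of 50 transactions per second, regardless of how long any single transaction took.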

Major performance bottlenecks

Four major performance bottlenecks are:

- CPU
- Memory
- Disks
- Network

Source of performance problems

System performance is affected by many factors, including:

- The available hardware and software in the system
- The configuration of hardware and software in the system
- The number of users, the types of users, and the number of concurrent users in the system
- The number of applications running in the system, the types of running applications, and the application workload

Performance problems might be caused by poor application and system design, inadequate system resources such as CPU, memory, and disks, or non-optimal tuning of system resources.



Performance tuning goals

Some of the goals we want to accomplish through performance tuning include:

- Remove a bottleneck.
- Increase existing throughput: Process a larger or more demanding workload without additional processing cost, such as adding new hardware or upgrading existing software.
- Improve response time: Obtain faster system response times without additional processing cost.
- Increase system availability: Extend the availability of a system, thereby increasing the productivity of the system.
- Reduce processing cost: Reduce the number of users or the length of their working hours needed to process the existing workload.

Performance tuning is a complex and iterative process. It involves establishing quantitative objectives, constant system monitoring, and selective and careful tuning to ensure that the objectives are met over time.

Publication references

Many publications have been written about performance tuning for Content Manager, DB2, WebSphere, and AIX. These guides are well written and contain very useful information. Most of the material presented in this chapter is extracted and summarized from these publications:

- Database Performance Tuning on AIX, SG24-5511
- IBM DB2 Universal Database Version 8.2 Administration Guide: Performance, SC09-4821
- DB2 UDB/WebSphere Performance Tuning Guide, SG24-6417
- IBM Content Manager v8.3 Performance Tuning Guide
  http://www.ibm.com/support/docview.wss?uid=swg27006452
- CMv8.3 Performance Monitoring Guide
  http://www.ibm.com/support/docview.wss?uid=swg27006451
- CMv8.3 Performance Troubleshooting Guide
  http://www.ibm.com/support/docview.wss?uid=swg27006450

We include what we think is important for a quick overall view of the performance tuning basics for the Content Manager system. If time allows, we recommend that you read these publications because we found them extremely helpful.

Chapter 3. Performance tuning basics

79


3.2 When to plan and tune your system

Everyone wants value for their money and tangible benefit for the cost of performance tuning. When should you plan and tune your system for performance? You need to plan and tune your system in any of the following situations:

- Planning and designing a new system. Keep the system performance goal in mind during the initial planning and designing phase of a system. Many performance problems arise from poor planning and poor design. It is hard to tune a system to correct design problems.
- Installing a new system. A newly built system requires initial tuning for maximum performance before going into production. In most cases, however, the system might already be in production, because it is difficult to generate user workload artificially or to predict real user workload and work patterns. Ideally, the number of users and the workload are still growing and have not reached maximum expected capacity. This allows tuning before system response times become unacceptable.
- Regular maintenance. Regular periodic monitoring and tuning should be standard practice. It may be a good idea to perform a system performance review on a quarterly, bi-annual, or yearly interval. Be proactive in identifying problem machines, applications, and systems. This will help avoid potential crises that halt business activities.
- User complaint or a crisis just occurred. Users are complaining, or a crisis in performance or response time just occurred. Based on the severity of the performance problem, you might need to identify the bottleneck or issue immediately, recommend and run a solution that might include tuning, and then work out procedures for avoiding these types of issues in the future.
- Upgrading an existing system or changes in workload. New resources might be added to the existing system or replace some existing resources, or additional workload might be demanded of the existing system. For example, you need to add a new Resource Manager to accommodate new and additional digital content. The current system must go through tuning activities for the new system configuration.



If you are among the fortunate few, you are currently in the planning and designing phase of your future system. We strongly recommend you read through 3.3, “Performance methodology” on page 81 to get you started the right way. It is extremely important for you to understand how to design a system that maximizes potential system performance. If you already have an existing Content Manager system that is up and running in a production environment, your main task is to maintain or improve performance, and resolve any crisis or problems along the way. In this situation, we advise you to read the performance methodology, and understand the principles behind it. Most important, we recommend that you read through 3.4, “Performance improvement process” on page 83, which provides a structured way to maintain and improve your existing system performance.

3.3 Performance methodology

The Content Manager performance team has recommended a list of best practices in the IBM DB2 Content Manager V8.3 Enterprise Edition Performance Tuning Guide at:

http://www.ibm.com/support/docview.wss?uid=swg27006452

This guide is useful during the entire life cycle of a Content Manager system. The best practices tuning guide is incorporated into Chapter 6, “Best practices for Content Manager system performance” on page 143.

Note: If you are planning and designing a new Content Manager system, read it through to understand and follow the methodology presented. If you are working with an existing system, we strongly recommend reading it anyway.

In this section, we extract the performance methodology from the Performance Tuning Guide. The primary goal is to avoid surprises, start on the right path from day one, and routinely maintain your system to ensure optimal performance. Performance and scalability cannot be achieved by performance tuning alone, but tuning is one key element of a Content Management application's lifecycle. To build and maintain high performance and scalability requires a full lifecycle performance methodology, from pre-sales through installation to production, with the primary goal of avoiding surprises. The Content Manager performance methodology has a five-stage life cycle. By setting objectives early, performance tuning before going into production, and establishing processes that monitor key performance metrics, the methodology



helps ensure achievement of initial scalability objectives and provides information for capacity planning as workloads change and grow. The five stages of the performance methodology are:

1. Planning for performance and scalability. Document the projected system topology and the projected workload, and define measurable performance and scalability objectives. Perform initial capacity planning and sizing estimates.

2. Making solution design choices and performance trade-offs. Understand topology choices (types of client: eClient, Document Manager, Client for Windows, custom client), feature choices (such as replication, high availability, text search, and versioning), and their performance implications.

3. Performance tuning. Plan for an initial tuning period to maximize confidence and reduce risk, using the tools and techniques from the monitoring and troubleshooting phases to meet the objectives set in the planning phase. Focus on effective tuning of Content Manager server calls, memory, disk I/O, and network bandwidth. The sections that follow provide key parameters in various Content Manager components that give you the greatest performance impact.

4. Monitoring and maintaining performance. Maintain a performance profile of the workload metrics and resource utilizations on the Content Manager servers. Monitor over time to observe trends before they become a problem.

5. Performance troubleshooting. When performance issues arise, use a disciplined and organized approach to solve the problem using the performance profile, data, and performance tuning guidelines.

Attention: If you are dealing with a new system not yet in production, plan for an initial tuning period to maximize performance and reduce risk before going into production. If possible, use automated test tools to drive a multi-user test load based on your projected workload. During this tuning period, focus on one area at a time and change only a small number of tuning parameters.
Run the test workload to evaluate the effect of each set of changes before making additional tuning changes.



3.4 Performance improvement process

If you are working with a running Content Manager system, your primary goal is to improve system performance through performance tuning. This requires planning, strategy, methodology, resources, time, and money. Be sure to read through the performance methodology in the previous section before you continue.

The performance improvement process is an iterative, long-term approach to monitoring and tuning your system for performance. Depending on the results of monitoring, you and your performance team should adjust the configuration of the system resources and applications to maximize system performance, reduce down time, and avoid potential problems or a crisis. We recommend the following performance improvement process:

1. Gather business requirements on performance and user input.
2. Understand the system topology and workload.
3. Define performance objectives.
4. Establish performance indicators for the major constraints in the system.
5. Develop a performance monitoring plan.
6. Run the plan.
7. Evaluate system performance, analyze the measured outcome, and determine whether the objectives have been met. If the objectives are met, consider simplifying the performance monitoring plan by reducing the number of measurements, because performance tuning can be resource intensive. Iterate the entire process from step 6 when necessary.
8. If the objectives are not met, identify the areas and major causes of the problems.
9. Decide where and how changes should be made in the hardware or software configuration to address the problems and make a major impact. Understand, consider, and make trade-off decisions. When appropriate, re-assess the performance objectives; modify them if necessary so that they are realistic and achievable, yet still acceptable.
10. Adjust the system configuration according to the solution from the previous step. Make one change at a time. Work on one level at a time. If the system has already reached its optimal performance, or no other configuration change can make any significant impact, consider upgrading your hardware or software, or redesigning the application. Remember to follow the performance tuning methodology covered in later sections.
11. Iterate the entire process from step 6.



Periodically, and especially after significant changes to your system or workload:

- Re-examine your objectives and indicators.
- Refine your monitoring plan and tuning methodology.

3.5 General performance tuning guidelines

When you are working on improving system performance, we recommend that you follow this set of general guidelines for performance tuning, a collection of wisdom from performance tuning experts:

- Establish quantitative, measurable, and realistic objectives.
- Understand and consider the entire system.
- Change one parameter at a time.
- Measure and reconfigure by levels.
- Consider design and redesign.
- Remember the law of diminishing returns.
- Recognize performance tuning limitations.
- Understand the configuration choices and trade-offs.
- Do not tune just for the sake of tuning.
- Check for hardware as well as software problems.
- Put fall-back procedures in place before you start tuning.

Establish quantitative, measurable, and realistic objectives

Performance is measured in response time and throughput. Performance tuning objectives must be quantitative, measurable, and realistic. Units of measurement include one or more of the following: response time for a given workload, transactions per second, I/O operations, and CPU use. System performance can also be measured in terms of throughput: how much of a specific workload, such as loading and archiving, can be completed over a specified time.
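As a small illustration of turning raw measurements into these units, the sketch below computes average and 95th-percentile response time and throughput from a list of timed transactions. The function and its timing figures are invented for illustration; they are not part of any Content Manager tooling.

```python
# Illustration only: derive the objective units named above (response
# time and transactions per second) from a sample of timed transactions.

def performance_metrics(durations_s, window_s):
    """durations_s: per-transaction response times (seconds) collected
    over a measurement window of window_s seconds."""
    ordered = sorted(durations_s)
    avg = sum(ordered) / len(ordered)
    # nearest-rank 95th percentile of response time
    p95 = ordered[int(round(0.95 * (len(ordered) - 1)))]
    tps = len(ordered) / window_s        # throughput
    return {"avg_response_s": avg, "p95_response_s": p95, "tps": tps}
```

For example, 20 transactions completing in a 10-second window give a throughput of 2 transactions per second, whatever their individual response times; the percentile figure captures the slow tail that an average hides.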

Understand and consider the entire system

Never tune one parameter or one portion of a system in isolation. Before you make any adjustments, understand how the entire system works and consider how your changes could affect the entire system.

Change one parameter at a time

Change one performance tuning parameter at a time to help you assess whether the changes have been beneficial. If you change multiple parameters at once, you might have difficulty evaluating which changes contribute to which results. In addition, it might be difficult for you to assess the trade-offs you have made. Every time you adjust a parameter to improve one area, there is a good chance that it affects at least one other area that you might not have considered. Changing only one parameter at a time gives you a benchmark to evaluate whether the result is what you want to accomplish.

84

Performance Tuning for Content Manager

Measure and reconfigure by levels

Tune one level of your system at a time, for the same reasons that you would change one parameter at a time. Use the following list of levels within a Content Manager system as a guide:

- Hardware
- Hardware configuration
- Operating system
- Library Server configuration
- Resource Manager configuration
- Database manager
- Database configuration
- WebSphere Application Server
- Application configuration

Consider design and redesign

Design plays an important role in system performance. If the design of a system is poor, it might be impossible to improve performance enough to achieve the business requirements. When you are trying to improve a system's performance, consider the impact of the design on the system. Sometimes you need to go back to the drawing board, review the entire system design, and work with architects and developers to redesign the application or system to achieve the desired performance. There might be uncertainty about whether redesign counts as part of performance tuning; nevertheless, design should be considered and, if necessary, redesign should follow.

Remember the law of diminishing returns

The greatest performance benefits usually come from a few important changes made in the initial effort. Minor changes in later stages usually produce smaller benefits and require more effort. Always remember the law of diminishing returns: stop when the performance effort no longer justifies the outcome and the outcome is already acceptable.

Recognize performance tuning limitations

Tuning can improve performance dramatically when a system is encountering a performance bottleneck that can be corrected. At other times, tuning makes no additional difference because the system is already performing at its optimal limit. There is a point beyond which tuning can no longer help, and you must recognize this limit. When this situation arises, the best way to improve performance might be to add disks, CPU, or memory, or to replace the existing hardware with faster, higher-performance hardware.



Understand the configuration choices and trade-offs

Nearly all tuning involves trade-offs among system resources and their performance. It is important to understand the configuration choices and trade-offs, and to decide where you can afford to make a trade-off and which resources can bear additional load.

Do not tune just for the sake of tuning

Make sure that you have clear objectives and concentrate on the sources of the performance problems. Make changes that address and relieve those problems. Do not tune just for the sake of tuning. If you make changes to an area that is not the primary cause of the performance problems, they will have little or no effect on the resolution of the problem, and may complicate any subsequent performance tuning. Again, make sure you have clear objectives.

Check for hardware and software problems

Some performance problems can be corrected by applying service to your hardware, your software, or both. Do not spend excessive time monitoring and tuning your system when simply applying service might make the effort unnecessary.

Put fall-back procedures in place before you start tuning

Some tuning can cause unexpected performance results. If a change leads to poorer performance, it should be reversed and alternative tuning tried. If the former setup is saved in a way that it can be simply recalled, backing out of the incorrect change becomes much simpler.



Chapter 4. Planning for performance

To deliver good performance, a Content Manager system requires sufficient system resources to run the necessary workload. One of the most important tasks from the first day is to plan for performance. To determine how much system resource will be required, we need to estimate the workload the business application is likely to create, then translate that into workload for the Content Manager server (or servers). When we have an estimate of the load, we can compare it to the measured workloads tested on particular hardware configurations. In this chapter we run through the principles of sizing a system and look at the tools and techniques available to estimate the workload generated by a Content Management solution.

© Copyright IBM Corp. 2003, 2006. All rights reserved.

87


4.1 Introduction

Eventually, after all of the tuning and system design has been optimized, system performance comes down to the ability of the hardware resources to process commands, access data, and communicate across a network. Unfortunately, the hardware resources are the first components acquired when building a system. Although Content Manager provides the flexibility to expand and take advantage of more hardware that is added into the system, it is usually more cost effective to get the correct hardware resources in the first place. For new systems, we need to be able to determine the required resources even before the detailed system design has been done. If you already have an existing system, you might want to perform the sizing exercise to determine whether the existing hardware configuration can support new workload or projected growth.

How can we determine what system resources are required? We need to accurately estimate the current workload and its rate of growth, then translate that into hardware and software configurations by comparing this workload to measured systems and workloads. This is a critical exercise when implementing new systems, but it should also be performed periodically for existing systems as part of a capacity management review. To create a capacity plan, we must:

- Understand the current process and data.
- Translate the current process to the new data model.
- Analyze the impact of the new workload.
- Estimate the storage requirements.
- Perform benchmark measurements.
- Forecast future workload and system performance.

An organization considering a new Content Manager application should examine its current methods of processing documents and data, and should have some business goals defined for the system. The task then is to use attributes, resource objects, documents, folders, links, authorization schemes, and work processes to develop a model that best represents the organization's data flow. The organization must understand its business process and user needs well enough to translate the existing process into a Content Manager data model and document routing and workflow process. See 5.5.3, "Data model" on page 115 for details. Having mapped the existing process, you might be able to enhance it with new capabilities provided by an electronic Content Manager system.

The data model you have developed and the data volumes measured from the current processes enable you to calculate projected activities in the Content Manager system. To automate some of the calculations, sizing support is available to IBM representatives through TechLine. Your IBM representative can access it from this Web site:

http://w3.ibm.com/support/americas/techline/sizewise.html

Sizing should not be done in a vacuum. You need to analyze the environment before and after any sizing exercise to ensure that the results match the real-world environment for which you are planning.

Goal of the sizing exercise

With a data model that properly represents business data and processes, and an understanding of the volume of documents in these processes, you can effectively size the system. The ultimate goal of the sizing exercise is to estimate:

- Storage requirements for the objects on disk, optical, or other media
- Disk requirements for the databases, prerequisite software, and other requirements on the servers
- The processing power of the servers, along with the memory recommendations required to support the workload and deliver the response time required

4.2 Understanding current processes and data

To start the capacity planning exercise, we need to understand how the existing system is dealing with this content. This includes:

- Document types and counts

  Documents should be grouped and categorized by type (for example, invoices or claims). These groupings will be based on the way the documents are to be treated for security, indexing data, and storage requirements. The groups will probably translate to Content Manager item types.

- Document arrival rate

  When, where, and how many documents are entering the system? Although annual document volumes are often used as a benchmark, it is dangerous to apply these statistics without knowing document arrival peaks and valleys. It is critical to identify the peak arrival rate and size the system accordingly. It is also useful to understand the locations and formats of the documents that can be captured. Are all application forms sent to a central mail room and processed there, or are they sent to individuals in branch offices?

- Document usage

  When, where, and how is each kind of document used? For example, one type of form might go through a business process, following a specific workflow. The workflow process comprises seven different activities or steps by different people. At each workflow step, the document is viewed by the person performing the activity. This means that each form is retrieved at least seven times. That does not seem like much, but if there are 10,000 of these forms arriving each day, then there are 70,000 retrievals happening each day.

4.3 Translating current processes to the new data model

When sizing the system, first you must determine what item types will make up the system. This can be the most difficult task in sizing the system. You need to understand how the customer currently uses these documents, and how the customer wants to conduct business with the Content Manager system. Develop the data model that will define the documents as item types and describe what attributes and objects the documents have. If folders or links are to be used, determine what they represent to the business and how they fit into the process.

When choosing attributes for your item types, your primary concerns are identifying data that will be searched, and fields that will distinguish one document from another when a list of search results is displayed. When a user looks for an item in the system, what information about that item does the user already have? In a customer service application, when a customer initiates a query, does the customer already have a customer number, a reference number, or just the customer name? Is there a need to search by the date sent, received, processed, published, or a combination of these values?

Remember that the Library Server tables (specifically the item type root component or child component tables) will have a column for every key field defined, and there will be one row in the root component table for every item in the item type, and possibly many rows in the child component table or tables. Avoid attaching an attribute for everything you can think of to each item. Although you might want everything about the item to be usable for searching, it could lead to masses of unnecessary data on the Library Server and, as a consequence, increased search times and more data to back up and maintain.

4.4 Analyzing the impact of a new workload

Every action that is performed by a client talking to a Library Server uses some small amount of the resources of the Library Server. The key to estimating the total load on the server is to estimate the total number of actions the client community must perform to support the customer's business requirements. For instance, if you create 10,000 documents per day, that is 10,000 store requests on the Library Server. If users retrieve each document five times, that is 50,000 retrieve requests. You can determine the number of searches, folder opens, and document route actions per day by asking appropriate questions of the customer, in their terms and in the context of their existing process. The following questions deal with numbers and sizes of documents, folders, and item types, and the amount of stress placed on the servers and workstations.
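The arithmetic in this example is simple enough to capture in a small helper. The sketch below is an illustration using the chapter's own figures, not a Content Manager API; the optional search and routing terms are assumptions for categories the chapter discusses later.

```python
# Worked version of the example above: 10,000 documents created per
# day, each retrieved 5 times, means 10,000 store requests plus
# 50,000 retrieve requests against the Library Server.

def daily_library_server_actions(docs_per_day, retrieves_per_doc,
                                 searches_per_day=0, routes_per_doc=0):
    """Rough count of client actions hitting the Library Server per
    day; searches and routing steps default to zero if unknown."""
    stores = docs_per_day
    retrieves = docs_per_day * retrieves_per_doc
    routes = docs_per_day * routes_per_doc
    return stores + retrieves + routes + searches_per_day
```

With the chapter's figures this gives 60,000 Library Server actions per day before searches and document routing are counted.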

Questions for new workload analysis

You could have documents that vary greatly in size and other characteristics. You might want to answer each of the following questions by item type:

- How many documents and folders of each type will be stored in the system? How many documents per day will users create in the system? How many folders per day will be created? The use of folders will depend on the data model developed for this application. Documents can be grouped in folders, and folders placed inside other folders. Also, note that one document can exist in multiple folders. Do users have old documents that will be scanned into the system? Is there a back-capture exercise to account for? Do we need to migrate an existing solution? Are we bulk loading old documents and images from a file system?

  Note: This is the most important question to answer, because it affects all of the estimates.

- What is the size of the documents? How many bytes of data are required to hold the average document? If we are scanning data, this is the size of the document image file scanned at the required resolution. If this is an electronic document, then what is the average file size? Sample the existing data to determine this. If there are different types of documents, establish an average size and count for each type of document. Understand the size of each part that makes up a document if there is more than one. For example, there might be a high-resolution image part of 500 KB, a low-resolution image part of 40 KB, and another part that has a plain ASCII text description of 5 KB. The size of the documents and parts has the greatest effect on the amount of final media storage needed, and on the network capacity needed to transmit the objects when they are stored or retrieved.

- How many days will documents reside on disk? Along with the number of objects and the size of the objects, this question relates to the amount of disk required for storage before migration to other media (TSM). Typically, Content Manager implementations assume that new documents will reside on disk for a set number of days while they are actively worked on and frequently retrieved. Afterwards, they are migrated off to slower media, such as optical, for archiving purposes, where they are less likely to be retrieved but are still needed for auditing purposes.

- Will any documents be replicated to other Resource Managers? If so, how many copies will be held? If the documents are duplicated, they will take up twice as much storage space. These copies have to be counted as well.

- Are you using the out-of-the-box Windows client, the eClient, or are you writing your own custom application? The Windows client and the eClient make assumptions about the document model. The eClient requires a mid-tier server. Will you convert document formats on the mid-tier server?

- How many users will access the system? This includes both LAN and Web-based users. This affects the per-user interactions with the system.

- Is there a target platform for the servers? Many customers have hardware platform preferences for the servers, and this can affect the size and the number of required machines.

- What information will be used to index the documents and folders (the number and the size of attributes)? This translates to the data that is held in the Library Server database, and hence its size. Each item type creates at least one Library Server database table (ICMUT01nnnnnn) that will have a column for each of these attributes, and a row for every item. In the case of child components, there might be multiple rows for each item. These tables are searched when retrieving items by attribute values. They also hold system-defined and maintained attributes.

  In Content Manager, the term "indexing data" can mean the parametric data that describes the image, document, or file, and is stored as rows and columns in the database. The term can also apply to database indexes created to help the database operate effectively. The database indexes enable you to search the data in the index much faster than searching across all data in the table. These database indexes take up additional space on the database and must be included in our calculations. Typically, database indexes are set up on a subset of the parametric data (indexing data) that describes these items. You should select item attributes that are used for the majority of user-initiated searches. Refer to "Indexes" on page 110 for more information about database indexes.

- How many times will documents and folders be retrieved and updated? Understanding the business process in which these documents exist helps us understand how many retrieve requests to expect per day.

  Workflow-driven processing: As a document passes through a fixed process, it may be retrieved, on average, a certain number of times before it is fully processed. For instance, if 10,000 claim forms are received each day and each is retrieved five times during its lifetime, the daily retrieves done by the system will be 50,000. To keep a steady state in the system, there must be as many items completing the process each day as entering and, on average, the same number passing through every step of the process. For example, 10,000 items are received, 10,000 are retrieved at step 1, 10,000 are retrieved at step 2, and another 10,000 at each of steps 3, 4, and 5, giving a total of 50,000 retrievals.

  Note: If one of these retrieval steps can be eliminated from the process, then there can be a significant reduction in the Resource Manager and network load.

- If you are using Content Manager document routing, you also need to determine:

  – Through how many steps or work nodes will the document be routed? Each route places a certain amount of stress on the Library Server.

  – How many days will items reside in the document routing process? This affects the size of the Library Server database. The information concerning items in work lists and processes is kept in the database tables. There is at least one row in the document routing tables for every item that is currently in a work process.

- How many ad hoc searches for documents and folders will be done per day? Will the customer service officers be searching the system to handle enquiries? If so, how many enquiries do they handle per day, and of those, how many would involve searching the Content Manager system? Are there business analysts who might crawl the system to find out more about the organization's customers? Will external customers or business partners be accessing data through a Web interface? Are there Web site statistics that identify the number of requests that are likely to retrieve pages or items from the Content Manager system? Searches for items are done on the user tables for the item type. Each search places stress on the Library Server itself, and may lead to the user actually retrieving and viewing a document.

- How many hours per day will the system be available for input, processing, and object migration? Assuming a fixed workload, if the workday is long (for example, 10, 16, or 24 hours), a smaller processor can be used on the Library Server than for a system with a standard 8-hour workday. You might want to size for a single hour of the peak workload, if one can be determined. One of the bottlenecks in the system might be migrating objects to optical or slower media off-shift. Increasing the number of hours allowed for off-shift migration could allow a reduction in the number of Resource Manager servers otherwise recommended.

Sizing a Content Manager system

When these questions have been answered, we can begin to calculate the storage and load generated by this Content Manager application. Almost every client action interacts with the server and generates some load on it. From logon through search, retrieve, update, and delete, they all generate work for the server. Stored procedure calls (SPCs) and HTTP requests can be used as the workload metrics: an SPC represents a generic unit of work for the Library Server, and an HTTP request represents a unit of work for the Resource Manager. SPC replaces the Library Server Functional Request (LSFR) as the unit of measure for the Library Server; readers with experience of earlier versions of Content Manager may recall LSFRs.

To determine your workload, you can either trace a sample workload or have your IBM representative use the TechLine Sizing Support. To measure a workload, use the Library Server performance-level trace to monitor a sample workload on a test system client, count the number of SPCs generated, and multiply that by the total number of requests determined by the questions above. This gives an estimate of the number of SPCs likely to be generated by the system. We can then compare this to benchmark systems.

The TechLine Sizing Support uses a Content Manager Sizer to generate a report that calculates the number of SPCs that are likely to be generated by the Windows client or eClient to perform the activity defined by the answers to the questions above. It also compares this calculated workload to measurements made on specific hardware configurations. TechLine will use the Sizer to help suggest the hardware that is required for the customer load. Equipped with this information, we can identify a current model server of similar relative performance. We then need to find the best-value hardware combination for a server in that performance range. Always be sure to allow additional capacity for growth. Similarly, the performance metrics were taken under laboratory conditions on a well-optimized system; these conditions might not be the same for your application, so make allowance for that. The report also provides relative performance measures for Resource Manager capacity. The Resource Manager is an application that uses processor and memory capacity as well as I/O capacity.
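To make the SPC arithmetic concrete, the sketch below multiplies traced SPC counts per action by daily action volumes and spreads the total over the workday. The per-action SPC figures and the peak factor used in the example are invented placeholders, not IBM benchmark or Sizer data.

```python
# Sketch of the SPC-based workload estimate described above. The
# spcs_per_action figures would come from a Library Server
# performance-level trace of a sample workload; values here are
# illustrative assumptions only.

def estimated_daily_spcs(action_counts, spcs_per_action):
    """Total SPCs per day: sum over actions of count x SPCs per action."""
    return sum(count * spcs_per_action[name]
               for name, count in action_counts.items())

def peak_spcs_per_second(daily_spcs, workday_hours, peak_factor=2.0):
    """Spread the daily SPC load over the workday, scaled by an
    assumed peak-to-average factor (sizing for the peak hour)."""
    return daily_spcs * peak_factor / (workday_hours * 3600)
```

The per-second figure is what you would compare against a benchmarked server's measured SPC capacity, leaving headroom for growth as the text advises.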

4.5 Estimating storage requirements

Apart from processing capacity, the system must also provide sufficient data storage capacity. Part of the planning must be to calculate the projected storage requirements, now and into the future. Storage is required for the Library Server as well as the Resource Managers. The Library Server holds the data in the Library Server database, and its size must be estimated and accounted for. The Resource Manager, as the primary data storage component, must have sufficient space to hold the images, documents, and files that make up the system, in combination with the TSM system, if used. It also has a database, and we need to include this in our storage calculations.

Library Server database

The size of the Library Server database grows as the number of documents and folders grows. To estimate its size, you need to know the number of documents or folders you expect to have in the system, the number that will be in workflow at a given time, the number of folders you expect each item to be contained in, the number of resource objects you expect to have, the total length of the attributes of each item type, and the number of items that will be in each of the item types.

To estimate the size of the database accurately, we would have to understand all of the database tables and how they are used. The database tables are documented in the API online reference, so we can calculate the table sizes exactly, but this is quite tedious. Alternatively, we can measure sample systems and extrapolate, or approximate by sizing only the largest tables, usually the item type tables, and then allowing additional space for the remaining control tables.

Each folder or item will have:

- A row in the item type table and, where applicable, rows in the child component tables
- A row in the links table for every link, or "contains" relationship, it has
- A row in the document routing tables for every process in which it participates

Each document will have:

- A row in the item type table and, where applicable, rows in the child component tables
- A row in the parts table for that item type
- A row in the media object class table for that part type
- A row in the document routing tables for every process in which it participates

Resource items will have:

- A row in the item type table and, where applicable, rows in the child component tables
- A row in the media object class table for that part type
- A row in the document routing tables for every process in which it participates

In each of these rows there is a certain amount of system data. See Table 4-1 for root component system data, and Table 4-2 on page 97 for child component system data. There is also user-defined attribute data. It is possible to calculate the row lengths for each of the major tables; multiply them by the projected numbers of items; add in factors for the database row, page, and table overheads, and the other system tables; then add in extra space for the user and system indexes. For more about calculating database table and index sizes, see IBM DB2 Universal Database Version 8.2 Administration Guide: Planning, SC09-4822.

Table 4-1 System entries for root component tables

Column name          Type       Length  Scale  Nulls
COMPCLUSTERID        INTEGER    4       0      No
COMPONENTID          CHARACTER  18      0      No
ITEMID               CHARACTER  26      0      No
VERSIONID            SMALLINT   2       0      No
ACLCODE              INTEGER    4       0      No
SEMANTICTYPE         INTEGER    4       0      No
EXPIRATIONDATE       DATE       4       0      Yes
COMPKEY              VARCHAR    23      0      No
CREATETS             TIMESTAMP  10      0      No
CREATEUSERID         CHARACTER  32      0      No
LASTCHANGEDTS        TIMESTAMP  10      0      No
LASTCHANGEDUSERID    CHARACTER  32      0      No

Table 4-2 System entries for child component tables

Column name          Type       Length  Scale  Nulls
COMPCLUSTERID        INTEGER    4       0      No
COMPONENTID          CHARACTER  18      0      No
ITEMID               CHARACTER  26      0      No
VERSIONID            SMALLINT   2       0      No
PARENTCOMPID         CHARACTER  18      0      No
COMPKEY              VARCHAR    23      0      No
PARENTCOMPKEY        VARCHAR    23      0      No

Alternatively, the sizing support output provides an estimated size of the Library Server database after conversion of existing items, after a year of production at the projected volumes, and at the end of the estimate period (as specified by the user). These estimates use the numbers of documents, the attribute value lengths, and the index sizes to perform the calculations suggested above. In our experience, when we used the sizing support to calculate the database size based on the numbers of documents in the system and compared the estimate against the actual size of the database as measured on the file system, the estimates were very generous, and increasingly so as the number of items increased. However, we still recommend that you use the TechLine Sizing Support through your IBM representative: it is far better in these situations to overestimate than to underestimate, and any excess capacity realized allows additional room for growth or change in the future.
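A back-of-the-envelope version of the row-length calculation suggested above, using the 169 bytes of system data per root-component row that the Length column of Table 4-1 sums to. The overhead and index factors below are illustrative assumptions, not DB2-documented values; for exact figures, follow the DB2 Administration Guide method cited in the text.

```python
# Rough item type table size: (system bytes + user attribute bytes)
# per row, times the number of items, inflated by assumed factors for
# row/page/table overhead and for index space.

ROOT_SYSTEM_BYTES = 169  # sum of the Length column in Table 4-1

def item_type_table_bytes(items, user_attr_bytes,
                          overhead_factor=1.25, index_factor=0.5):
    """overhead_factor and index_factor are illustrative assumptions,
    not DB2-documented constants."""
    row_bytes = ROOT_SYSTEM_BYTES + user_attr_bytes
    data_bytes = items * row_bytes * overhead_factor
    return data_bytes * (1 + index_factor)
```

For example, one million items carrying 200 bytes of user attributes amount to 369 MB of raw row data before the overhead and index factors are applied.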

Text indexes for Library Server

If you plan to implement text indexes for documents in your system, you should allocate storage space on the Library Server for those indexes. The disk space you need for an index depends on the size and the type of data to be indexed. As a guideline, reserve disk space of about 0.7 times the size of the documents being indexed for single-byte documents. For double-byte documents, reserve the same size as the documents being indexed. The amount of space needed for the temporary files in the work directory is 1.0 to 4.0 times the amount of space needed for the final index file in the index directory. If you have several large indexes, store them on separate disk devices, especially if you have concurrent access to the indexes during index update or search.
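The guideline reduces to simple arithmetic. The factors in the sketch below come from the paragraph above; the worst-case work-space factor of 4.0 is the top of the stated 1.0-4.0 range, assumed here as a safe default.

```python
# Text-index space per the guideline above: roughly 0.7x the indexed
# document bytes for single-byte text, 1.0x for double-byte, plus
# 1.0-4.0x the final index size as temporary work space.

def text_index_space_bytes(doc_bytes, double_byte=False, work_factor=4.0):
    index_bytes = doc_bytes * (1.0 if double_byte else 0.7)
    work_bytes = index_bytes * work_factor   # worst case by default
    return index_bytes, work_bytes
```

Indexing 10 GB of single-byte documents therefore needs about 7 GB for the final index and, at the worst-case factor, up to 28 GB of temporary work space.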



Resource Manager database

The Resource Manager database holds storage management information for the stored content, so it must be accounted for in storage estimation. The Resource Manager database is significantly smaller than the Library Server database (by a factor of about 10 to 1 in our test systems). There will be a Resource Manager database for each and every Resource Manager in the system. The Resource Manager database tables are fully documented, so you could calculate the exact size of each row held. In general, however, the tables hold a row for each image, document, or file stored on that Resource Manager, and additional rows for each replica stored there. You can estimate the size by adding up the sizes of the columns of the tables and multiplying by the number of resource objects stored, or you can use the results from the TechLine Sizing Support. The sizer output provides a database size estimate for conversion items, the size after one year of processing, and the size after a period you specify. The Resource Manager database size estimate is also quite generous compared to the size actually measured in our test systems.

File storage for Resource Manager

Usually, the largest storage consumer in the system is the Resource Manager file system, or LBOSDATA directory. To estimate the required amount of storage, multiply the average file size by the number of files expected to be stored here. The amount stored here will continue to grow while new data is added and old data is maintained. Old data may be removed from the system by deliberate deletion after the data is no longer required, or because it is migrated to another storage system, such as TSM. If files need only be stored on disk for 120 days, then the amount of storage required for the LBOSDATA directory is 120 times the arrival rate per day, multiplied by the average size of the documents. The TechLine Sizing Support provides estimates for this.
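The estimate in the paragraph above as a one-line formula, using the chapter's 120-day retention example; the 10,000-per-day arrival rate and 50 KB average size in the example are illustrative assumptions.

```python
# LBOSDATA sizing per the rule above: days on disk x documents
# arriving per day x average document size in bytes.

def lbosdata_bytes(days_on_disk, docs_per_day, avg_doc_bytes):
    return days_on_disk * docs_per_day * avg_doc_bytes
```

At 10,000 documents a day averaging 50 KB, keeping objects on disk for 120 days calls for roughly 60 GB in LBOSDATA before migration to slower media.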

TSM database and storage requirements

If TSM is to be used in the system, we need to account for its requirements as well. If TSM is the long-term repository, then we need sufficient storage capacity to hold the data. TSM provides the tools to efficiently maintain files on other storage media, specifically disk, tape, and optical disk. The storage capacity required for the TSM system can be calculated the same way as for the Resource Manager: the number of items held for a period we specify, multiplied by their average size. The TechLine Sizing Support also provides estimates for this. The TSM database, though even smaller per item than that of the Resource Manager, can over time grow to a considerable size and must be accounted for. TSM provides tools to estimate its database size, as does the TechLine Sizing Support spreadsheet.


Performance Tuning for Content Manager



Chapter 5. Designing and configuring for performance

Good performance for a Content Manager system requires good design. In this chapter we look at many of the choices that have to be made during the design and configuration phases and how they can affect performance. Content Manager systems are flexible and can be reconfigured based on your system requirements. This discussion might also be of interest to anyone who is maintaining an existing Content Manager system.

© Copyright IBM Corp. 2003, 2006. All rights reserved.



5.1 Introduction

Design and configuration include the overall architecture of a system, its layout, and which features and functions are used to meet the business requirements. It also means understanding these features and functions well enough to predict their performance behavior. This sounds like a major challenge, but if we understand the principles, we can begin to make general predictions. If we need more detailed information, we can model and test.

Content Manager is designed and tuned for performance. Out of the box, a Content Manager system uses sophisticated techniques to optimize its own performance. On installation, Content Manager adjusts a range of DB2 parameters from default settings to production-ready values suitable for at least medium-scale use. Content Manager also uses unique technology, such as writing its own programs at run time, specifically for you and your implementation. It uses pre-compiled SQL wherever possible to reduce the load on the Library Server database and reduce response time at the server. If we understand some of these techniques, we can tune for them and optimize performance.

5.2 General design considerations

For a Content Manager system, consider three main areas for performance tuning: the Library Server, the Resource Manager, and the client application. Two additional important areas to keep in mind are the hardware resources and the network that connects them. Always keep in mind during design:

- Library Server as a database application
- Resource Manager as a Web application
- Client application design impact on performance

Library Server as a database application

The Library Server is a database application, and its tuning is database specific. You should focus on database design when designing a Content Manager system. In addition, you should tune the database for maximum Content Manager system performance.

Resource Manager as a Web application

The Resource Manager is a Web application that delivers files on request. It is primarily a Java application running as a WebSphere Web application that holds control data in a database and objects in a file system. We need to tune the WebSphere Application Server, the database, and the file system.



Client application design impact on performance

The design of the client application can have a significant impact on performance. When building and testing the client application, be sensitive to the design and capabilities of the overall system, including the network that carries the data between distributed system components. Content Manager supplies two general-purpose clients (the Windows client and the eClient) and a set of APIs and application components that can be assembled into custom client applications. There can be many variations of client applications, which we discuss in general terms.

Usually, there is not much to tune on a client application after it is built. Most client performance issues are related to the design and the APIs used to implement the application. As with all application tuning, the biggest gains in performance come from well-designed client applications. Design the client applications to take best advantage of the servers' capabilities and to minimize their limitations. For example, it is possible to store image files as binary large objects in the Library Server database and store nothing on the Resource Manager. This does not take advantage of the Resource Manager's capabilities, and it increases the load on the Library Server. There might be circumstances where this is the most appropriate design, although we cannot think of any.

Most of our discussion and scenario performance tuning covers a two-tier application. The same considerations apply for n-tier applications, at least from the Content Manager connector down to the server. Other tiers can have an impact and should be tuned accordingly.

5.3 Configuration choices and trade-offs

An important aspect of planning a new system is to understand the configuration choices and trade-offs. In this section, we include some of the important configuration and application design choices and their implications for system performance. This material is extracted from the IBM Content Manager V8.2 Performance Tuning Guide, which can be found at:

http://www.ibm.com/software/data/cm/cmgr/mp/support.html

Search for "performance tuning guide" under white papers.

Content Manager configuration choices and their performance trade-offs:

- Web clients or desktop clients?
  – Web clients are typically easier to deploy and maintain.

  – Desktop clients typically have faster response time and more functionality.

- For Web clients: direct retrieve or mid-tier conversion?
  – Direct retrieve is faster and more scalable.
  – Direct retrieve might require browser plug-ins or viewer applets.

- For Web clients: direct connect or federated access?
  – Federated access is slower than direct connection to the Library Server, but more flexible, supporting search across heterogeneous back-end servers.

- For Web clients: connection pooling?
  – Connection pooling makes more efficient use of Library Server resources.
  – It scales to a larger number of Web clients.
  – It allows more control of the mid-tier impact on Library Server resources.

- Out-of-the-box client program or custom client program?
  – A custom client program can be tuned to your exact business requirements.
  – The out-of-the-box clients already include the latest general-purpose tuning.

- For custom clients: beans or Java/C++ APIs?
  – Beans implement only the document models.
  – Beans support rapid application development with a federated "reach."
  – The Java/C++ APIs have the best performance.

- For Java/C++ API custom clients: document data model or custom data model?
  – A custom data model can be tuned to your exact business requirements.
  – The document data model already includes the latest general-purpose tuning.

- Document routing or advanced workflow?
  – Document routing has better performance and higher scalability.
  – Advanced workflow has more functions, which are unavailable with document routing.

- Versioning?
  – Versioning increases the Library Server database size.
  – Accessing the current version is faster than accessing previous versions.

- Text search?
  – Text indexes increase the Library Server database size.
  – Indexing text content reduces scalability, because content must be retrieved from the Resource Manager.

- Unicode?
  – There is a substantial performance penalty for Unicode-enabling the Library Server.
  – Avoid using this feature unless you need the functionality.


- Attribute indexes?
  – Appropriate indexes improve the performance of searches and reduce Library Server resource usage.
  – Indexes increase the Library Server database size and affect store and update times.

- Library Server and Resource Manager on the same machine or separate machines?
  – There is higher scalability when they are on separate machines.
  – If the system is small, putting them together is easier to maintain.

- Single or multiple Resource Managers? LAN cache? Replication?
  – Multiple Resource Managers give higher total bandwidth for larger objects.
  – Multiple Resource Managers give higher migrator parallelism.
  – Distributed Resource Managers close to users provide better performance.
  – If the system is small, a single Resource Manager is easier to maintain.

- Number of Resource Manager collections?
  – Multiple collections give higher migrator parallelism (one thread per collection).

- Resource Manager asynchronous and third-party ingest/delivery?
  – Asynchronous and third-party transfer require custom clients. They are appropriate for very large objects, such as those handled by IBM DB2 Content Manager VideoCharger™.

- Server platform?
  – For the mid-tier server, the Content Manager Java API is supported on AIX, Sun, Linux, and Windows platforms. Some other connectors are Windows only. The Java conversion engine is cross-platform.
  – For the Library Server and Resource Manager, AIX, Sun, and Linux offer higher scalability than Windows.

There is more in-depth discussion of some of these design and configuration choices and trade-offs in the subsequent chapters.

5.4 Hardware

As with all systems, we need sufficient system resources to begin with. This means that we need to invest in hardware. At some point, all the tuning in the world will not make the system deliver the required throughput or response time; there are just not enough processing resources available. So, scope the requirements, do an initial design, size the system, estimate the cost of the system, and compare that to the projected return on investment.

Your IBM representative can help with sizing estimations using information that you can provide about the nature of your application. These estimates compare projected workload against measured results on specific hardware configurations.

5.4.1 Scalability

Content Manager is a scalable solution that provides a number of ways to increase capacity using additional hardware resources. Increased capacity can be achieved by adding more memory, disk, CPU, or servers.

Content Manager is multi-threaded and multi-processor enabled. Additional processors at the servers can be used by the server components. DB2 and WebSphere both take advantage of large amounts of memory, so additional memory can also increase system performance. Refer to upcoming chapters for more information about determining whether memory contention exists on your system. In general, the more memory, the better.

Additional server resources in the form of more disk drives can also be used by the system. A number of I/O operations occur for most Content Manager server activities, such as logging database changes for both the Library Server and the Resource Manager (even more if dual logging is enabled), updates to the database tables themselves, and storing the resource objects to the Resource Manager file system. The more these I/O activities are spread across multiple disk drive spindles or arrays and device adapters, the less likely contention becomes, and the faster performance is likely to be.

Resource Managers

The system can utilize additional Resource Managers. As a first step, the Resource Manager can be split off from the Library Server to provide more capacity to the Library Server and possibly the Resource Manager. More Resource Managers can then be added to the configuration to spread the Resource Manager workload even further. If a single system is limited by its network connection speed or the rate at which data can be moved to or from its disk system, then a second server can double the throughput. If used, the TSM server function can be separated from the Resource Manager to further reduce workload and possible contention on any of the Resource Managers.

As a WebSphere application, the Resource Manager can be deployed as the same application on multiple WebSphere Application Servers. The Web server plug-in can redirect requests to multiple WebSphere Application Server machines, each running a copy of the Resource Manager application, so the processing load can be spread across multiple machines. Each instance of the Resource Manager
code requires identical access to the Resource Manager database, item storage file system, and TSM system.

Alternatively, we can add independent Resource Manager servers. At some point, the I/O capacity of the system and its storage or performance capacity might be exceeded. Additional Resource Managers provide the facility to add more storage capacity to the system. This provides distributed storage with a central point of control, without the complications of a distributed database and multi-site updates.

Using multiple instances of a Resource Manager running on multiple WebSphere Application Servers enables the work to be distributed dynamically. Using independent Resource Managers requires the workload to be distributed by Content Manager. WebSphere Application Server distributes the workload to the least busy application server, while Content Manager distributes the load according to the Resource Manager specified either by the application or as the configured default Resource Manager. You can set the default Resource Manager to be derived from the Resource Manager associated with the item type or with the user. If you specify the item type as the pointer to the Resource Manager, then all items of that type will go, by default, to that specific Resource Manager. If you set the user profile to specify the default Resource Manager, then all items, regardless of their item type, go to the user's associated Resource Manager.

Workload distribution by the default Resource Manager (whether item-type specified or user-profile specified) works only for storing documents. The Resource Manager that services retrieval requests is, by default, the Resource Manager that holds the primary copy of the requested item. If there is a high number of requests for an item, they all go to the Resource Manager where the primary copy of that item is stored, while the other Resource Managers are not used. LAN cache can be used to spread this load.
See 5.7.2, "Physical locations" on page 132 for more information about LAN cache.

Mid-tier servers

Mid-tier workload can also be separated from the Library Server and Resource Manager servers. Where appropriate, the eClient Web application server or a custom Web application server can be moved to a separate machine to offload that work from the other Content Manager servers. The mid-tier server can also be duplicated for additional capacity. Like the Resource Managers, the mid-tier servers can be multiple instances of a single Web application running on multiple WebSphere Application Servers, or multiple independent Web applications.

Vertical scalability

What we have discussed so far is termed horizontal scalability: spreading the workload over more servers. Content Manager also offers vertical scalability. It enables customers to move to bigger, higher-capacity servers and operating environments. For example, customers can move from Windows systems to AIX, Sun, or Linux systems.

Clients

The client machines, including the machines supporting browser clients, also need sufficient hardware resources to give users the response time they require. For these clients, memory and CPU speed are the main requirements for manipulating and displaying images. Refer to your favorite browser's hardware requirements, and allow extra resources to support the client side of the Web application and viewer applets if you plan to use them.

Content Manager provides the client APIs on a range of platforms. Client APIs are available in Java on Windows 32-bit operating systems, AIX, Linux, and Sun Solaris, and the C++ APIs are supported on Windows 32-bit operating systems and AIX. This increases your options for where and how to deploy client applications. Having the OO APIs available on Sun and AIX enables you to deploy thick clients or mid-tier servers on these platforms for performance, scalability, or server consolidation.

5.5 Library Server

The Library Server is the heart of the Content Manager system. It controls the system configuration, indexing data, and user access. The Library Server in Content Manager Version 8 is now an extension of the database: it is primarily a set of database tables and stored procedures that hold the data and respond to client requests. Because it is the single point of control, almost all activities on the system interact with the Library Server. Optimizing the performance of the Library Server can improve response time for almost every user and process in the system.

Stored procedures

The client API calls are translated by the OO API layer and the DB2 client into SQL and sent to the Library Server database. The generated SQL makes calls to the Library Server stored procedures. The stored procedures run on the server, perform a number of actions on the Library Server database, then return a response to the DB2 client, the OO APIs, and the client application. This model has the distinct advantage of being able to make a single call across the network to perform a range of database searches, updates, and retrievals.

The stored procedures are dynamically linked libraries (DLLs) or programs that run on the server and are called by the database engine in response to the client calls. The stored procedures perform system functions such as checking for
access control privileges, and run user functions such as searching for an item. The stored procedures call other DLLs as required.

When you define an item type to Content Manager, it uses Data Definition Language (DDL) calls to the database to create tables that match the data definitions you specified for that item type. It creates a table for each root component and a table for each child component, as well as tables to link to parts and to hold links and references. The SQL in these DLLs is pre-compiled and bound to the database automatically during this process. These generated DLLs, along with other DLLs installed with the system, use pre-compiled SQL, which can reduce the database server load when these DLLs run.

Using pre-compiled SQL enables the database to pre-calculate the best way to access the data required by the SQL. The database stores the access path information in the application package created for each particular version of each specific DLL. At run time, the database optimizer does not need to re-calculate the access path for the data; it merely reads it from the package. In general, this saves only a fraction of a second each time the SQL in the DLL is run, but it might be run many thousands of times each minute, so the savings can be significant.

The potential penalty is that the method the database chooses to get to the data is fixed at bind time, and the best method might change over time. If the table has only one row in it, then reading the whole table at once is the best access method, with or without indexes. If you then add 10,000,000 rows to that table, the best method is likely to be via an index. To get the database to re-evaluate the access path, rebind the application to the database. To complicate things just a little, the database only checks how many rows exist in a table when it is explicitly asked to, by you.
So, to have the database choose the best access path, we must maintain the database by periodically updating the database statistics and rebinding the packages, commonly referred to as runstats and rebind. See 8.3.4, "Reorganizing tables through reorg" on page 184 for detailed information about how to run runstats and rebind.
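As an illustration of what such periodic maintenance involves, this Python sketch assembles the DB2 command-line statements an administrator might script. The database name ICMNLSDB is the usual Library Server default, but the table names here are invented examples; generated item-type table names differ per installation, so treat this as a hedged outline rather than a ready-made script.

```python
# Build a DB2 maintenance script: refresh statistics (runstats) so the
# optimizer sees current row counts, then rebind packages so the
# pre-compiled SQL picks up new access paths. Table names are examples.

LS_DATABASE = "ICMNLSDB"                      # default Library Server DB name
EXAMPLE_TABLES = [
    "ICMADMIN.ICMSTITEMS001001",              # hypothetical item-type table
    "ICMADMIN.ICMSTITEMVER001001",            # hypothetical versions table
]

def maintenance_script(database, tables):
    """Return the DB2 CLP commands for a runstats-and-rebind pass."""
    lines = [f"CONNECT TO {database}"]
    for table in tables:
        # collect distribution statistics and detailed index statistics
        lines.append(
            f"RUNSTATS ON TABLE {table} "
            "WITH DISTRIBUTION AND DETAILED INDEXES ALL"
        )
    lines.append("CONNECT RESET")
    # db2rbind rebinds all packages in the database against the new stats
    lines.append(f"db2rbind {database} -l rbind.log all")
    return lines

for line in maintenance_script(LS_DATABASE, EXAMPLE_TABLES):
    print(line)
```

In practice you would enumerate the item-type tables from the catalog rather than hard-code them.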

Table space

Content Manager Version 8 uses database table spaces by default. Table spaces give the system a number of configuration options, of which even the default installation takes advantage. Table spaces enable the database user, in our case the Content Manager Library Server, to use multiple page sizes and multiple different buffer pools. We can also spread the database tables over multiple disk drives.

Table 5-1 lists the default table spaces that are created during Library Server installation, their associated buffer pools, and a description of how these groups of tables are used. This information can be useful when deciding how to lay out table space containers across physical disks.

Table 5-1 Library Server default table spaces

Buffer pool: ICMLSMAINBP32
Table space: ICMLFQ32
Description: Holds all the large, persistent, "frequently used" tables, most notably all item-type tables. When a new item type is defined, the system administration client defines the new tables in this table space. If you customize the DDL, this table space should be defined across multiple containers for performance and scalability.

Buffer pool: ICMLSMAINBP32
Table space: ICMLNF32
Description: Holds the large, less frequently used tables, such as event logs, versions, and replicas. If you do not use any of these features, this table space will not need much space.

Buffer pool: ICMLSVOLATILEBP4
Table space: ICMVFQ04
Description: Holds the "volatile" tables, whose size is usually small but can vary widely, and for which both updates and deletes are very common (for example, the tables associated with document routing "in progress" items, checked-out items, and so on). Putting this on a separate physical disk should help performance.

Buffer pool: ICMLSFREQBP4
Table space: ICMSFQ04
Description: Holds all the small, frequently used but seldom updated tables, such as system information, user definitions, ACLs, and so on, that do not take up much space but are always needed in memory for good performance.

Buffer pool: CMBMAIN4
Table space: CMBINV04
Description: Holds the tables used for Information Integrator for Content federated administration.

These table spaces group together tables that are used in similar ways, and enable us to tune each table space's I/O setup individually. Note the numeric suffix on the table space and buffer pool names: it reflects the page size in kilobytes used by the tables in that table space.

A database table is made up of pages, which in turn are made up of rows. Indexes are also broken into pages for storage and retrieval. The database fetches and stores pages of data, not individual rows. This is done for performance, particularly when queries access rows sequentially. The larger the page size, the larger the chunk of data that is moved into and out of memory at once.


The size of a row must be slightly smaller than the size of the page. If any of your item-type components have a combined attribute field length of greater than 32 KB, excluding LOBs and CLOBs, then you should consider using LOBs or CLOBs. The database handles these differently from normal columns, and they are processed more efficiently inside the database. If the row size is just slightly more than half the size of a page, then each page will hold only one row. This creates a lot of unusable space in the database tables and can slow system operation. If this is the case, you might want to use the DB2 control tools to create a new table space with a more appropriate page size and move your table there.
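To see why a row just over half a page is wasteful, consider this sketch. It ignores per-page overhead (real DB2 pages reserve some bytes for headers), so the numbers are approximate illustrations only.

```python
# Rows never span pages, so rows-per-page is an integer division, and any
# remainder of the page is unusable by that table's rows. Per-page header
# overhead is ignored here for simplicity.

def rows_per_page(page_kb, row_bytes):
    return (page_kb * 1024) // row_bytes

def wasted_fraction(page_kb, row_bytes):
    used = rows_per_page(page_kb, row_bytes) * row_bytes
    return 1 - used / (page_kb * 1024)

# A 2,100-byte row on a 4 KB page: one row fits, nearly half the page is wasted
print(rows_per_page(4, 2100), round(wasted_fraction(4, 2100), 2))
# The same row on an 8 KB page: three rows fit, far less waste
print(rows_per_page(8, 2100), round(wasted_fraction(8, 2100), 2))
```

This is the calculation behind moving an awkwardly sized table to a table space with a more suitable page size.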

Buffer pools

Separate buffer pools mean that we can tune the memory usage differently for each table space. DB2 uses buffer pools to hold large amounts of database table data in memory, in an attempt to avoid every possible I/O operation. I/O operations take thousands of times as long as accessing data in memory. Any table data that is read frequently should be in memory, in a buffer pool. The larger the buffer pool, the more likely the data will be in memory and the faster database operations can proceed. However, if you oversubscribe real memory, the operating system starts paging this data out to disk, and getting to the data becomes even slower than going directly to disk. DB2 allocates memory from the operating system as it requires it, up to the maximum specified by the buffer pool size. You can set the size of each buffer pool individually.

Separating the data into different buffer pools enables certain sets of table data to be churned rapidly while preserving other data in memory. Some data is rarely reused; it was read in from disk to answer a query, perhaps, but no other query is likely to need that data again, at least not for some considerable time. This type of data is eventually moved out of memory to make way for other data that is required. In a congested system, other data that was more likely to be reused might be pushed out of memory to make way for it. All data has to be paged into the buffer pool before it can be used, and all the time a page of data sits in memory after it was used, it occupies memory possibly better used by other data. Multiple buffer pools mean that we can keep frequently re-referenced data in one buffer pool and volatile data in another. The volatile data then cannot swamp the frequently accessed data, because it is in a separate buffer pool.
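A common way to judge whether a buffer pool is large enough is the buffer pool hit ratio: the fraction of page requests satisfied without a physical read. The counter values below are hypothetical example figures, not output from a real system.

```python
# Buffer pool hit ratio from DB2 snapshot counters: logical reads are all
# page requests, physical reads are those that had to go to disk.

def hit_ratio_pct(logical_reads, physical_reads):
    """Percentage of page requests satisfied from the buffer pool."""
    if logical_reads == 0:
        return 0.0
    return (1 - physical_reads / logical_reads) * 100

# e.g. 1,000,000 page requests, 50,000 of which went to disk
print(round(hit_ratio_pct(1_000_000, 50_000), 1))
```

A pool whose hit ratio stays low under a representative workload is a candidate for enlargement, provided real memory is not already oversubscribed.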

Indexes

Indexes are vital to the performance of the databases as they grow. When a table grows to more than a few thousand rows, it is much faster to access the data through an index than by looking at each row in the table. The index is maintained as an ordered list of column entries with pointers to the corresponding database table rows. Ideally, all access to the indexing data in Content Manager is via an index.

Indexes come at a cost. There is a minor performance cost associated with updating the indexes every time a change is made to table data. Additional storage is required to hold each index, which can, in the extreme, be as large as the table itself. Indexes are also accessed in the buffer pools and can be placed into separate table spaces, and therefore different buffer pools. By separating the index data into a separate table space and buffer pool, it can be protected from being swamped by the table data. This is not done by default; you would need to modify the Content Manager database manually.

The Content Manager administration tool provides a point-and-click interface to create database indexes for your application. Refer to 5.5.3, "Data model" on page 115 for more information. For a great deal more information about designing and configuring your databases, refer to the DB2 Administration Guide: Planning.

5.5.1 Library Server configuration

Configuration and design options in the Library Server database have performance ramifications. You can use the administration interface or the administration APIs to set and change configuration parameters, turn functions on or off, and define entities to the system.

To get to the Library Server configuration, open the system administration client. From the left-side pane, click the Library Server database → Library Server Parameters → Configurations. From the right-side pane, double-click Library Server Configuration. We discuss several configuration options in the Library Server Configuration that might influence system performance.

Definition tab

You can set the following configuration in the Library Server Configuration Definition window.

Max users

Content Manager enables administrators to enforce client licensing restrictions, either rigidly or more loosely. The Max user action parameter allows three options:

- Reject logon: Rigidly deny access to users in excess of the license.
- Allow logon with warning: Allow the additional users (in excess of the license) in with a warning message.
- Allow logon without warning: Allow the additional users in without any warning message.

The second and third options assume that sufficient additional client licenses will be acquired as soon as possible to accommodate these additional users.

An alternative configuration option is to set the maximum number of users to zero. This bypasses license checking, although it does not bypass the requirement to abide by the license agreement. License compliance checking requires processing at the Library Server; bypassing the check removes this load.

For a client to be considered concurrent in the Content Manager Version 8 client licensing scheme, the user must have had an interaction with the Library Server within the past 30 minutes. Users who log on are active for the first 30 minutes. If they have no further interaction with the Library Server, for example, they did not store or retrieve any information for the next 30 minutes, then they are no longer considered a concurrent user for licensing purposes. Tracking this level of user activity places a load on the system that is proportional to the number of users and their activity. Bypassing this checking reduces the load on the server. If you choose to bypass checking, you should periodically check your license compliance. Checking can be turned back on by setting the max users value to the number of client licenses held.
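The 30-minute concurrency rule described above can be sketched as a simple filter over last-interaction timestamps. This is only an illustration of the rule as stated, not the actual licensing code, and the user names and timestamps are invented.

```python
# A user counts as concurrent only if their last Library Server interaction
# happened within the past 30 minutes. All data below is made up.

from datetime import datetime, timedelta

CONCURRENCY_WINDOW = timedelta(minutes=30)

def concurrent_users(last_interaction, now):
    """Count users whose last interaction falls within the window."""
    return sum(
        1 for when in last_interaction.values()
        if now - when <= CONCURRENCY_WINDOW
    )

now = datetime(2006, 7, 31, 12, 0)
last_interaction = {
    "alice": datetime(2006, 7, 31, 11, 45),  # 15 minutes ago: concurrent
    "bob":   datetime(2006, 7, 31, 10, 0),   # 2 hours ago: not concurrent
}
print(concurrent_users(last_interaction, now))
```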

Features tab

You can set the following options on the Features tab.

Text Information Extender (now Net Search Extender)

New in Content Manager Version 8 is the internal text search capability. Text search works across both index fields and document text content, and is provided by the Net Search Extender (NSE) in Content Manager Version 8.2. The new text search capabilities are tightly integrated with the Library Server database, which enables automatic synchronization of the Library Server data and the text index data. It also simplifies and controls user access to the text index.

The text search engine resides on the Library Server machine, creating additional processing, storage, and network load for this machine. Text indexing requires the text, from a document or attribute field, to be disassembled into text components, which are then indexed and ordered. The index is built and stored by the NSE server on the Library Server. Similarly, search requests that traverse these indexes to identify the qualifying items are also handled on the Library Server.

Being centrally located has implications for large systems with multiple Resource Managers. During indexing operations, a copy of every document that is to be fully text-indexed is transferred from the Resource Manager to the Library Server, converted where necessary to a suitable format, then indexed by the NSE server. Typically, the number of documents in a Document Management style Content Manager system is relatively small; however, the number and the size must be assessed for network impact and index size calculations.

To enhance performance during indexing, consider the following:

- Use a VARCHAR data type to store the text attributes instead of LONG VARCHAR or CLOB.
- Use different hard disks to store the text index and the database files. You can specify the directory in which an index is created when defining the text index as part of the item-type definition.
- Ensure that your system has enough real memory available for all this data. Net Search Extender can optimize text index searches by retaining as much of the index as possible in memory. If there is insufficient memory, the operating system uses paging space instead, which decreases search performance.
- Leave the update commit count parameter blank during the text index definition of the item type. This parameter is used during automatic or manual updating of the index and slows down indexing performance during incremental indexing. Setting it to blank disables the updating function.

ACL User Exit

The ACL user exit enables external systems to decide whether access to system resources (such as items and document routing objects) should be granted to users or groups. If this is enabled, the function is invoked every time access to an object is requested. It is vital for good system performance that this external function returns its result quickly. Any delay here can accumulate many times over for any one system activity. If a search results in a hit list of many items, each one must be tested through this user exit, for this user, before the items can be presented to the user.

112

Performance Tuning for Content Manager


Defaults tab The Defaults tab enables you to define where the default access control list, the default Collection ID, and the default Resource Manager are taken from when a new item is added to the Content Manager system. Unless you build your own client and use it to specify these values explicitly when creating an item, the item is assigned values calculated from the information specified by these defaults. The Resource Manager to which an item is assigned can have a major impact on response time for the end user. Refer to 5.7.2, “Physical locations” on page 132 for a discussion of how the location of users and their Resource Managers affects performance. Setting the default storage option “Get default Resource Manager from” to a value of “item type” ensures that all items of the same item type reside on a particular Resource Manager. This can be used to locate items close to the offices that use them, if specific item types are used in specific locations. Alternatively, if the default Resource Manager is taken from the user profile, and the user generally works in the same location as the Resource Manager, then benefits accrue from storing and retrieving items on the closest Resource Manager. These options can also affect load balancing across Resource Managers.

Default Replication options Content Manager provides a fail-over service that verifies that Resource Managers are available. If you try to store objects into a Resource Manager that is unavailable, Content Manager tries to store them into the next available Resource Manager; without this fail-over service, you would simply get an error. The fail-over service monitors the availability of the Resource Managers at the interval that you set in the “Interval to check server availability” field of the Library Server Configuration window. For example, if you set 60 seconds as the interval, it checks availability every 60 seconds. This service should remain running. The Library Server monitor service is named ICMPLSAP (Portable Library Server Asynchronous Process). Although this check does not take a significant amount of processing, it represents some work for the Library Server, the Resource Manager, and the network. Unless there is a significant risk of servers becoming unavailable, we recommend that you set this value to 600. The server time-out threshold specifies how long the server should wait for a response from the Resource Manager before considering it unavailable. Set this value high enough to avoid “false positives”; that is, allow the Resource Manager sufficient time to respond when it and the network connection are under load. However, do not set it so high that users have to wait a long time before the system switches to an alternative Resource Manager.
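The time-out trade-off can be sketched as follows (an illustrative function, not the actual monitor implementation): availability is judged per probe against the time-out threshold.

```python
def probe_available(response_seconds, timeout_seconds):
    """One availability probe: a Resource Manager counts as available
    only if it answered within the server time-out threshold.
    response_seconds is None when no response arrived at all."""
    return (response_seconds is not None
            and response_seconds <= timeout_seconds)

# A loaded but healthy server answering in 4 seconds:
loaded_ok = probe_available(4.0, 5.0)           # sane threshold: available
false_positive = not probe_available(4.0, 2.0)  # threshold set too low
down = not probe_available(None, 5.0)           # genuinely unavailable
```

Too low a threshold marks a slow-but-healthy server unavailable; too high a threshold makes users wait before fail-over kicks in.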

Chapter 5. Designing and configuring for performance

113


Logs and Trace All logging increases the load on the Library Server, so turn on only the logging or tracing necessary to meet the business requirements (for example, audit trails). When you turn on system administration event logging, entries are written to the ICMSTSYSADMEVENTS table for every system change made by the system administration client or administration APIs. The events table grows with each logged event. To reduce the size of the events table, you can remove the expired and unused events from the table. The EventCode column in the events table indicates the classification of events, as shown in Table 5-2.

Table 5-2 Administration event codes

  Value       Definition
  1 - 200     System administration function event codes
  200 - 900   Item, document routing, and resource management function event codes
  1000+       Application event codes

You can delete events from the events table by issuing the following SQL command:

  delete from ICMSTSYSADMEVENTS where eventcode <= xxx
    and Created < yyyy-mm-dd-hh.mm.ss.uuuuuu

- xxx is the highest event code that you want to delete (for example, 200).
- yyyy-mm-dd-hh.mm.ss.uuuuuu is the timestamp of the oldest event that you want to keep in the tables (for example, 2002-01-01-12.00.00.000000).

When tracing, wherever possible, set the log file and path to a different physical disk from other system activity, such as the database logs, database table spaces, and the Resource Manager file systems.
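As an illustration only (the helper name is invented; run the resulting statement with your normal DB2 tooling), the cleanup statement above can be parameterized like this:

```python
def build_event_cleanup_sql(table, max_event_code, oldest_kept_ts):
    """Build the DELETE that prunes expired events from an events table.

    max_event_code: highest event code to delete (for example, 200)
    oldest_kept_ts: DB2 timestamp of the oldest event to keep
    """
    return ("delete from {table} where eventcode <= {code} "
            "and Created < '{ts}'").format(table=table,
                                           code=max_event_code,
                                           ts=oldest_kept_ts)

sql = build_event_cleanup_sql("ICMSTSYSADMEVENTS", 200,
                              "2002-01-01-12.00.00.000000")
```

The same pattern applies to the item events table covered later in this chapter.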

5.5.2 Authentication New in Content Manager Version 8 are compiled ACLs and administrative domains. Compiled ACLs enable Content Manager to pre-calculate a user's access to every defined control list when they or their components are modified. Administrative domains enable a Content Manager system to be subdivided into domains; this includes the users and the managed objects. Pre-calculated access control lists speed up the control list processing during run time. The system does the general privilege check (is this user allowed to perform this function at all?) against the ICMSTCompiledPerm table, and then it



looks for the user, the tested ACL, and the specific privilege in the ICMSTCompiledACL table. This table holds a row for every privilege in every ACL that every user holds. Note that a large number of users and a large number of ACLs could make this a very large table.
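A quick back-of-the-envelope sketch (the figures are invented for illustration) shows how fast this table can grow in the worst case, where every user holds privileges in every ACL:

```python
def compiled_acl_rows(num_users, num_acls, privileges_per_acl):
    """Worst-case row count for ICMSTCompiledACL: one row per privilege,
    per ACL, per user that holds it."""
    return num_users * num_acls * privileges_per_acl

# 5,000 users, 200 ACLs, 30 privileges each: 30 million rows.
rows = compiled_acl_rows(5000, 200, 30)
```

Estimating this product up front helps decide whether the number of ACLs or their membership should be trimmed.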

5.5.3 Data model The primary issue for the data model is to capture the data required by the business problem. When that is established it can be massaged for performance. As mentioned earlier, the Library Server holds the metadata and system control data and is primarily a database application. That means that the same principles that are applied to good database design can also be applied to the Content Manager data model. Normalize the data to reduce the table size and de-normalize for performance. Add indexes to fields that are queried. Cluster the table into a sequence that reflects the way data is accessed.

Normalizing the data means to identify data that might be repeated for one particular entry and extract that data to a separate database table. Imagine that we wish to record information about a customer for whom we have an application form image. That customer might have one name but two addresses, a business address and a residential address. To represent this in our system, we could create two rows, one for each of this customer’s addresses. When we search on his or her name we find both addresses. We also see that the database table has the customer’s name and possibly other details repeated in our database, which might not be the most efficient use of our resources, especially as the number of our customers grows. One solution is to change our database model and store the data that gets repeated (in this case, the customer’s name and other customer details in one table and the address data in another table), and maintain a database relationship between the two tables. In Content Manager, this means implementing folders, links, references, or child components as needed. Refer to the System Administration Guide for more details about these components and additional information about how to model your data. In the example above, this could be implemented as a root and child component relationship. The customer’s name would be part of the root data, and their addresses would become entries in a child component attached to the root. It is also possible to model this using links. The customer name information may be modelled as a root component that is linked to other root components, which each hold the address information. Using our previous example, we might have several applications or other types of documents that relate to this same customer. We could have his or her name recorded many times. This could be another opportunity to reduce the amount of



duplication. We could create a folder that contains all customer-related documents and attach the name and addresses to the folder. Normalizing the data in this way has performance advantages and disadvantages. Removing duplicated data makes the database smaller: two (or more) tables are created whose combined size is smaller than the original. The database has less data to scan when searching, and because the data rows are shorter, more rows fit in each page; this reduces the number of I/O operations and increases the number of entries held in memory, thereby potentially avoiding some I/O operations altogether. On the other hand, we now have to go to two tables for the complete set of information about any one item. We need to go to the customer table and the address table and perform a database table join, in which rows from one table are matched against the corresponding rows in the other table. We also need to build and maintain the database relationship between the two sets of related data. All of these actions cost processing time on the server. In general, normalize the database design for size and simplicity, and de-normalize only when query performance becomes a major issue.
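The root/child split described above can be sketched with plain in-memory tables (hypothetical data, not the Content Manager schema) to show both the saving and the join cost:

```python
# Root component: the customer's name is stored exactly once.
customers = {1: {"name": "J. Smith"}}

# Child component: one row per address, keyed back to the root.
addresses = [
    {"customer_id": 1, "kind": "business",    "city": "Armonk"},
    {"customer_id": 1, "kind": "residential", "city": "Albany"},
]

def addresses_for(name):
    """The price of normalization: a join, matching child rows to roots."""
    ids = {cid for cid, cust in customers.items() if cust["name"] == name}
    return [addr for addr in addresses if addr["customer_id"] in ids]

hits = addresses_for("J. Smith")  # both addresses, no duplicated name
```

The name is stored once, but every complete lookup now touches two tables.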

Versioning Versioning creates a new copy of the index data and the associated document when an update is made. The system preserves the original as well as the new updated item. This can be extremely useful where users want to regress to an earlier version of a document, undo erroneous changes, or create a new derivative work that might take a different development path. Retaining the development history of an article or policy contained in a document can also be required by legislation or management practice. Versioning can have an impact on performance. When previous versions of a resource object are maintained in the system, they occupy space on both the Library Server and the Resource Managers. If previous versions of only the index data are kept, the impact is limited to storage on the Library Server. Additional rows in the database can have an impact on its performance. Versioning typically requires the system to restrict query results to one version of the item, usually the latest. This means that additional criteria for the query (predicates) must be resolved, so the indexes must be maintained to match the criteria (filter) for the required version. This adds complexity to the database and to the query processing. Filtering for the correct version has to be done for almost every action on an item in the system.



Logging Just like system administration logging, item-level activity logging increases the load on the Library Server. Turn on only the level of logging necessary to meet the business requirements. If you enable Read logging, the system warns you that this can affect performance because of the amount of data that is likely to be recorded. When you turn on item event logging, entries are written to the ICMSTITEMEVENTS table for every selected action on all items in the item type made by all of the clients and APIs in your system. The events table grows with each logged event. To reduce the size of the events table, you can remove the expired and unused events from the table. The EventCode column in the events table indicates the classification of events, as shown in Table 5-3.

Table 5-3 Item event codes

  Value       Definition
  1 - 200     System administration function event codes
  200 - 900   Item, document routing, and resource management function event codes
  1000+       Application event codes

You can delete events from the events table by issuing this SQL command:

  delete from ICMSTITEMEVENTS where eventcode <= xxx
    and Created < yyyy-mm-dd-hh.mm.ss.uuuuuu

- xxx is the highest event code that you want to delete (for example, 600).
- yyyy-mm-dd-hh.mm.ss.uuuuuu is the timestamp of the oldest event that you want to keep in the tables (for example, 2002-01-01-12.00.00.000000).

Document model - item type classification When defining the data model, you choose the item-type classification, which defines the relationship between the resource object and the customer's indexing data. Item-type classifications are items, resources, documents, and document parts. In the simplest case (the items), there is only indexing data, and no resource object. This is somewhat like a folder or a database table entry, in that there are data and relationships to other items in the system but no image or document file associated with this item. Unlike the folder, however, there are absolutely no resource objects associated with it, and that includes no note log. The resources item type is indexing data that is associated with a resource object, such as an image file, a word-processing document, or an XML file. It is



important to note that there is a one-to-one relationship here: only one set of indexing data for one resource object. For many situations, this could be sufficient. For other applications, we might need to keep multiple resource items for one set of indexing data. For example, in some implementations of a Content Manager system, an image might be composed of multiple parts:

- The image itself
- The file that holds annotations and mark-ups
- A note log
- The text extracted by OCR from the image

The documents item-type classification allows multiple resource objects to be linked to a single set of indexing data, enabling us to accommodate resources that are made up of different parts, as in the example above. Specifically, the system comes with predefined parts for image, text, note log, annotation, and streaming data. If these predefined parts are not sufficient for a particular application, you can define customized document parts using the document parts item-type classification. This enables you to define an item type that carries index data to support specific properties of the part you want to create. Content Manager then makes this a part that you can include in an item type you define with an item-type classification of documents. What, then, are the performance implications of choosing one item-type classification over another? Several trade-offs should be weighed here. Only the document model item-type classification is supported by the out-of-the-box clients, so unless you are designing and building customized clients, you should stick with the document classification. If you are building a custom client, the next trade-off depends on what the particular application demands. If you are integrating scanning systems that assume the document classification, stick with that. If the particular files and documents to be used are best modelled as multi-part documents, then the document model provides the simplest implementation. Otherwise, the items and resources model is generally more efficient: the documents model requires intermediate tables to hold the links between the document and its parts. These additional tables must be updated when documents are added or changed and must be traversed when fetching data, as compared to items and resource items. All of this costs processing time on the Library Server, and there are consequent performance advantages if it can be avoided.
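The link-table overhead of the documents model can be sketched as follows (file names and table layout are invented for illustration):

```python
# One set of indexing data...
document = {"item_id": "DOC001", "customer": "J. Smith"}

# ...linked to its resource parts through an intermediate table, which
# is the extra structure the items/resources model avoids.
part_links = [
    {"item_id": "DOC001", "part": "image",      "object": "claim.tif"},
    {"item_id": "DOC001", "part": "annotation", "object": "claim.ann"},
    {"item_id": "DOC001", "part": "notelog",    "object": "claim.log"},
    {"item_id": "DOC001", "part": "ocrtext",    "object": "claim.txt"},
]

def objects_of(item_id):
    """Fetching a document means traversing the link table as well."""
    return [p["object"] for p in part_links if p["item_id"] == item_id]

objs = objects_of("DOC001")
```

Every add, change, or fetch of a multi-part document touches the link table in addition to the document and part rows.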

Database indexes As discussed earlier, indexes are vital to Content Manager performance, particularly as the number of items stored in the system increases. When you define a new item type, the appropriate indexes on the new tables are created



automatically. Indexes on user-defined item attributes are not generated automatically; you must define them manually. For attributes that are frequently used in regular searches, define indexes to improve response time and reduce Library Server resource usage for queries over those attributes. The system administration client provides a point-and-click interface to define indexes. You can choose one attribute or a combination of attributes to create each index, and you can specify the order of the index, either ascending or descending. Use combinations of attributes to create the index where users typically use that combination of attribute values to find items in the item type. Content Manager cannot anticipate the way your application will choose to search for the data. Application queries are embedded in the compiled programs as dynamic SQL, which enables the database optimizer to calculate the best access path to the data for your queries at run time. The database therefore starts using your indexes for queries immediately after the indexes are created, assuming that the optimizer chooses to use them. The optimizer might decide that the indexes do not help resolve the queries, either because they are on the wrong columns or because the optimizer is working with outdated statistical information about the table. It is always a good idea to run runstats soon after you create an index so that the optimizer has the best information. If you do run runstats, it is also a good idea to rebind the database packages as well: the optimizer might be able to use the new index to improve access speed for other SQL and might improve other access paths based on the new statistical information. Unless a major performance problem is affecting most users at the time, it is best to create indexes, run runstats, and rebind at times of low user activity, because these utilities can have a negative impact on server performance while they are running. See 8.2.5, “Create attribute indexes” on page 177 for details about creating indexes for attributes. See 8.3.3, “Keeping database statistics and execution plans up to date through runstats/rebind” on page 183 for more about runstats and rebind.
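A sketch of the runstats-and-rebind sequence might look like the following; the table name is a placeholder, and the exact runstats options should be checked against your DB2 version before use:

```python
def maintenance_commands(dbname, tables):
    """Compose the runstats-then-rebind sequence for a quiet period.

    Returns shell command strings; nothing is executed here.
    """
    cmds = ["db2 connect to {0}".format(dbname)]
    for table in tables:
        cmds.append("db2 runstats on table {0} "
                    "with distribution and detailed indexes all".format(table))
    cmds.append("db2rbind {0} -l rebind.log all".format(dbname))
    return cmds

# Placeholder attribute table name for illustration:
cmds = maintenance_commands("ICMNLSDB", ["ICMADMIN.EXAMPLE_ATTR_TABLE"])
```

Running the composed commands during a low-activity window keeps the statistics-gathering cost away from end users.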

Multiple item types Many Content Manager applications tend to work actively on an item while it is relatively new to the system. At some point, that work completes and the item is archived. For these types of applications, it can be a performance advantage to separate the active documents into a separate item type from those that have already been dealt with (archived). Sooner or later the archived documents will out-number the active documents, and all being well, the number of active documents should plateau or at least have relatively slow growth. The number of



archived documents will continue to grow at about the rate at which new documents enter the system. This separation can provide performance benefits in several ways. The active-documents item type can be tuned for maximum performance. It might be feasible to create additional indexes on this item type that would be too costly on all of the data. Active data tends to be updated more frequently, which can lead to fragmentation of the database tables; a smaller table takes less time to reorganize. Because these tables are smaller and more frequently accessed, they can be retained in system memory. The item type can be separated into its own table space and given a dedicated buffer pool. Poor queries that cause table scans will not have as dramatic an effect on performance, because these tables are of limited size. Note that information from previous versions of an item is not transferred when the item type is changed.

5.5.4 Document routing Document routing is a fast and efficient method of managing document-centric work flows. Because it is built into the Content Manager system, no additional program layers are required to connect to the work flow and synchronize operation. If document routing offers enough function to meet your business requirements, then it is the simplest and most efficient work flow tool to use. Note that the work flow APIs and user exits enable you to extend the functionality of document routing if required. For example, parallel routing can be emulated by sending the same item through two different processes at the same time.

5.6 Resource Managers How we use Resource Managers can play an important part in overall system performance. The Resource Manager provides the resource object storage management functions. It receives, stores, and sends images and document files from and to the client applications. New in Version 8 is the use of standard Web ports and protocols to send and receive requests. The Resource Manager acts like a Web server, serving resource objects on request; however, the request must carry a valid token supplied by the Library Server. Behind the scenes, the Resource Manager is a Java application running in a Web application host, WebSphere Application Server. It holds control data in the Resource Manager database and stores the resource objects on file systems connected to the server and, optionally, in a TSM backup archive.



Note: The Resource Manager uses its own database connection pooling methods, not the standard WebSphere database connections. Upcoming chapters cover tuning the configuration parameters for the database, WebSphere, and TSM. First, we should consider configuration and design options in the Resource Manager that can affect performance. As in the case of the Library Server, you use the system administration client or the administration APIs to set and change configuration parameters, turn functions on or off, and define storage to the system.

5.6.1 Resource Manager properties To reach the Resource Manager properties, open the system administration client. From the left-side pane, expand the Library Server (ICMNLSDB) → Resource Managers → the Resource Manager database (RMDB). Right-click the Resource Manager database and select Properties to open the Resource Manager Properties window, or select Staging area to open the Staging Area panel.

Properties In the Resource Manager Properties panel, you can set the details stored in the Library Server about the Resource Manager, including communications details, logon details, token duration, the LAN cache switch, and a disable switch. The Library Server and the system administration client use this connection information to make connections to the Resource Manager. The connection information includes host name, user ID, password, protocols, port numbers, and access data used to log on.

Token duration Token duration specifies how long the security token, provided to a client to access a document on this Resource Manager, will remain valid. The default is 48 hours. A client application may continue to access the resource object without reference to the Library Server for the period that the token remains valid. If your application frequently re-reads the same item on the Resource Manager, you can store the item’s URL and go directly to the Resource Manager for the item for as long as the token remains valid.
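The client-side decision can be sketched as follows (illustrative only; the real token carries more than a timestamp):

```python
def token_still_valid(issued_at, now, duration_hours=48):
    """While the token is valid, the client can re-fetch the object
    directly from the Resource Manager URL without asking the
    Library Server again. Times are in seconds; 48 hours is the
    default token duration."""
    return (now - issued_at) < duration_hours * 3600

issued = 0
reuse_url = token_still_valid(issued, issued + 47 * 3600)        # still valid
ask_library_server = not token_still_valid(issued, issued + 49 * 3600)
```

Caching the URL for the token's lifetime saves a round trip to the Library Server on every re-read of the same item.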

LAN cache LAN cache enables objects to be retrieved from the user’s default Resource Manager if a copy is cached there. The user’s default Resource Manager should be the physically closest Resource Manager or the Resource Manager with the



highest bandwidth network connection to the user. If the requested resource object is not cached on the default Resource Manager, the original is sent to the requestor from the Resource Manager on which it is stored, and a second copy is sent to the user's default Resource Manager. If the user or a colleague in the same location requests that resource object again, it can then be served from the user's local (default) Resource Manager. LAN cache can provide a big benefit where there are relatively slow or busy network links between locations and users need the same item multiple times. If an item is accessed only once, however, unnecessary network load is generated by sending a second copy to the default Resource Manager, where it is never used. In earlier versions of Content Manager, there existed a function to “pre-stage”, or copy, a list of items from one Resource Manager to the staging area or LAN cache of another Resource Manager, so that objects could be copied to the cache before users requested them. This function is no longer supported. The same effect can be achieved using replication and a specialized client that finds the nearest copy.

Staging area The staging area is the cached disk space used to hold copies of objects stored permanently elsewhere in the system. It holds files recalled from TSM and, when LAN cache is enabled, files from other Resource Managers. Unlike previous versions of Content Manager, new resource objects are written directly to the first storage location in that item's storage management hierarchy; that is, they now bypass the staging area. This significantly reduces the internal movement of data inside the Resource Manager and leaves the staging area as a pure cache. Also new in Content Manager Version 8.2 is the ability to divide the cache into multiple subdirectories and to set a limit on the largest file cached there. Subdirectories reduce the number of files per directory, which helps avoid file system limitations when a directory holds a large number of files (10,000 to 40,000+). Controlling the maximum size of cached resource objects means that we can avoid flooding the cache with very large objects: a few very large files can force the deletion of a large number of smaller files to make space. The staging area is managed by the system to a fixed size. Items are stored in other locations according to a time period specified in the management class assigned to the item. Items remain in the staging area only while there is sufficient room. When the staging area grows too large, the system deletes items from it in order, starting with the least recently used. This function is carried out by the purger process.



In the Staging Area panel, you can specify the size of the staging area and the thresholds by which the size of the storage is controlled. The thresholds are checked by the purger process to determine when to start and stop purging. At its scheduled start time, the purger determines whether the current size of the staging area exceeds the “Start purge when size equals” value (a percentage of the maximum staging area size). If it does, the purger removes the least recently referenced resource objects until the size is reduced to the “Stop purge when size equals” value (also a percentage of the maximum size). When setting these values, the aim is to keep the staging area as full as possible while always allowing room for new items to be added. Files in the staging area that came from TSM indicate that they might have been migrated before users had finished using them. Files from LAN cache could indicate that usage patterns do not match the item type, user profile, or Resource Manager assignment expected by the system design. Finding an item in the cache usually provides a significant performance improvement for the end user over retrieving it from near-line media through TSM or from a remote Resource Manager. The larger the staging area, the more likely resource objects are to be found there, and thus the better the performance; however, this sacrifices storage for other activities. The staging area will overrun the maximum size if required, as long as there is space on its file system. Unless the file system on which the staging area resides is close to 100% utilized, set the “Start purge when size equals” threshold close to 100% of the staging area size, and make the staging area as large as you can without interfering with other server storage requirements, leaving some spare. The “Stop purge when size equals” threshold should be tuned as well as you can to the processing of these exceptions.

For example, the majority of items in the staging area might have been recalled from TSM because of customer inquiries about prior transactions that are handled on the spot. They are unlikely to be re-referenced, so there is no point in retaining them in the cache. Conversely, files could be in the staging area because of periodic reviews as part of some business control process that might take several days or weeks to complete. In this case, it is worth keeping as much of the old data in the cache as possible, by setting the “Stop purge when size equals” threshold closer to the maximum value. If access is unpredictable, set the stop threshold to 80%.
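The purge cycle can be sketched as follows (a simplification: the real purger works on files on disk, not an in-memory list):

```python
def purge(files, max_size, start_pct, stop_pct):
    """One purger pass over the staging area.

    files: list of (last_used, size) tuples.
    If the total size exceeds the start threshold, keep the most
    recently used files that fit under the stop threshold and drop
    the least recently used rest.
    """
    total = sum(size for _, size in files)
    if total <= max_size * start_pct / 100.0:
        return files  # under the start threshold: nothing to do
    target = max_size * stop_pct / 100.0
    kept, running = [], 0
    for f in sorted(files, key=lambda f: f[0], reverse=True):  # newest first
        if running + f[1] <= target:
            kept.append(f)
            running += f[1]
    return kept

# 100 GB staging area, start purging at 95%, stop at 80%:
files = [(1, 30), (2, 30), (3, 30), (4, 10)]   # (last_used, size in GB)
survivors = purge(files, 100, 95, 80)
```

With these numbers the oldest 30 GB file is dropped, leaving 70 GB of more recently used data in the cache.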

5.6.2 Configuration Configuration within the Resource Manager has performance ramifications. You can use the system administration client or the administration APIs to set and change configuration parameters, turn functions on or off, and define entities to the system.



To reach the Resource Manager Configuration, open the system administration client. From the left-side pane, click the Library Server (ICMNLSDB) → Resource Managers → the Resource Manager database (RMDB). From the right-side pane, double-click Resource Manager Configuration.

Definition tab The Resource Manager Configuration’s Definition panel sets the timeout values for Library, client, and purger.

Time out - Library, client, and purger These values specify how long the Resource Manager waits for a response. If no response is received within that time frame, the Resource Manager assumes that the component is no longer able to communicate and it should start the appropriate recovery processing. Set these values high enough to avoid “false positives”; that is, allow the Library Server, client, and purger sufficient time to respond when they and the network connections are under load. However, do not set them so high that users have to wait too long before the system recognizes an error condition.

Cycles tab The Cycles panel sets the time period between executions of the background Resource Manager tasks. In general, because no user is waiting on the completion of these tasks, the performance considerations relate to minimizing their impact on the system. On the Cycles panel, set the frequency at which these tasks run. The Schedule panels for the migrator and replication specify when those functions run; to have them run at all, the functions must be enabled elsewhere. For Resource Managers running on Windows 2000, each of these processes is installed as a system service and can be started and stopped from the standard Windows 2000 services control window. To start these tasks selectively on AIX, enter the following command:

  /etc/rc.cmrmproc start dbname rmappname application

In this example:
- dbname is the database name on which these processes are running.
- rmappname is the name of the Resource Manager Web application.
- application is the Resource Manager stand-alone process that you want to start (RMMigrator, RMPurger, RMReplicator, or RMStager).



For example, the command /etc/rc.cmrmproc start rmdb icmrm RMMigrator starts the Resource Manager migrator on the database rmdb, with icmrm as the name of the Resource Manager Web application. Start only the processes that you need. For example:
- Unless you are running VideoCharger on this server, it is not necessary to run the RMStager.
- If you are not running TSM and LAN cache is not turned on for this Resource Manager, it is not necessary to run the RMPurger.
- If you are not replicating resource objects from this Resource Manager, it is not necessary to run the RMReplicator.

Do start the migrator even if you are not running TSM: the migrator is responsible for deleting items and tidying up after failed transactions. See the migrator discussion below.

Purger
The purger frees space in the staging area by deleting old copies of items. In the Purger field, type or select the amount of time in hours and minutes that must elapse before the purger begins purging (if necessary). Run the purger often enough to avoid running out of space in the staging area file systems.

Threshold
In the Threshold field, type or select the amount of time in hours and minutes that must elapse before the migrator checks the capacity of the Resource Manager disk volumes to ensure that they are below the capacity threshold that was set when defining the volumes. New in Content Manager Version 8.2 is the ability to define migration based on available capacity on Content Manager managed volumes, in addition to time. When defining a new volume to a Resource Manager, you can specify the capacity threshold at which migration will begin. If that threshold is set below 100%, the migrator will check the capacity of this volume. If the used space on the volume is greater than the threshold value, the migrator will begin to move resource objects to their next storage location. The next storage location is defined by the workstation collection definition for the resource object. It is the next available volume in the storage group for the collection that is of the storage class defined by the next step in the migration policy for that collection. In this case, migration occurs earlier than is defined by the migration policy. Threshold migration works like normal migration, except that it occurs when the volume space used exceeds the threshold. The objects are migrated to the same place as during normal migration.

Chapter 5. Designing and configuring for performance

125

First it performs normal migration (action date expired), then it migrates according to least recently used and nearest action date until the percentage of volume used is under the threshold.
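The two-phase selection just described (action-date-expired objects first, then least recently used until usage falls below the threshold) can be sketched as follows. This is an illustrative model with invented object records and field layout, not the actual migrator code:

```python
from datetime import date

# Hypothetical resource records: (object_id, size_mb, action_date, last_access)
objects = [
    ("obj1", 40, date(2006, 1, 1), date(2006, 1, 5)),   # action date expired
    ("obj2", 30, date(2006, 9, 1), date(2006, 2, 1)),   # not yet due, coldest
    ("obj3", 20, date(2006, 8, 1), date(2006, 6, 1)),   # not yet due, warmer
]

def select_for_migration(objects, used_mb, capacity_mb, threshold, today):
    """Phase 1: migrate everything whose action date has expired.
    Phase 2: if the volume is still over its threshold, keep migrating
    least-recently-used / nearest-action-date objects until it is not."""
    selected = []
    expired = [o for o in objects if o[2] <= today]
    pending = sorted((o for o in objects if o[2] > today),
                     key=lambda o: (o[3], o[2]))  # LRU, then nearest due date
    for oid, size, _, _ in expired:
        selected.append(oid)
        used_mb -= size
    for oid, size, _, _ in pending:
        if used_mb / capacity_mb <= threshold:
            break
        selected.append(oid)
        used_mb -= size
    return selected
```

With a volume 95% full and an 80% threshold, phase 1 alone brings usage under the threshold; with a 50% threshold, the sketch also migrates the coldest pending object.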

Stager
In the Stager field, type or select the amount of time in hours and minutes that must elapse before the stager checks whether the capacity threshold of the VideoCharger volume has been reached.

Migrator
The migrator is a function of the Resource Manager that moves objects to the next storage class according to their assigned migration policy. The migrator is also responsible for deleting from the file system the items that have been marked for deletion by users’ explicit action. The movement of items to other storage media is an important task that can add significant workload to the Resource Manager, both for its database and its I/O subsystem. Items become eligible for migration to the next storage location based on a date field (the OBJ_ACTION_DATE column of the RMOBJECTS table in the Resource Manager database). This date is calculated when the item is stored, based on its assigned management class, and is recalculated every time the item is eligible for migration. The Content Manager Version 8 administration tool does not allow you to create a migration policy whose last assigned storage class is anything other than “forever.” This reduces the server load that migration previously caused when continuously updating the next action date of resource objects that had no next storage class but were due to be migrated. Previously, it was possible to create a migration policy that stored items to disk for only 30 days. After 30 days, the migrator would set the next action date for all items stored on the first day to 30 days into the future. So every day the migrator updated 1/30th of the items in the system, with no change in the actual storage of the items. In the current version of Content Manager, this does not happen. If necessary, you can use SQL to change the next action date so that items stored forever on a particular storage class become eligible to be migrated. The next action date field is not a timestamp but a date field. All objects due for migration on any particular day become eligible to migrate at 12:00 AM that day.
As such, it is usually best practice to migrate all items outside of prime business hours at one go instead of letting the migrator run until the following day. The query to determine items due for migration can take a long time. This query is run for each batch of files until there are no more qualifying items. By setting the batch size larger you can reduce the number of times the query is run each day and speed up the migration process. The trade-off here is that your
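The batch-size trade-off can be made concrete with a small sketch; the item counts and batch sizes here are made-up illustrations, not product defaults:

```python
import math

def migration_query_runs(items_due, batch_size):
    """The 'items due for migration' query runs once per batch, plus a
    final run that finds no more qualifying items."""
    return math.ceil(items_due / batch_size) + 1

# Hypothetical day with 100,000 items due: a 10x larger batch size cuts
# the number of query executions by roughly 10x, but the Resource
# Manager JVM must be sized to hold the larger batch in memory.
runs_small_batch = migration_query_runs(100000, 1000)
runs_large_batch = migration_query_runs(100000, 10000)
```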



Resource Manager JVM needs to be large enough to hold and process the whole batch.
In Content Manager Version 8, the migrator is multi-threaded. This means that multiple jobs can be initiated by the migrator to move data. The migrator creates a thread for each volume from which data is being moved. To get concurrent processing, you need to have more than one volume defined in the storage class from which data is migrating, which might improve overall migration time. The migrator is also responsible for delete processing. Deletion of Resource Manager resource objects is an asynchronous activity. Resource deletion consists of three steps:
1. A Content Manager application deletes an item from the Library Server.
2. The Asynchronous Recovery Deletion Reconciliation utility marks the resource for deletion on the Resource Manager.
3. The Resource Manager migrator physically deletes the resource during scheduled running periods.
When the migrator is started but not scheduled to run, it periodically runs the Asynchronous Recovery utility. The Asynchronous Recovery utility’s purpose is to periodically restore data consistency between the Library Server and its Resource Manager. This process is necessary for the following reasons:
 To provide a rollback function for failed transactions. The Library Server and the Resource Manager can become inconsistent in the event that the Resource Manager crashes or communications between the Information Integrator for Content (formerly known as EIP) toolkit and the Resource Manager fail. The inconsistent state can be reconciled with the Asynchronous Transaction Reconciliation utility.
 To complete scheduled deletion of items that were designated for deletion. Note that this occurs only when the Asynchronous Recovery utility runs and the migrator is scheduled to run.
 To delete tracking table records (for both the Library Server and the Resource Manager) for transactions that are determined to have completed successfully. As each create or update transaction for resource items completes, a record is placed into the Library Server database. The number of these records, and the database table that holds them, grows over time. The records are removed by the Transaction Reconciliation utility. It is important to run the utility on all of the Content Manager Version 8 Resource Managers.
The migrator should be started even when it is not scheduled to run, because it helps to maintain the transaction tables.
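The tracking-table cleanup can be pictured with a small model; the record layout and status values here are hypothetical, not the actual table schema:

```python
# Illustrative model: a record is written for each create/update of a
# resource item, and the reconciliation utility deletes records once
# both the Library Server and the Resource Manager agree the
# transaction completed successfully.
tracking = [
    {"tx": 1, "library_server": "committed", "resource_manager": "committed"},
    {"tx": 2, "library_server": "committed", "resource_manager": "in-flight"},
    {"tx": 3, "library_server": "aborted",   "resource_manager": "aborted"},
]

def prune_completed(records):
    """Keep only records still needed for recovery processing."""
    return [r for r in records
            if not (r["library_server"] == "committed"
                    and r["resource_manager"] == "committed")]
```

Without the utility running, the pruned list would only ever grow, which is why the migrator should stay started even when migration itself is not scheduled.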



Items are checked out from the Library Server during the migration process to control the movement transaction. This means that the Library Server will be involved in the migration as well.

Replication
Replication copies resource objects to a workstation collection on the same or another Resource Manager. If the primary Resource Manager becomes unavailable, a replica of a required resource can be retrieved from the other Resource Manager. This provides increased availability, but it should not take the place of normal backups. In the Replicator field, type or select the amount of time in hours and minutes that must elapse before the system checks whether replication is necessary. Similar to migration, replication involves moving a potentially large amount of data at a scheduled time. Unlike migration, items can become eligible for replication throughout the day. Typically, customers want the replicas copied as soon as possible after the original is stored. Another complicating factor for replication is that it frequently involves moving data over relatively slow network links. Replication is also multi-threaded. A number of threads are pooled for each target Resource Manager. Each replication request is assigned to one of these worker threads. Again, the item is checked out for the duration of the replication process. You specify the maximum number of replicator worker threads as ICM_MAX_REP_THREADS (value between 1 and 50, default 10) in the icmrm.properties file. Replication uses significant processing power on the Resource Manager. It is recommended that you run replication almost continuously. If it is run only after the prime shift, the system might not be able to keep up with the load. If necessary, schedule replication to pause during a few of the peak hours of the day. In Windows, the process runs as a service: ICM Replicator (RMDB). For AIX, rc.cmrmproc can be used to manually or automatically start the service. The replication process must run as the same user that runs the application server. The scripts start a Java virtual machine using the WebSphere Application Server JVM and the deployed Resource Manager class files.
You might need to modify the JVM memory settings to allow for a larger virtual machine size on very active systems. If the Resource Manager servlet is cloned, only one clone should run the Replication process.
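For example, the relevant entry in icmrm.properties might look like this; the property name, range, and default are as stated above, while the chosen value and comment style are illustrative:

```properties
# icmrm.properties - replication worker thread pool size per target
# Resource Manager (valid range 1-50; the default is 10)
ICM_MAX_REP_THREADS=20
```

Raising the value can help replication keep up on a well-provisioned server, but remember that each worker holds its item checked out for the duration of the copy.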



5.6.3 System-managed storage
Device Managers, Storage Classes, Storage Systems, Storage Groups, Migration Policies, and Workstation Collections are the components used to build the storage management policies that are applied to each item or part stored in a Content Manager system. For convenience, we deal with the entire process as a group. Much of the design and terminology of system-managed storage (SMS) goes back a very long time, at least in computing terms. The principle behind SMS is that storage is assumed to be both more expensive and faster the closer it is to the CPU. The purpose of the system-managed storage components is to give you the flexibility to configure the system to keep active data on faster media and move inactive data to slower, cheaper media. The system’s ability to achieve this depends very much on the designer’s ability to predict the way in which data will be accessed. In general, time is a very good indicator of the way in which files will be accessed. For forms-processing systems, the very nature of the application is that there is a certain target time for handling the forms, and almost every form will be actively used during that period. At completion of the forms process, the document is rarely accessed. For document-management systems, the period of high activity is less well defined but typically occurs in the early part of the document life cycle. During creation and editing, there are frequent accesses by the author or the editor as the document is prepared for release. Most documents have high read activity soon after release, then the read activity dies down. Rich media and Web application information typically have a similar pattern: lots of activity during preparation, and high read access tapering off over time. In this environment, there can also be archiving activity to remove items from the public access area when they become outdated.
Archive systems usually do not have this early life period of activity. This activity has already occurred in the archiving application.

Time-based migration
In setting the SMS configuration, you need to match the access patterns with the speed of the storage media. Keep the items that are likely to be accessed on fast media and move the inactive data to slower or near-line media. Wherever possible, spread the data in your disk-based storage classes over multiple volumes, and use the maximum number of subdirectories to reduce the number of files per directory.



SMS fills each volume assigned to a storage class in a storage group, then moves to the next volume. This means that if you have multiple disk volumes dedicated to a storage class in a workstation collection, each volume will be filled in turn. SMS enables you to share disk volumes between storage groups and workstation collections, so that items from multiple workstation collections can exist on the same disk volumes. To spread the workload over the disk drives, you should dedicate the drives to a particular workstation collection. If you need to share them, create multiple storage groups and change the order in which the volumes are defined.
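The fill-each-volume-in-turn behavior can be modeled in a few lines; the volume records and sizes here are invented for illustration:

```python
def next_volume(volumes):
    """Volumes assigned to a storage class in a storage group are
    filled in turn, not striped: SMS writes to the first volume that
    still has free space. Each entry is (name, used_mb, capacity_mb)."""
    for name, used, capacity in volumes:
        if used < capacity:
            return name
    return None  # storage group full; an overflow volume would be needed

volumes = [("vol1", 500, 500), ("vol2", 120, 500), ("vol3", 0, 500)]
```

Because vol1 is full, writes land on vol2 until it too fills, which is why sharing volumes between collections concentrates I/O rather than spreading it.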

Space-based migration
Content Manager Version 8.2 enables you to define migration according to capacity as well. Setting a volume threshold below 100% causes the migrator to monitor the volume’s free space and potentially move data earlier than the migration policy predicts. If you are using migration and you have a good understanding of the usage patterns, set this value to 100% and use overflow volumes to avoid running out of storage. Alternatively, if you want to keep as much data on the Content Manager managed disk as possible, set the migration period long (but not forever) and the volume capacity threshold below 100%. Define a next storage class in the migration policy as the destination for resource items when they are eventually migrated. The migration period for the Content Manager disk storage class (or classes) can be set long enough that any access after this time is very unlikely. Resource objects will be on disk for this period and get fast access. Set the migration thresholds on the volumes for the Content Manager managed disk under 100% to avoid running out of space. The resource items will reside on disk for up to the migration period, as long as you might possibly need them, unless you begin to run out of disk capacity before the migration period. The volume threshold then becomes the trigger for migration; old data moves to the next storage location, maintaining some free space on disk for your new data.

5.6.4 Logging
There is a Resource Manager activity log file as well, but it is not controlled by the administration panels. The method and level of detail recorded in this log are specified in the icmrm_logging.xml file, which is located in the following directory:
.../Websphere/AppServer/installedApps/<node name>/<Resource Manager name>.ear/icmrm.war/
In this syntax:
 <node name> is the WebSphere Application Server node name. This is usually the server host name.



 <Resource Manager name> is the name of the Resource Manager Web application, which is icmrm by default. It is followed by .ear as the directory name.
It might be necessary to increase the level of detail recorded in the log to resolve problems with the Resource Manager. As mentioned earlier, additional logging involves additional workload on the server. The system uses the Log4J standard logging utility, and by default it logs asynchronously. In the logging configuration file, you have the option to specify that logging is done directly to a file. This too can have an impact on system performance. In one of our scenarios, a pure load test run with debug-level tracing directly to a file dropped system throughput by more than half compared to the default logging settings.
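For illustration, an asynchronous Log4J 1.x configuration of the kind described might look like the following fragment. The appender names, file name, and layout here are hypothetical (the shipped icmrm_logging.xml defines its own appenders), but the structure shows the asynchronous buffering that the default settings rely on:

```xml
<!-- Hypothetical Log4J 1.x fragment. The AsyncAppender buffers log
     events so servlet threads do not block on disk I/O; pointing the
     root logger at the file appender directly, with the level raised
     to DEBUG, is the kind of configuration that halved throughput in
     the scenario described above. -->
<appender name="RMFILE" class="org.apache.log4j.RollingFileAppender">
  <param name="File" value="icmrm.logfile"/>
  <layout class="org.apache.log4j.PatternLayout">
    <param name="ConversionPattern" value="%d %-5p [%t] %m%n"/>
  </layout>
</appender>
<appender name="RMASYNC" class="org.apache.log4j.AsyncAppender">
  <appender-ref ref="RMFILE"/>
</appender>
<root>
  <priority value="warn"/>
  <appender-ref ref="RMASYNC"/>
</root>
```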

5.6.5 Move Resource Manager storage area
Move the Resource Manager storage areas to a different disk to achieve better I/O and increase performance on a data-loaded system. The instructions to perform this action are the same as for replacing or repartitioning a hard drive. These instructions can be found in the IBM Content Manager for Multiplatforms System Administration Guide, SC27-1335. Plan ahead: you can avoid this kind of procedure if, at installation time, you plan for growth of your system. This task includes many steps that have to be performed by an experienced DBA because of the data involved. This procedure can significantly improve general performance because it distributes the load and I/O capacity across disks. If you are experiencing I/O contention, it is a good way to improve system performance and relieve the contention. You can check for I/O contention on AIX using topas: look at the utilization of the disk where the Resource Manager storage area is located. If it is constantly near 100% utilization, you have an I/O contention problem.

5.7 Network
The network plays an important part in a system’s response time and performance.

5.7.1 The triangle or “inverted V”
The data flows between the client, Library Server, and Resource Manager changed dramatically in Content Manager Version 8. The client became the central cohesive element in fetching resource objects from the Resource Manager. In general, there is very little communication between the Library Server and the Resource Manager. The client communicates with the Library



Server to find and request access to data. When the Library Server authorizes the access, it sends a key (a token) related to the asset directly back to the client, unlike previous Content Manager implementations. The client then passes the key (token) to the Resource Manager to request the resource object (Figure 5-1).

[Figure: the System Administration Client sends a search to the Library Server (Web application server and database) and receives a token; the client then presents the token to the Resource Manager, which returns the object.]

Figure 5-1 Data flow triangle

5.7.2 Physical locations
One aspect of scalability and performance that has a major impact on design is the support for multiple Resource Managers in the same system. Additional Resource Managers can spread the I/O and network load involved in storage and retrievals. If installed at physically remote sites, this can mask wide area network delays. The Resource Managers can be located next to users to potentially reduce the network traffic and delays associated with moving a relatively large amount of data across a wide area network. A customer with multiple sites can run a single Content Manager system with a central Library Server for a single point of control and have distributed Resource Managers at each location. Users can store and retrieve their data to and from their local server. All data is available to all authorized users throughout the system, but the frequently accessed data is located on a nearby server.



For example, a global organization has offices in Chicago, Munich, and Singapore. Almost all work done in Munich revolves around Munich’s customers and their interactions with the organization. All the information about the customers, their interactions, and the local staff can be held in Munich. The Chicago office holds similar information about its own customers and staff, as does Singapore. For this global organization, it is rare that Munich-based customers would have dealings with the Singapore office. If they do, the system still can support them. The Singapore staff can access the Munich data, but response time might be slower because of the requirement to move large images or document files over the relatively slow network links to the Munich office. There are three ways to configure Content Manager to store data in a particular location:
 Set the storage location for each item as it is stored, using APIs in your customized client application.
 Create specific item types for each location and have the item types set the default Resource Manager.
 Set the system to get the default Resource Manager based on the user storing the item, and for each user, set the default Resource Manager to be the one in their physical location.
In Version 8.2, Content Manager reintroduces the LAN Cache facility. LAN Cache enhances this distributed operation by taking another copy of remote objects and storing it in the requestor’s local Resource Manager staging area. Subsequent requests for that resource object at that location are served by the local Resource Manager. In the global organization example above, after the first time Singapore requests the Munich data that resided only in Munich’s Resource Manager, any subsequent request to access that data will be served by the Singapore Resource Manager, with the data coming from the Singapore Resource Manager staging area.

5.7.3 Network and resource objects
Moving resource objects within a network affects network performance. On the other hand, network performance can have a big impact on overall Content Manager system performance. Compared to typical transactional data, documents or resource objects are, in general, relatively large: a database row update might be a matter of a dozen bytes and an insert might be up to 1000 bytes, whereas an image file is typically 50,000 bytes. The time taken to move this amount of data affects user response time. Every store and retrieve operation moves the document across the network (WAN or LAN). Consider the forms processing application referred to earlier in this chapter. If each item is stored once and retrieved and viewed in five workflow



steps, then the document must traverse the network between the client application and the server six times. If the resource image is 50 KB and we have to process 10,000 images per 8-hour day, then we can calculate the network traffic approximately as follows:
(50 KB * 10,000 images/day * 6 trips on the network) / (8 hours * 3600 seconds/hour) = approximately 100 KB per second = 800 Kb per second (average)
Although the work process might take weeks to complete, in order to keep a steady workload (in this example, no build-up of work in the system), the system needs to process the number of items equivalent to the number entering the system each day. This means, in our example where 10,000 items enter the system every day, the system needs to process 10,000 items every day to avoid a backlog. The calculation above is only approximate, because additional header and trailer information is appended by the HTTP response, and neither the request nor calls to the Library Server are considered. This does not seem to be much load when minimum LAN speed is 10 Mb per second; however, the network is not installed specifically for this system. All estimates for the network traffic have to take account of existing network traffic, and the effective throughput of a 10 Mb LAN is really only about 6 Mb. Another factor that affects the calculation above is the difference between average and peak utilization. Often, the end users’ work habits mean that the majority of the work is done in certain highly productive time slots. Some users might get a late start each morning by doing things other than “real” productive work, such as reading e-mail, getting coffee, conducting meetings, and doing other administrative tasks. Users might delay serious production work until mid-morning. This can present a peak system utilization demand during mid-morning. Lunch time also has an effect, as some users go early and some go late; the workload declines around the lunch break.
There might be another peak period in the afternoon after the late lunch. Finally, closing time also has an effect because some users go home early, and others stop using the system in preparation for closing up for the night or finishing other activities. This all means that peak load can be two or more times the average load. When considering network performance, batch processing and overnight processing must be included. Object replication and migration move large amounts of data over the network. Typically these background tasks can be done at any time; they are best deferred to times when user activities are low. Prefetching items during the off hours in preparation for the next day’s processing is another



effective way of reducing the network traffic during the day and improving system response time. Other network operations outside of Content Manager can also have a network impact. Activities such as file system backups shift large amounts of data across the network and can interfere with system response time or the elapsed time of the background operations. If network contention is severe, it could delay background activities into times of high user activity and cause user response time problems. Planning and scheduling background tasks are important to improve system performance and response time.
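The traffic estimate in the example above can be reproduced, and the average-versus-peak point quantified, with a short calculation; the peak factor of 2 and the roughly 60% effective LAN throughput are the figures used in the text:

```python
object_kb = 50          # typical image size from the example
images_per_day = 10000
network_trips = 6       # one store plus five workflow retrievals
seconds_per_day = 8 * 3600

avg_kb_per_sec = object_kb * images_per_day * network_trips / seconds_per_day
avg_kbits_per_sec = avg_kb_per_sec * 8

# Peak load of twice the average, against an effective throughput of
# about 60% of a 10 Mb/s LAN shared with other traffic, leaves much
# less headroom than the raw numbers suggest.
peak_kbits_per_sec = avg_kbits_per_sec * 2
effective_lan_kbits = 10000 * 0.6
```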

5.7.4 Network and Web clients
So far, we have discussed moving the resource objects directly to the user requesting them. This happens by default for client/server, or two-tier, configurations. In the default configuration of eClient, resource objects are sent by the Resource Manager to the Web application server, which then sends them to the browser client. On a single-server system this might work, but when the Resource Manager and the eClient are on separate servers, this can load the network unnecessarily. Consider our previous example of the global organization again. Suppose the Munich office establishes a Web interface to their system and encourages the whole organization to use it. If they use the default configuration, users in Chicago requesting their own resource objects via the Web will cause the files to go from the Chicago Resource Manager across the Atlantic Ocean to the Munich eClient application, only to be turned around and sent back to Chicago. The solution is to configure the eClient to send the resource object directly from the Resource Manager to the user’s browser. In our example, the resource request from the browser client in Chicago receives a URL pointing to the resource on the Chicago Resource Manager, so the resource object goes directly from the Chicago Resource Manager to the Chicago browser. When the resource object is sent directly to the browser, there is an additional complication: the browser becomes responsible for handling the particular MIME type of the resource object. In the default configuration, the eClient application on the mid-tier server converts a range of MIME types into browser-friendly formats. TIFF is not supported directly by Netscape or Internet Explorer, so the eClient converts TIFF to GIF or JPEG before sending the objects to the browser. When the objects are sent to it directly, the browser needs to handle them.
In this case, the eClient can be configured to send an applet to view a number of MIME types that are not supported by most browsers. Sending the files directly to the browser can enhance system performance because the workload of converting the file from one MIME type to another is removed from the eClient server. The machine on which the browser is running now



does the work of rendering the image in the applet and therefore requires sufficient resources to meet the response time expectations. There is a cost for this configuration. The applet must be downloaded from the eClient Web application server to the browser every time the browser is shut down and restarted to access a resource object that requires the applet. The applet is nearly 400 KB. This could add more network traffic than it saves for a large number of infrequent users.

5.7.5 Network and Library Server
Most API calls interact with the Library Server database. Every time there is an interaction, there are network delays and additional workload on the database. Avoid these as much as possible. To reduce calls to the Library Server database, assemble all parts of an item before adding it, and conversely, get from the Library Server (or the Resource Manager) only the data your application needs. If you are building a collection of child components, links, or parts, assemble the collection in the client application and add them all at once wherever possible.

5.8 Client applications
The Content Manager clients vary between particular Content Manager application styles and even between functions within any one system. Content Manager provides sample applications that demonstrate many functions that are available in a Content Manager system. The sample applications can also help developers in the use of the APIs and in creating custom Content Manager clients. The Windows client and the eClient, which provide production-ready application clients for Content Manager, are generalized applications that have a wide range of capabilities and applications. They are designed for maximum flexibility and assume very little about the setup of the Library Server to which they will connect. They do not take any shortcuts or assume the existence of components that might be required for any one particular implementation. As a consequence, they might not be as fast or efficient as a custom-built application client. A custom client application can take advantage of a predefined data model and make assumptions about the data that have been enforced elsewhere in the application. For example, the system designer knows that folders are created only when they contain a document and that documents cannot be deleted, so the application can assume that there will be a document contained by that folder and jump straight to it. The generic, out-of-box clients cannot make this assumption, because it might not be true in other customer situations. These clients must determine



whether there is a contained document and, if so, assume that there might be multiple documents and deal with them all. This can take a little additional time and create additional load on the server. To develop clients specific to a business requirement, we offer some general development rules for better performance:

- Call the right APIs. This might seem intuitive, but it is an important point. A wide range of function calls and designs is available to the developer. Refer to the code samples and test your application against a database of a size that is representative of the real system. In our testing scenario, we improved the performance of one particular function by choosing a different set of APIs and a different way to deliver that function. The response time was reduced by more than half, the biggest performance gain in the performance tuning process.
- Minimize the number of calls to the server. Refer to 5.7.5, “Network and Library Server” on page 136. This reduces the network delays and the workload on the servers. In addition, wherever feasible, cache data and definitions on the client.
- Request only the data required. Get only the indexing data if you do not need to view the document; get only the PID if you do not need the indexing data. When retrieving data from the Content Manager servers, get only the data you need by specifying the retrieval option most appropriate for the task at hand. See Table 5-4 for a list of options. The table is taken from the sample code supplied with the Content Manager connector for Enterprise Information Integrator.
- In three-tier designs, offload as much work as possible from the mid-tier servers to the end client. Where possible, allow the viewer applet to perform the image rendering on the client.
- Deliver items from the Resource Manager to the client directly wherever possible.

Table 5-4 Retrieve options (retrieval, option constant, explanation)

PID information only (DK_CM_CONTENT_IDONLY)
    Useful in query/search. The query returns a collection with empty DDOs but complete PID information.

Attributes only (DK_CM_CONTENT_ATTRONLY)
    Retrieves only the attributes.

Children (DK_CM_CONTENT_CHILDREN)
    Retrieves the child components.
Chapter 5. Designing and configuring for performance

137


One level (DK_CM_CONTENT_ONELEVEL)
    Retrieves one level of information: children and attributes.

Outbound links (DK_CM_CONTENT_LINKS_OUTBOUND)
    Retrieves outbound link information.

Inbound links (DK_CM_CONTENT_LINKS_INBOUND)
    Retrieves inbound link information.

Item tree (DK_CM_CONTENT_ITEMTREE)
    Retrieves the entire tree of information, including all metadata and attributes, children and sub-children, and links.

Item tree without links (DK_CM_CONTENT_ITEMTREE_NO_LINKS)
    Retrieves the entire tree of information, but saves time and increases performance by not checking for links.

No resource content (DK_CM_CONTENT_NO)
    If it is a resource item, does not retrieve the resource object content.

Resource content (DK_CM_CONTENT_YES)
    If it is a resource item, retrieves the resource object content.

Only resource content (DK_CM_CONTENT_ONLY)
    If it is a resource item, retrieves only the resource object content.

Check out item (DK_CM_CHECKOUT)
    In addition, checks out and locks the item.

Latest version (DK_CM_VERSION_LATEST)
    If versioning is enabled, retrieves the latest version, regardless of the specific version that the PID information specifies.

5.8.1 The OO API layer component of the client

New in Content Manager Version 8 is the use of the DB2 client as the connection from the client or mid-tier server to the Library Server. In many earlier implementations of Content Manager, the DB2 client was installed alongside the Content Manager client to support other functions required by the client. Now the Content Manager client can take advantage of the DB2 communications capability directly. The Content Manager client OO APIs convert the API calls to database calls, which removes the multiple conversions that occurred in the earlier Information Integrator for Content (EIP) client APIs.

138

Performance Tuning for Content Manager


The major redesign of Content Manager Version 8 takes advantage of database stored procedure calls, which reduce the amount of back-and-forth communication between the client or mid-tier application and the database. In a single stored procedure request, the client can initiate a long series of database queries and changes that might otherwise have taken many client-to-database requests. This provides a major performance boost, especially over low-bandwidth connections. Although this reduces the workload, it is still good practice to avoid any unnecessary client requests and data transfer to or from the servers.

To monitor the interactions with the database, you can run the DB2 CLI or JDBC traces from the client or mid-tier server and see the calls and response times. By default, these facilities are disabled and use no additional computing resources. When enabled, the trace facilities generate one or more text log files whenever an application accesses the appropriate driver (DB2 CLI or DB2 JDBC). These log files provide detailed information about:

- The order in which CLI or JDBC functions were called by the application
- The contents of input and output parameters passed to and received from CLI or JDBC functions
- The return codes and any error or warning messages generated by CLI or JDBC functions

DB2 CLI and DB2 JDBC trace file analysis can benefit application developers in a number of ways. First, subtle program logic and parameter initialization errors are often evident in the traces. Second, the traces might suggest ways of better tuning an application or the databases it accesses.

The configuration parameters for both the DB2 CLI and DB2 JDBC trace facilities are read from the DB2 CLI configuration file, db2cli.ini. By default, this file is located in the \sqllib path on the Windows platform and the /sqllib/cfg path on UNIX platforms. You can override the default path by setting the DB2CLIINIPATH environment variable.
On the Windows platform, an additional db2cli.ini file might be found in the user's profile (or home) directory if any user-defined data sources are defined using the ODBC Driver Manager. This db2cli.ini file overrides the default file. To view the current db2cli.ini trace configuration parameters from the command line processor, issue the following command:

db2 GET CLI CFG FOR SECTION COMMON



There are three ways to modify the db2cli.ini file to configure the DB2 CLI and DB2 JDBC trace facilities:

- Use the DB2 Configuration Assistant, if it is available.
- Manually edit the db2cli.ini file using a text editor.
- Issue the UPDATE CLI CFG command from the command line processor.

For example, the following command issued from the command line processor updates the db2cli.ini file and enables the JDBC tracing facility:

db2 UPDATE CLI CFG FOR SECTION COMMON USING jdbctrace 1

Important: Disable the DB2 CLI and DB2 JDBC trace facilities when they are not needed. Unnecessary tracing can reduce application performance and might generate unwanted trace log files. DB2 does not delete any generated trace files and appends new trace information to any existing trace files.

It is best to perform these traces on a client or mid-tier server that is physically separated from the Library Server and Resource Manager servers. These trace files are common to all client activities, so they can include server-side SQL and JDBC calls from the Resource Manager Web application to the Resource Manager database. For more about JDBC and CLI tracing, refer to the DB2 Information Center at:

http://publib.boulder.ibm.com/infocenter/db2luw/v8//index.jsp

The trace information enables you to see the stored procedures called by the application and the time it takes to service those requests. From this data, you can get a clear picture of the interactions with the Library Server and identify areas of poor performance.
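To tie the pieces together, here is a sketch of a db2cli.ini [COMMON] section with both trace facilities enabled. The keywords shown (Trace, TracePathName, JDBCTrace, JDBCTracePathName) are standard DB2 v8 CLI configuration keywords, but the path names are assumptions for illustration; verify the keywords for your release in the DB2 Information Center.

```ini
; Illustrative db2cli.ini [COMMON] section with CLI and JDBC tracing enabled.
; Remember to set Trace=0 and JDBCTrace=0 again when finished, and to clean
; up the generated trace files.
[COMMON]
Trace=1
TracePathName=C:\temp\clitrace
JDBCTrace=1
JDBCTracePathName=C:\temp\jdbctrace
```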



Chapter 6. Best practices for Content Manager system performance

In this chapter, we cover a variety of performance tuning best practice tips for Content Manager. We look at common performance tuning considerations; operating system tuning for AIX, Linux, and Windows; network tuning; Library Server and Resource Manager tuning; and application tuning. These best practice steps are assembled into a convenient checklist for you to follow.

© Copyright IBM Corp. 2003, 2006. All rights reserved.

143


6.1 Introduction

Tuning a Content Manager system involves tuning the operating system, DB2, WebSphere Application Server, and Tivoli Storage Manager. Some of the tuning and configuration can be done during or immediately after the system installation. It is very important to address operating system tuning before the system goes into production. Other tasks should be performed as part of regular maintenance. Proper performance tuning is more than a solution to a performance problem. With proper monitoring, performance tuning can be addressed during regular maintenance schedules, minimizing the performance impact on system users.

Attention: Most of the material covered in this chapter is extracted from the following publications, written by the Content Manager Performance Team:

- IBM Content Manager v8.3 Performance Tuning Guide
  http://www.ibm.com/support/docview.wss?uid=swg27006452
- CMv8.3 Performance Monitoring Guide
  http://www.ibm.com/support/docview.wss?uid=swg27006451
- CMv8.3 Performance Troubleshooting Guide
  http://www.ibm.com/support/docview.wss?uid=swg27006450

6.2 Best practices for system performance

This checklist includes the most common applicable performance tuning recommendations from the Content Manager Performance Team at the Silicon Valley Lab and from services practitioners in the field. The tasks presented here are the most common suggestions for an initial install. They are not all of the performance tuning tips available, but they are the most common and are considered the best practice steps. The tasks are covered in detail in their respective chapters in this book. We do not cover all operating systems and databases specifically; keep in mind that many of the principles and methodologies are still applicable. Figure 6-1 on page 145 shows the performance methodology over the life cycle of a Content Manager system that we followed when designing the best practices checklist. Even though the assumption is that the best practices are followed during an initial install and configuration, they can be implemented at any time.



After initial installation, implementing routine maintenance and monitoring can start the performance tuning process again. This practice helps to ensure that your Content Manager system is always tuned for performance, so that the user community is not affected by performance problems.

Figure 6-1 Content Manager best practices performance methodology

(The figure shows the life-cycle flow: Plan for Performance/Sizing -> Design and Configure for Performance -> Tune Operating System -> Tune DB2 (Database) -> Tune WebSphere Application Server -> Tune Tivoli Storage Manager, supported by ongoing Monitoring, Effective Disk I/O, Routine Maintenance, and Optimize Database activities.)

6.2.1 AIX

Table 6-1 provides a line-item overview of the steps to be followed, with links to the detailed descriptions in their respective chapters.

Table 6-1 Best practice checklist in an AIX environment

- Plan for performance: Chapter 4, “Planning for performance” on page 87
- Design and configure for performance: Chapter 5, “Designing and configuring for performance” on page 99

Chapter 6. Best practices for Content Manager system performance

145


- Adjust maximum number of processes: 7.2.1, “Adjust maximum number of PROCESSES allowed per user” on page 152
- Use JFS2 and EJFS for logical volumes and file systems: 7.2.2, “Use JFS2 and EJFS for logical volumes and file systems” on page 153
- Configure DB2 instance owner's ulimit values: 7.2.3, “Check the default values in ulimit” on page 154
- Set all machines to full duplex: 7.2.5, “Set all machines to full duplex” on page 155
- Set MTU size to the maximum supported by the LAN: 7.2.6, “Set MTU size to the maximum supported by LAN” on page 155
- Configure effective disk I/O usage: 7.2.4, “Configure disk I/O for effective usage” on page 155
- Spread DB components over multiple disks: 8.2.1, “Spread the database components over multiple disks” on page 173
- Create separate instances during installation: 8.2.3, “Create separate instances during installation” on page 176
- Set default database location: “Change the database default location” on page 173
- Set default database log location: “Change log file location” on page 174
- Initial parameter tuning for Library Server database: 8.2.4, “Initial parameter tuning for Library Server database” on page 176
- Increase default buffer pool if more than 1 GB RAM: 8.5, “Buffer pool tuning” on page 196
- WebSphere Application Server: tune Java heap size: 9.6.1, “Initial and maximum heap size for the JVM” on page 273
- Tune TSM buffer pool size: 10.2.1, “Self-tune buffer pool size (BUFPOOLSIZE)” on page 277
- Initial index configuration: use runstats/rebind: “Database indexes” on page 118
- Regular maintenance: use snapshots: 8.3.2, “Monitoring system performance using DB2 snapshot” on page 181



- Regular maintenance: use runstats/rebind: 8.3.3, “Keeping database statistics and execution plans up to date through runstats/rebind” on page 183
- Regular maintenance: use reorgchk/reorg: 8.3.4, “Reorganizing tables through reorg” on page 184

6.2.2 Linux

Table 6-2 provides a line-item overview of the steps to be followed, with links to the detailed descriptions in their respective chapters.

Table 6-2 Best practice checklist in a Linux environment

- Plan for performance: Chapter 4, “Planning for performance” on page 87
- Design and configure for performance: Chapter 5, “Designing and configuring for performance” on page 99
- Set limits: 7.3.1, “Configure appropriate ulimit” on page 157
- Configure shared memory: 7.3.2, “Configure shared memory” on page 157
- Configure semaphores: 7.3.3, “Configure semaphores settings” on page 158
- Configure message queues: 7.3.4, “Configure message queues” on page 158
- Tune network: 7.3.5, “Tune network” on page 159
- Tune file systems for external storage: 7.3.6, “Tune file systems for external storage” on page 160
- Configure effective disk I/O usage: 7.2.4, “Configure disk I/O for effective usage” on page 155
- Spread DB components over multiple disks: 8.2.1, “Spread the database components over multiple disks” on page 173
- Create separate instances during installation: 8.2.3, “Create separate instances during installation” on page 176



- Configure default database location: “Change the database default location” on page 173
- Change database log location: “Change log file location” on page 174
- WebSphere Application Server: configure Java heap size: 9.6.1, “Initial and maximum heap size for the JVM” on page 273
- Configure TSM buffer pool size: 10.2.1, “Self-tune buffer pool size (BUFPOOLSIZE)” on page 277
- Initial index configuration: use runstats/rebind: “Database indexes” on page 118
- Regular maintenance: use snapshots: 8.3.2, “Monitoring system performance using DB2 snapshot” on page 181
- Regular maintenance: use runstats/rebind: 8.3.3, “Keeping database statistics and execution plans up to date through runstats/rebind” on page 183
- Regular maintenance: use reorgchk/reorg: 8.3.4, “Reorganizing tables through reorg” on page 184

6.2.3 Windows

Table 6-3 provides a line-item overview of the steps to be followed, with links to the detailed descriptions in their respective chapters.

Table 6-3 Best practice checklist in a Windows environment

- Plan for performance: Chapter 4, “Planning for performance” on page 87
- Design and configure for performance: Chapter 5, “Designing and configuring for performance” on page 99
- Disable Indexing Service: 7.4.1, “Disable Indexing Service” on page 162
- Disable Computer Browser Service: 7.4.2, “Disable Computer Browser Service” on page 162
- Disable Internet Information Service: 7.4.3, “Disable Internet Information Service” on page 163



- Disable 8.3 file name creation: 7.4.4, “Disable 8.3 file name creation (NtfsDisable8dot3NameCreation)” on page 163
- Check memory: 7.4.5, “Check memory” on page 164
- Tune network: 7.4.6, “Tune Windows network” on page 164
- Set boot.ini parameters: 7.4.8, “Set the /3GB parameter option in boot.ini” on page 166
- Configure effective disk I/O usage: 7.4.7, “Configure disk I/O for effective usage” on page 166
- Spread DB components over multiple disks: 8.2.1, “Spread the database components over multiple disks” on page 173
- Create separate instances during installation: 8.2.3, “Create separate instances during installation” on page 176
- Change default database location: “Change the database default location” on page 173
- Change default database log location: “Change log file location” on page 174
- WebSphere Application Server: configure Java heap size: 9.6.1, “Initial and maximum heap size for the JVM” on page 273
- Tune TSM buffer pool size: 10.2.1, “Self-tune buffer pool size (BUFPOOLSIZE)” on page 277
- Initial index configuration: use runstats/rebind: “Database indexes” on page 118
- Regular maintenance: use snapshots: 8.3.2, “Monitoring system performance using DB2 snapshot” on page 181
- Regular maintenance: use runstats/rebind: 8.3.3, “Keeping database statistics and execution plans up to date through runstats/rebind” on page 183
- Regular maintenance: use reorgchk/reorg: 8.3.4, “Reorganizing tables through reorg” on page 184



6.2.4 Maintenance

After a Content Manager system has been designed, installed, and configured, it is important to perform maintenance tasks regularly. By maintaining a Content Manager system properly and in a timely manner, you get the best possible performance from it over time, and you can potentially avoid problems that manifest themselves as system-endangering situations. The checklist shown in Table 6-4 provides a line-item overview of the maintenance steps to be followed, with links to the detailed descriptions in their respective chapters.

Table 6-4 Best practice for performance maintenance checklist

- Optimize server databases using runstats/rebind: “runstats and rebind database tables” on page 398
- Optimize server databases using reorgchk: “reorgchk” on page 400
- Monitor LBOSDATA directory size: 14.3, “Monitoring LBOSDATA directory size” on page 403
- Manage staging directory space: 14.4, “Managing staging directory space” on page 406
- Remove entries from the events table: 14.5, “Removing entries from the events table” on page 409
- Remove log files: 14.6, “Removing log files” on page 410
- Manage Resource Manager utilities and services: 14.7, “Managing Resource Manager utilities and services” on page 411



Chapter 9. Tuning WebSphere for Content Manager

This chapter covers tuning WebSphere for Content Manager. It includes hardware selection and configuration, operating system tuning, Web server tuning, WebSphere Application Server tuning, and Java virtual machine tuning.



9.1 Introduction

Tuning WebSphere for Content Manager is about processing as many requests as possible by utilizing resources to their fullest potential. Many components in WebSphere Application Server have a direct performance impact on the Resource Manager. This chapter covers the tuning areas, with our recommendations, that can affect Resource Manager performance. As a shortcut into WebSphere tuning, Table 9-1 maps common symptoms to the related tuning areas.

Table 9-1 WebSphere performance tuning shortcut (symptom -> related tuning area)

- Throughput and response time are undesirable -> Processor speed
- AIX: memory allocation error -> AIX file descriptors (ulimit)
- Windows NT or 2000: netstat shows too many sockets in TIME_WAIT -> Windows NT or 2000 TcpTimedWaitDelay
- Throughput is undesirable and the application server priority has not been adjusted -> Adjust the operating system priority of the WebSphere Application Server process
- Under load, client requests do not arrive at the Web server because they time out or are rejected -> For IBM HTTP Server on Windows NT, see ListenBackLog
- Tivoli Performance Viewer Percent Maxed metric indicates that the Web container thread pool is too small -> Thread pool
- netstat shows too many TIME_WAIT state sockets for port 9080 -> MaxKeepAliveConnections, MaxKeepAliveRequests
- Too much disk input/output occurs due to paging -> Heap size settings

Much of the material provided here is extracted from IBM WebSphere Application Server, Version 5.1: Monitoring and Troubleshooting, available at:

http://www.ibm.com/software/webservers/appserv/was/library

For more in-depth tuning information, you can also check the WebSphere Application Server Information Center:

http://publib.boulder.ibm.com/infocenter/wasinfo/v5r1/index.jsp



9.2 Hardware tuning This section discusses considerations for selecting and configuring the hardware on which the application servers will run.

9.2.1 Processor speed In the absence of other bottlenecks, increasing the processor speed often helps throughput and response times. A processor with a larger L2 or L3 cache can yield higher throughput even if the processor speed is the same as a CPU with a smaller L2 or L3 cache.

9.2.2 Disk speed

Disk speed and configuration can have a dramatic effect on the performance of application servers that run applications that are heavily dependent on database support, use extensive messaging, or process workflow. Disk I/O subsystems that are optimized for performance, such as a RAID array, are essential components for optimum application server performance in these environments.

9.2.3 System memory

Increasing memory to prevent the system from paging memory to disk is likely to improve performance. Consider adding memory when the system is paging (and processor utilization is low because of the paging). Allow at least 256 MB of memory for each processor. We recommend a minimum of 512 MB.

9.2.4 Networks

Run network cards and network switches at full duplex; running at half duplex decreases performance. Verify that the network speed can accommodate the required throughput. Make sure that 100 Mbps is in use on 10/100 Ethernet networks.

Chapter 9. Tuning WebSphere for Content Manager

257


9.3 Operating system tuning This section discusses parameters you can tune on the operating system side to potentially enhance system performance. We describe each parameter in detail, with its default value and our recommendation for what value you should set.

9.3.1 TCP timed wait delay (TcpTimedWaitDelay)

This is a Windows NT/2000 parameter. It determines the time that must elapse before TCP can release a closed connection and reuse its resources. This interval between closure and release is known as the TIME_WAIT state or twice the maximum segment lifetime (2MSL) state. During this time, reopening the connection to the client and server costs less than establishing a new connection. Reducing the value of this entry enables TCP to release closed connections faster, providing more resources for new connections. Adjust this parameter if the running application requires rapid release and creation of new connections, and there is low throughput because many connections sit in TIME_WAIT.

Default values 0xF0 (240 seconds = 4 minutes)

How to view or set
1. Using the regedit command, access HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\TCPIP\Parameters and create a new REG_DWORD named TcpTimedWaitDelay.
2. Set the value to decimal 30 (hex 0x0000001E), or another value.
3. Restart the system.
By using the netstat command, you can verify that fewer connections are in TIME_WAIT.

Our recommendation Use the minimum value of 0x1E (30 seconds) for Content Manager OLTP applications. Adjust this parameter if the application requires rapid release and creation of new connections, and there is a low throughput due to many connections sitting in TIME_WAIT.



9.3.2 Maximum user ports (MaxUserPort) This is a Windows NT/2000 parameter. It determines the highest port number TCP can assign when an application requests an available user port from the system.

Default values 1024 - 5000 are available

How to view or set
1. Using the regedit command, access HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\TCPIP\Parameters and create a new REG_DWORD named MaxUserPort.
2. Set its value, and restart the system.

Our recommendation
The server might run out of ports if too many clients try to connect and this value is not changed. Use a value from decimal 32768 up to 65534 (the maximum). This increases the number of ports available for connections to the Content Manager system.

Important: The MaxUserPort and TcpTimedWaitDelay parameters should be tuned together when tuning WebSphere Application Server on a Windows NT or Windows 2000 operating system.
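As a sketch, both registry values can be applied together with a registry file such as the following. The decimal values match the recommendations in this section (30 seconds and 65534); apply only after testing, and restart the system afterward.

```reg
Windows Registry Editor Version 5.00

; TcpTimedWaitDelay = decimal 30 (0x1E) seconds
; MaxUserPort       = decimal 65534 (0xFFFE)
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\TCPIP\Parameters]
"TcpTimedWaitDelay"=dword:0000001e
"MaxUserPort"=dword:0000fffe
```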

9.3.3 Number of open files permitted (ulimit) This AIX parameter specifies the permitted number of open files. The default setting is typically sufficient for most applications. If the value set for this parameter is too low, a memory-allocation error is displayed.

Default value 2000

How to view or set
To set the number of open files permitted, use the following command:

ulimit -n <parameter value>

In this syntax, <parameter value> stands for the number of open files permitted. For SMP machines, to remove the open-file limit, use the following command:

ulimit -n unlimited



To display the current values for all limitations on system resources, use the following command: ulimit -a

Our recommendation
The default setting is typically sufficient for most applications. If the value set for this parameter is too low, a memory-allocation error is displayed. This error can occur, for example, when a cluster of four clones is run on a 24-way S80; in that case, the ulimit value should be changed to unlimited. See 7.3.1, “Configure appropriate ulimit” on page 157 for additional discussion about default values for ulimit.
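The checks above can be sketched as a short shell session. The 8192 value is purely illustrative, and on AIX the persistent per-user limits live in /etc/security/limits, which this sketch does not modify.

```shell
# Show all current resource limits for this shell session
ulimit -a

# Show just the open-file limit (the value discussed in this section)
ulimit -n

# Try to raise the soft open-file limit for this session only;
# 8192 is an illustrative value, and the command can fail without
# sufficient privilege, so errors are ignored here.
ulimit -n 8192 2>/dev/null || true
```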

9.3.4 Duration of an active connection (TCP_KEEPIDLE) The keepAlive packet ensures that a connection stays in an active/ESTABLISHED state.

Default value
14400 half-seconds (2 hours)

How to view or set
Use the no command to determine the current value or to set the value. The change takes effect immediately, but it lasts only until the next restart of the machine. To change the value permanently, add the no command to the /etc/rc.net file. For example:

no -o tcp_keepidle=600

Our recommendation
600 half-seconds (5 minutes)

9.4 Web server tuning

Tuning IBM HTTP Server increases throughput while balancing the system resources. WebSphere Application Server provides plug-ins for several Web server brands and versions. Each Web server and operating system combination has specific tuning parameters that affect application performance. IBM HTTP Server settings are defined in the configuration file httpd.conf, which is normally located under the /usr/IBMHttpServer/conf directory on UNIX, and on



Windows under <installed drive>:\IBM\IBMHttpServer\conf directory. The definition of each parameter is described in the httpd.conf file itself. In this section, we discuss the performance tuning settings that are associated with the Web servers.

9.4.1 Access logs

Access logs collect all incoming HTTP requests. Logging degrades performance because of the I/O operation overhead, and the logs grow significantly in a short time.

Default value Logging of every incoming HTTP request is enabled.

How to view or set
1. Open the IBM HTTP Server httpd.conf file, located in the directory IBM_HTTP_Server_root_directory/conf.
2. Search for a line with the text CustomLog.
3. Comment out this line by placing # in front of the line.
4. Save and close the httpd.conf file.
5. Stop and restart the IBM HTTP Server.

Our recommendation Disable the access logs.

9.4.2 Web server configuration refresh interval (RefreshInterval) WebSphere Application Server administration tracks a variety of configuration information about WebSphere Application Server resources. Some of this information, such as URIs pointing to WebSphere Application Server resources, must be understood by the Web server. This configuration data is pushed to the Web server through the WebSphere Application Server plug-in at intervals specified by this parameter. Periodic updates enable new servlet definitions to be added without having to restart any of the WebSphere Application Server servers. However, the dynamic regeneration of this configuration information is costly in terms of performance.

Default values 60 seconds.



How to view or set
This parameter, RefreshInterval=xxxx (where xxxx is the number of seconds), is specified in the plug-in configuration file, plugin-cfg.xml, in the WebSphere configuration directory.

Our recommendation
Use the default. Content Manager is not directly affected by this parameter.
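For orientation, the setting appears as an attribute of the plug-in configuration's root Config element. This is a minimal sketch; the surrounding content is elided, and the exact file layout depends on your WebSphere release.

```xml
<!-- Plug-in configuration file: the plug-in rereads its configuration
     every RefreshInterval seconds (default 60). -->
<Config RefreshInterval="60">
   <!-- virtual hosts, server clusters, and URI groups omitted -->
</Config>
```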

9.4.3 HTTP server maximum clients (MaxClients)

The MaxClients directive controls the maximum number of simultaneous connections or users that the Web server can service at any one time. If, at peak usage, your Web server needs to support 200 active users at once, you should set MaxClients to 220 (200 plus an extra 10% for load growth). Setting MaxClients too low can cause some users to believe that the Web server is not responding. You should have sufficient RAM in your Web server machines to support each connected client. For IBM HTTP Server 1.3 on UNIX, allocate about 1.5 MB times MaxClients of RAM for use by the IBM HTTP Server. For IBM HTTP Server 1.3 on Windows, allocate about 300 KB times MaxClients of RAM. Some third-party modules can significantly increase the amount of RAM used per connected client.

Default values 150

How to view or set
1. Edit the IBM HTTP server file httpd.conf, which is located in the directory IBM_HTTP_Server_root_directory/conf.
2. Change the value of the MaxClients parameter.
3. Save the changes and restart the IBM HTTP server.

Our recommendation
Initially use the default and monitor your system performance. In general:

- Use a lower MaxClients value for a simple, CPU-intensive application.
- Use a higher MaxClients value for a more complex, database-intensive application with longer wait times.
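The sizing rule above can be sketched in httpd.conf, using the worked example from the text (200 peak users plus a 10% growth margin):

```apache
# Peak of 200 concurrent users + 10% growth margin = 220.
# On UNIX with IHS 1.3 this implies roughly 220 x 1.5 MB = 330 MB of RAM.
MaxClients 220
```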

9.4.4 Minimum spare server processes (MinSpareServers) The MinSpareServers directive sets the desired minimum number of idle child server processes. An idle process is one that is not handling a request. If the



number of idle child server processes falls below the value specified for MinSpareServers, then the parent process creates new child processes. Setting this directive to some value m ensures that you always have at least n + m httpd processes running when you have n active client requests. Attention: This directive has no effect on Microsoft Windows.

Default values 5

How to view or set
1. Edit the IBM HTTP server file httpd.conf, located in the directory IBM_HTTP_Server_root_directory/conf.
2. Change the value of the MinSpareServers parameter.
3. Save the changes and restart the IBM HTTP server.

Our recommendation
Use the default. Monitor the CPU usage for creating and destroying HTTP requests. You should have to tune this parameter only on very busy sites. Setting this parameter to a large number is almost always a bad idea.

9.4.5 Maximum spare server processes (MaxSpareServers) The MaxSpareServers directive sets the desired maximum number of idle child server processes. An idle process is one that is not handling a request. If the number of idle processes exceeds the value specified for MaxSpareServers, then the parent process will kill the excess processes. This is the maximum number of spare servers, not the maximum total number of client requests that can be handled at one time. If you wish to limit that number, see the MaxClients directive. Attention: This directive has no effect when used with the Apache Web server on a Microsoft Windows platform.

Default values 10



How to view or set
1. Edit the IBM HTTP server file httpd.conf, located in the directory IBM_HTTP_Server_root_directory/conf.
2. Change the value of the MaxSpareServers parameter.
3. Save the changes and restart the IBM HTTP server.

Our recommendation
Use the default. Monitor the CPU usage for creating and destroying HTTP requests. You should have to tune this parameter only on very busy sites. Setting this parameter to a large number is almost always a bad idea. For optimum performance, specify the same value for the MaxSpareServers and the StartServers (see below) parameters. Specifying similar values reduces the CPU usage for creating and destroying httpd processes. It also preallocates and maintains the specified number of processes so that few processes are created and destroyed as the load approaches the specified number of processes (based on MinSpareServers).

9.4.6 Startup server processes (StartServers) The StartServers directive sets the number of child server processes created on startup. The number of processes is dynamically controlled depending on the load, so there is usually little reason to adjust this parameter. Attention: When running under Microsoft Windows, this directive has no effect. There is always one child that handles all requests. Within the child, requests are handled by separate threads. The ThreadsPerChild directive controls the maximum number of child threads handling requests, which will have an effect similar to the setting of StartServers on UNIX.

Default values 5

How to view or set
1. Edit the IBM HTTP server file httpd.conf, located in the directory IBM_HTTP_Server_root_directory/conf.
2. Change the value of the StartServers parameter.
3. Save the changes and restart the IBM HTTP server.


Performance Tuning for Content Manager


Our recommendation For optimum performance, specify the same value for the MaxSpareServers (see above) and the StartServers parameters. Specifying similar values reduces the CPU usage for creating and destroying httpd processes. It also preallocates and maintains the specified number of processes so that few processes are created and destroyed as the load approaches the specified number of processes (based on MinSpareServers).

9.4.7 Maximum requests per child (MaxRequestsPerChild)
Apache always creates one child process to handle requests. If it dies, another child process is created automatically. Within the child process, multiple threads handle incoming requests. The MaxRequestsPerChild directive sets the limit on the number of requests that an individual child server process will handle. After MaxRequestsPerChild requests, the child process will die. If MaxRequestsPerChild is zero, the process will never expire. Setting MaxRequestsPerChild to a non-zero limit has two beneficial effects:
 It limits the amount of memory that a process can consume through (accidental) memory leakage.
 It helps to reduce the number of processes when the server load reduces, by giving the processes a finite lifetime.
However, on Win32®, it is recommended that this be set to zero. If it is set to a non-zero value, when the request count is reached, the child process exits and is re-spawned, at which time it rereads the configuration files. This can lead to unexpected behavior if you have modified a configuration file but are not expecting the changes to be applied yet. See also ThreadsPerChild.

Default values
 0 for Windows platforms
 0000 for AIX platforms

How to view or set
1. Edit the IBM HTTP server file httpd.conf, located in the directory IBM_HTTP_Server_root_directory/conf.
2. Change the value of the MaxRequestsPerChild parameter.
3. Save the changes and restart the IBM HTTP server.



Our recommendation
In general, do not force a server to exit after it has served some number of requests. This means that if there are no known memory leaks with Apache and Apache’s libraries, set the value to zero for both Windows and AIX platforms. If you want the servers to exit after they have run for a long time (to help the system clean up after the process), set this to a large number, such as 10,000. In this example, each child server will exit after serving 10,000 requests, and another server will take its place.

9.4.8 Threads per child (ThreadsPerChild)
Apache always creates one child process to handle requests. If it dies, another child process is created automatically. Within the child process, multiple threads handle incoming requests. The ThreadsPerChild directive tells the server how many concurrent threads it should use at a time. It is the maximum number of connections the server can handle at once.

Attention: This directive has no effect on UNIX systems. UNIX users should look at StartServers and MaxRequestsPerChild.

Default values 50

How to view or set
To change:
1. Edit the IBM HTTP Server file httpd.conf, which is located in the IBM_HTTP_Server_root_directory/conf directory.
2. Change the value of the ThreadsPerChild parameter.
3. Save the changes and restart the IBM HTTP server.

There are two ways to find out how many threads are being used under the current load:
 Use the Windows NT or 2000 Performance Monitor:
a. Click Start → Programs → Administrative Tools → Performance Monitor.
b. Click Edit → Add to chart and set:
   Object: IBM HTTP Server
   Instance: Apache
   Counter: Waiting for connection
To calculate the number of busy threads, subtract the number waiting (Windows NT or 2000 Performance Monitor) from the total available (ThreadsPerChild).

 Use IBM HTTP Server server-status (this choice works on all platforms, not just Windows NT or 2000):
a. Edit the IBM HTTP Server file httpd.conf and remove the comment character “#” from the following lines:
   #LoadModule status_module modules/ApacheModuleStatus.dll
   #<Location /server-status>
   #SetHandler server-status
   #</Location>
b. Save the changes and restart the IBM HTTP server.
c. In a Web browser, go to http://yourhost/server-status and click Reload to update the status. Or, if the browser supports refresh, go to http://yourhost/server-status?refresh=5 to refresh every 5 seconds. The page shows, for example, five requests currently being processed and 45 idle servers.
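The busy-thread arithmetic above can be sketched as follows (a trivial illustration; the function name is ours, not part of any IBM tool):

```python
def busy_threads(threads_per_child: int, waiting: int) -> int:
    """Threads currently serving requests = total available - idle.

    threads_per_child: the ThreadsPerChild value from httpd.conf
    waiting: the "Waiting for connection" count from Performance Monitor
    """
    return threads_per_child - waiting

# With the default ThreadsPerChild of 50 and 45 idle threads, 5 are busy.
print(busy_threads(50, 45))
```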

Our recommendation Set the value to the expected number of concurrent users. Monitor your system during normal workload to see whether this value should be changed. Be sure that this value is high enough for your site if you get a lot of hits. Allow just enough traffic through to the application server to prevent bottlenecks.

9.4.9 Pending connections queue length (ListenBackLog)
When several clients request connections to the IBM HTTP server, and all threads are being used, a queue exists to hold additional client requests. The ListenBackLog directive sets the maximum length for the queue of pending connections. However, if you are using the default Fast Response Cache Accelerator (FRCA) feature, the ListenBackLog directive is not used because FRCA has its own internal queue. The backlog is often limited to a smaller number by the operating system, and this limit varies from operating system to operating system. Also note that many operating systems do not use exactly what is specified as the backlog, but instead use a number based on (but normally larger than) what is set.

Default values  511 with FRCA disabled  1024 with FRCA enabled



How to view or set
For non-FRCA:
1. Edit the IBM HTTP Server file httpd.conf, which is located in the IBM_HTTP_Server_root_directory/conf directory.
2. Change the ListenBackLog directive.
3. Save the changes and restart the IBM HTTP server.

Our recommendation Generally, use the default.

9.5 WebSphere Application Server tuning
Each WebSphere Application Server process has several parameters that influence application performance. Each application server in WebSphere Application Server consists of an EJB container and a Web container. In this section, we discuss the performance tuning settings that are associated with the WebSphere Application Server.

9.5.1 Application server process priority
Improving the operating system process priority of the application server can help performance. On AIX systems, a smaller setting has a higher priority.

Default values 20

How to view or set
Using the WebSphere administrative console:
1. In the administrative console, select the application server you are tuning.
2. Under Additional Properties, click Process Definition.
3. Under Additional Properties, click Process Execution.
4. Specify the value in the Process Priority field and click Apply.
5. Save the changes and restart the application server.

To see the current process priority on AIX, type ps -efl.

Our recommendation
Lower the default value to zero. Monitor the system to see whether giving more priority to the application server affects the other components of the Content Manager system, such as DB2. Make small incremental changes. If the trade-off is not acceptable, go back to the previous value. Improving the priority of an application server has produced noticeable improvements on AIX, but you always want to be able to balance the performance of all components of Content Manager and avoid bottlenecks.

9.5.2 Web container maximum thread size
To route servlet requests from the Web server to the Web containers, the product establishes a transport queue between the Web server plug-in and each Web container. Use this configuration to specify the maximum number of threads that can be pooled to handle requests sent to the Web container. Requests are sent to the Web container through any of the HTTP transports.

Default values 50

How to view or set
1. In the administrative console, select the application server you are tuning.
2. Under Additional Properties, click Web Container.
3. Under Additional Properties, click Thread Pool.
4. Enter the desired maximum number of threads in the Maximum Size field.
5. Select the Growable Thread Pool (see below) check box.
6. Click Apply or OK.
7. Click Save.

Our recommendation Tivoli Performance Viewer displays a metric called Percent Maxed that determines the amount of time that the configured threads are in use. If this value is consistently in the double digits, the Web container could be a bottleneck and the number of threads should be increased.

9.5.3 Thread allocation beyond maximum
When this option is selected (enabled), more Web container threads can be allocated than specified in the maximum thread size field. When it is unselected, the thread pool cannot grow beyond the value specified for the maximum thread size.

Default values Unchecked



How to view or set
1. In the administrative console, select the application server you are tuning.
2. Under Additional Properties, click Web Container Service.
3. Under Additional Properties, click Thread Pool.
4. Select the Growable thread pool check box.
5. Click Apply to ensure that the changes are saved.
6. Stop and restart the application server.

Our recommendation This option is intended to handle brief loads beyond the configured maximum thread size. However, use caution when selecting this option because too many threads can cause the system to overload.

9.5.4 Max keep-alive connections (MaxKeepAliveConnections)
This parameter describes the maximum number of concurrent connections to the Web container that are allowed to be kept alive (that is, to be used for multiple requests). The Web server plug-in keeps connections open to the application server as long as it can. However, if the value of this property is too small, performance is negatively affected because the plug-in has to open a new connection for each request instead of sending multiple requests through one connection. The application server might not accept a new connection under a heavy load if there are too many sockets in TIME_WAIT state. If all client requests are going through the Web server plug-in and there are many TIME_WAIT state sockets for port 9080, the application server is closing connections prematurely, which decreases performance. The application server will close the connection from the plug-in, or from any client, for any of the following reasons:
 The client request was an HTTP 1.0 request, whereas the Web server plug-in always sends HTTP 1.1 requests.
 The maximum number of concurrent keep-alives was reached. A keep-alive must be obtained only once for the life of a connection (that is, after the first request is completed, but before the second request can be read).
 The maximum number of requests for a connection was reached, preventing denial-of-service attacks in which a client tries to hold on to a keep-alive connection forever.
 A timeout occurred while waiting to read the next request or to read the remainder of the current request.

Default values No default setting



How to view or set
1. Open the administrative console and select the application server you are tuning.
2. Under Additional Properties, click Web Container.
3. Under Additional Properties, click HTTP Transports.
4. Click the port_number link in the Host column.
5. Under Additional Properties, click Custom Properties.
6. Click New.
7. In the Name field, enter MaxKeepAliveConnections.
8. Enter the value in the Value field.
9. Click Apply or OK.
10. Click Save.

Our recommendation The value should be at least 90% of the maximum number of threads in the Web container thread pool. If it is 100% of the maximum number of threads in the Web container thread pool, all of the threads could be consumed by keep-alive connections, leaving no threads available to process new connections.
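The 90% rule of thumb above can be sketched in a few lines (a helper of our own, using integer arithmetic; not part of WebSphere):

```python
def suggested_max_keep_alive(web_container_max_threads: int) -> int:
    # At least 90% of the Web container thread pool, but strictly less than
    # 100% so some threads remain free to process new connections.
    return web_container_max_threads * 9 // 10

# For the default Web container thread pool maximum of 50:
print(suggested_max_keep_alive(50))
```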

9.5.5 Maximum requests for a connection (MaxKeepAliveRequests)
This parameter sets the maximum number of requests that are allowed on a single keep-alive (persistent) connection. It can help prevent denial-of-service attacks in which a client tries to hold on to a keep-alive connection. The Web server plug-in keeps connections open to the application server as long as it can, providing optimum performance. Set the value to zero to allow an unlimited number of requests.

Default values No default setting

How to view or set
1. Open the administrative console and select the application server you are tuning.
2. Under Additional Properties, click Web Container.
3. Under Additional Properties, click HTTP Transports.
4. Click the port_number link in the Host column.
5. Under Additional Properties, click Custom Properties.
6. Click New.
7. In the Name field, enter MaxKeepAliveRequests.
8. Enter the value in the Value field.
9. Click Apply or OK.
10. Click Save.



Our recommendation Set this value high to get the maximum performance. A good starting value is 100. Monitor your system using Tivoli Performance Viewer to see whether this value is adequate for your environment.

9.5.6 URL invocation cache
The invocation cache holds information for mapping request URLs to servlet resources. A cache of the requested size is created for each thread. The number of threads is determined by the Web container maximum thread size setting. The default size of the invocation cache is 50. If more than 50 unique URLs are actively being used (each JavaServer Page is a unique URL), you should increase the size of the invocation cache.

Note: A larger cache uses more of the Java heap, so you might need to increase the maximum Java heap size. For example, if each cache entry requires 2 KB, the maximum thread size is set to 25, and the URL invocation cache size is 100, then 5 MB of Java heap is required.
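The heap arithmetic in the note can be checked with a one-line calculation (illustrative only; the function name is ours):

```python
def invocation_cache_heap_kb(entry_kb: int, max_threads: int, cache_size: int) -> int:
    # One cache of cache_size entries exists per Web container thread,
    # so the heap cost is entry size x threads x entries per cache.
    return entry_kb * max_threads * cache_size

# 2 KB entries, 25 threads, cache size 100 -> 5000 KB, roughly 5 MB.
print(invocation_cache_heap_kb(2, 25, 100))
```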

Default values 50

How to view or set
1. In the administrative console, click Servers → Application servers and select the application server you are tuning.
2. Under Additional Properties, click Process Definition.
3. Under Additional Properties, click Java Virtual Machine.
4. Under Additional Properties, click Custom Properties.
5. Specify InvocationCacheSize in the Name field and the size of the cache in the Value field. You can specify any number higher than zero for the cache size. Setting the value to zero disables the invocation cache.
6. Click Apply and then Save to save your changes.
7. Stop and restart the application server.

Our recommendation Start with a value of 50. Monitor your system and increase this value as needed. For example, if more than 50 unique URLs are actively being used (each JSP is a unique URL), consider increasing this value.



9.5.7 Security
Security is a global setting. When security is enabled, performance can decrease by 10 to 20 percent.

Default values Disabled

How to view or set
1. In the administrative console, click Security → Global Security.
2. Select the Enabled and Enforce Java 2 Security settings.
3. Click Apply to ensure that the changes are saved.
4. Stop and restart the application server.

Our recommendation
When security is not needed, keep the default disabled setting.

9.6 Java virtual machine tuning
The Java virtual machine (JVM) offers several tuning parameters that affect WebSphere Application Server and application performance. In this section, we discuss the performance tuning settings associated with the JVM.

9.6.1 Initial and maximum heap size for the JVM
These parameters set the initial and maximum heap sizes for the JVM. In general, increasing the size of the Java heap improves throughput until the heap no longer resides in physical memory. After the heap begins swapping to disk, Java performance suffers drastically. Therefore, the maximum heap size must be low enough to contain the heap within physical memory. The physical memory usage must be shared between the JVM and the other applications, such as the database. For assurance, use a smaller heap, for example 64 MB, on machines with less memory.

Default values
 Initial heap size: 0 MB
 Maximum heap size: 256 MB

How to view or set
1. Click Servers → Application Servers.
2. Click the application server you want to tune.
3. Under Additional Properties, click Process Definition.
4. Under Additional Properties, click Java Virtual Machine.
5. Enter values in the General Properties fields Initial Heap Size and Maximum Heap Size.
6. Save your changes.
7. Restart the application server.

Our recommendation
Try a maximum heap of 128 MB on systems with less than 1 GB of physical memory, 256 MB for systems with 2 GB of memory, and 512 MB for larger systems. The starting point depends on the application. For production systems where the working set size of the Java applications is not well understood, an initial setting of one-fourth the maximum setting is a good starting value. The JVM will then try to adapt the size of the heap to the working set of the Java application. Monitor your system with the Tivoli Performance Viewer; when you have verified the normal size of your JVM heap, change the initial heap size to this value.
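The sizing rules of thumb above can be summarized in a small helper (our own sketch; the thresholds are taken from the recommendation, and the assumption that systems between 1 GB and 2 GB also get 256 MB is ours, since the text quotes figures only for less than 1 GB, 2 GB, and larger):

```python
def suggested_jvm_heap_mb(physical_memory_gb: float) -> tuple:
    """Return (initial_mb, maximum_mb) per the rules of thumb above."""
    if physical_memory_gb < 1:
        maximum = 128
    elif physical_memory_gb <= 2:
        # Assumption: 1-2 GB systems use the 2 GB figure from the text.
        maximum = 256
    else:
        maximum = 512
    # When the working set is unknown, start the heap at one-fourth the maximum.
    return maximum // 4, maximum

print(suggested_jvm_heap_mb(2))
```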



Chapter 10. Tuning TSM for Content Manager

This chapter covers tuning Tivoli Storage Manager (TSM) for Content Manager. It includes tuning server options that affect Content Manager performance and setting the system to automatically tune some of the server options for performance.

© Copyright IBM Corp. 2003, 2006. All rights reserved.



10.1 Introduction
Various tuning parameters affect Tivoli Storage Manager performance. The number of parameters that might be set in TSM is quite small; it is the tuning of the client, server, and network options that can become very complex. Some of the factors that affect TSM performance are:
 Average client file size
 Percentage of files and bytes changed since last incremental backup
 Client/server hardware (CPUs, RAM, disk drives, network adapters)
 Client/server operating system
 Client/server activity (non-TSM workload)
 Server storage pool devices (disk, tape, optical)
 Network hardware, configuration, utilization, and reliability
 Communication protocol and protocol tuning
 Final output repository type (disk, tape, optical)

In the following sections, we cover the very basic aspects of TSM configuration and tuning for performance. This is only a starting point for tuning TSM for Content Manager. For a complete list of parameters and the tuning features in TSM, refer to the following publications:
 Tivoli Storage Manager for AIX - Administrator’s Guide, GC32-0768
 Tivoli Storage Manager Problem Determination Guide; search at:
http://publib.boulder.ibm.com/infocenter/tivihelp/v1r1/index.jsp

10.2 Automatic tuning of server options
Tuning a Tivoli Storage Manager environment to achieve the best performance requires experience, knowledge, and skill from experts, and is a time-consuming process. The current version of TSM can automate the tuning of some of the existing parameters and achieve more intelligent storage management. TSM calculates performance figures, usually in throughput per unit of time, and changes a preset parameter that can be adjusted during the next set of iterations to test its effect on performance. The buffer pool is one of many parameters that affect performance. The buffer pool size (BUFPOOLSIZE) can be self-tuned by TSM. For optimal performance and simplified tuning, we recommend the following action.



10.2.1 Self-tune buffer pool size (BUFPOOLSIZE)
The database buffer pool provides cache storage so that database pages can remain in memory for longer periods of time. When the database pages remain in cache, the server can make continuous updates to the pages without requiring I/O operations to external storage. Although a database buffer pool can improve server performance, it also requires more memory. For UNIX platforms, the buffer pool size may not exceed 10% of physical memory. For Windows NT/2000 platforms, the buffer pool size may not exceed 10% of physical memory and memory load may not exceed 80%.

An optimal setting for the database buffer pool is one in which the cache hit percentage is greater than or equal to 99%. Performance decreases drastically if the buffer pool cache hit ratio falls below 99%. To check the cache hit percentage, use the query db format=detail command. Increasing the BUFPOOLSIZE parameter can improve the performance of many TSM server functions such as multi-client backup, storage pool migration, storage pool backup, expiration processing, and move data. If the cache hit percentage is lower than 99%, increase the size of the BUFPOOLSIZE parameter in the DSMSERV.OPT file. You need to shut down and restart the server for changes to take effect.

For most servers, if tuning the parameter manually, we recommend starting with a value of 32,768 KB, which equals 8,192 database pages. If you have enough memory, increase in 1 MB increments. A cache hit percentage greater than 99% is an indication that the proper BUFPOOLSIZE has been reached; however, continuing to raise BUFPOOLSIZE beyond that level can be very helpful. While increasing BUFPOOLSIZE, take care not to cause paging in the virtual memory system. Monitor system memory usage to check for any increased paging after the BUFPOOLSIZE change. (Use the reset bufp command to reset the cache hit statistics.) If you are paging, buffer pool cache hit statistics will be misleading because the database pages will be coming from the paging file.
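The figures quoted above (32,768 KB equals 8,192 database pages) imply a 4 KB database page. A small sketch of the conversion (our own helper, not a TSM utility):

```python
TSM_DB_PAGE_KB = 4  # implied by 32,768 KB == 8,192 pages in the text

def bufpoolsize_pages(bufpoolsize_kb: int) -> int:
    """Convert a BUFPOOLSIZE value in KB to the number of database pages it caches."""
    return bufpoolsize_kb // TSM_DB_PAGE_KB

# The recommended manual starting point:
print(bufpoolsize_pages(32768))
```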

Automatic tuning through SELFTUNEBUFPOOLSIZE
For optimal performance and simplified tuning, we recommend having the server automatically tune the BUFPOOLSIZE by setting the SELFTUNEBUFPOOLSIZE option to Yes. The option is disabled by default. Before expiration processing, the server resets the database buffer pool and examines the database buffer pool cache hit ratio. The server accounts for the amount of available storage and adjusts the buffer pool size as needed. When SELFTUNEBUFPOOLSIZE is set to Yes, the buffer pool cache hit ratio statistics are reset at the start of expiration. After expiration, the buffer pool size is increased if the cache hit ratio is less than 98%. The increase in the buffer pool size is in small increments and might change after each expiration. The change in the buffer pool size is not reflected in the server options file. You can check the current size at any time using the query status command. Use the setopt bufpoolsize command to change the buffer pool size.

Note: Although the value of BUFPOOLSIZE might be changed, the settings in the server options file are not changed. Issuing a query option command displays only what is set in the server options file.

This is a quick introduction to performance tuning TSM for Content Manager. If you want to explore more TSM tuning options, refer to the publications listed at the beginning of this chapter.



13

Chapter 13.

Case study This chapter extends the troubleshooting discussion from the previous chapter by using Document Manager performance tuning as a case study to further show how you can begin to understand what your system is doing and start troubleshooting Content Manager performance problems.




13.1 Document Manager overview
IBM DB2 Document Manager addresses the origination, approval, distribution, and revision of unstructured information. In the simplest terms, Document Manager enables electronic files to be reused or repurposed by multiple individuals. Document Manager applies a discipline called library science to catalog corporate information, which could be in the form of hard copy (physical documents), electronic files generated by desktop applications, or just metadata or XML information created internally or externally. In any form, Document Manager is an application that enables users to index and file corporate information in a uniform and centralized way. Document Manager is a client for Content Manager hosted by Microsoft Internet Explorer.

A Document Manager system is built on a three-tiered, scalable computing model. You can deploy the system’s components as needed and distribute them across multiple servers or clients. Document Manager is built on a three-tier model:
 Document Manager clients tier
 Document Manager server and services tier
 Content repository tier

In this case study, the content repository for the Document Manager system is Content Manager.

13.2 A search scenario
This case study illustrates a way to identify potential problems in your custom application and shows how you can use cache settings to improve the performance of a Document Manager - Content Manager system. We start with a plain system that uses Document Manager as a client and Content Manager as the content repository. The Document Manager cache is not configured initially. When users use the system, they complain that the login time and the system search response time seem slow.

To identify the problem, we turn on the Content Manager Library Server performance trace. See 13.3, “Enabling Library Server trace” on page 389 for details. We use echo to insert text into icmserver.log, so that we know at every step what actions we, as the user, performed on the system, and what Content Manager operations were triggered by that action.



Our exercise consists of the following steps:
1. Open a command prompt window and go to the location where icmserver.log resides.
2. Launch the Document Manager client by typing:
   echo Launching the Document Manager client.>icmserver.log
3. Log in to Document Manager as John Doe by typing:
   echo The Document Manager client is now launched.>>icmserver.log
   echo Logging into Document Manager as John Doe.>>icmserver.log
4. Perform a simple search. Type:
   echo Log in completed.>>icmserver.log
   echo Perform a search.>>icmserver.log
5. Perform the same search:
   echo Search is done.>>icmserver.log
   echo Search again.>>icmserver.log
6. Type:
   echo 2nd search is done.>>icmserver.log

We extract the part of the icmserver.log file produced when we perform a search in Example 13-1. For simplification, we reduced some of the white space and use an ellipsis (...) to represent the repetitive numbers in the log entries.

Example 13-1 icmserver.log: log of searches in a Document Manager desktop with Content Manager

Perform a search.
ICMPLSLG ICMLOGON          00672 04/10 18:17:34.609 GMT ; ... ? JOHN.DOE
Library server code built < Mar  1 2006 17:20:53 >
ICMPLSLG ICMLOGON          01907 04/10 18:17:34.609 GMT ; ... ? JOHN.DOE  9 msec
ICMPLSGA ICMGETATTRTYPE    00412 04/10 18:17:34.609 GMT ; ...   JOHN.DOE  1 msec
ICMPLSGT ICMGETITEMTYPE    00919 04/10 18:17:34.640 GMT ; ...   JOHN.DOE 21 msec
ICMPLSLX ICMLISTXDOOBJECT  00335 04/10 18:17:34.640 GMT ; ...   JOHN.DOE  1 msec
ICMPLSGA ICMGETATTRTYPE    00412 04/10 18:17:34.640 GMT ; ...   JOHN.DOE  1 msec
ICMPLSQU ICMSEARCH         00482 04/10 18:17:34.796 GMT ; ...   JOHN.DOE 65 msec
ICMPLSGI ICMGETITEM        04346 04/10 18:17:34.796 GMT ; ...   JOHN.DOE  2 msec
ICMPLSLK ICMLISTNLSKEYWRD  00860 04/10 18:17:34.796 GMT ; ...   JOHN.DOE  0 msec
ICMPLSLF ICMLOGOFF         00200 04/10 18:17:34.906 GMT ; ... ? JOHN.DOE  0 msec
Search is done.
Search again.
ICMPLSLG ICMLOGON          00672 04/10 18:19:19.390 GMT ; ... ? JOHN.DOE
Library server code built < Mar  1 2006 17:20:53 >
ICMPLSLG ICMLOGON          01907 04/10 18:19:19.390 GMT ; ... ? JOHN.DOE 10 msec
ICMPLSGA ICMGETATTRTYPE    00412 04/10 18:19:19.406 GMT ; ...   JOHN.DOE  2 msec
ICMPLSGT ICMGETITEMTYPE    00919 04/10 18:19:19.421 GMT ; ...   JOHN.DOE 22 msec
ICMPLSLX ICMLISTXDOOBJECT  00335 04/10 18:19:19.437 GMT ; ...   JOHN.DOE  0 msec
ICMPLSGA ICMGETATTRTYPE    00412 04/10 18:19:19.437 GMT ; ...   JOHN.DOE  0 msec
ICMPLSQU ICMSEARCH         00482 04/10 18:19:19.578 GMT ; ...   JOHN.DOE 65 msec
ICMPLSGI ICMGETITEM        04346 04/10 18:19:19.578 GMT ; ...   JOHN.DOE  2 msec
ICMPLSLK ICMLISTNLSKEYWRD  00860 04/10 18:19:19.593 GMT ; ...   JOHN.DOE  0 msec
ICMPLSLF ICMLOGOFF         00200 04/10 18:19:19.640 GMT ; ... ? JOHN.DOE  0 msec
2nd search is done.

We observed several interesting points in the log:
 When we perform a search, the ICMLOGON operation is called. It does not seem right for ICMLOGON to be called, because we already logged in to the system earlier. If you see this in your application, look into why the operation is called. Is it a redundancy somewhere?
 At the end of the search, the ICMLOGOFF operation is called. Again, it does not seem right for ICMLOGOFF to be called, because we are performing a search and not logging in and out of the system. If you see this in your application, look into why this operation is called. Is it necessary? If not, remove this extra work from your application.
 Although we perform the exact same search twice, the system makes the same number of calls in both instances. For an efficient system, you might want to work on the application so that you save some calls to the system the second time a user performs the same search (or other action).
 There are also some ICMGETATTRTYPE and ICMGETITEMTYPE calls. Depending on your application, you might look into whether these calls are necessary.

Different applications make different calls to the system. The log can help you understand what calls are really being made to the Content Manager Library Server. From it, you can identify any area that might cause a performance problem. In this case study, we decided to turn on the mid-tier cache and perhaps save the extra login and logout calls. After we turn on the mid-tier cache and make some configuration changes, we perform the same search operation twice. For more information about the Document Manager cache, refer to 13.4, “Configuring Document Manager cache” on page 391. Example 13-2 on page 389 shows the portion of the icmserver.log output when we again performed a simple search twice in a row.



Example 13-2 icmserver.log: log of searches in a Document Manager desktop with Content Manager 2

Perform a search.
ICMPLSQU ICMSEARCH         00482 04/10 18:39:24.328 GMT ; ... JOHN.DOE 74 msec
ICMPLSGI ICMGETITEM        04346 04/10 18:39:24.343 GMT ; ... JOHN.DOE  3 msec
ICMPLSLK ICMLISTNLSKEYWRD  00860 04/10 18:39:24.343 GMT ; ... JOHN.DOE  0 msec
Search is done.
Search again.
ICMPLSQU ICMSEARCH         00482 04/10 18:40:41.453 GMT ; ... JOHN.DOE 73 msec
ICMPLSGI ICMGETITEM        04346 04/10 18:40:41.468 GMT ; ... JOHN.DOE  3 msec
2nd search is done.

This time we see a significant improvement, including:
 The number of calls to Content Manager is significantly lower than in the previous scenario.
 There are no more ICMLOGON and ICMLOGOFF calls.
 There are no more ICMGETATTRTYPE or ICMGETITEMTYPE calls.
 The second search calls one less operation than the initial search call.

In summary, using echo (or some similar technique) and monitoring icmserver.log helps you identify problems and troubleshoot your application. You can use this technique to test your custom application before it goes to production and make sure it is only making calls as needed. When you reduce the number of unnecessary calls, you reduce the extra response time and thus improve overall system performance.

Note: For guidelines on setting up a Document Manager system, refer to Appendix A, “Performance tuning guidelines for a Document Manager system” on page 435 for more details.

13.3 Enabling Library Server trace

As a background reference for the case study, this section shows how to enable the Library Server trace. The Content Manager Library Server performance trace can be controlled from the system administration client. Starting or stopping this trace takes effect immediately. You can also start and stop the Library Server performance trace from a script. Example 13-3 on page 390 shows the script to turn on the performance trace.

Chapter 13. Case study

389


Example 13-3 Turn Library Server performance trace on

db2 connect to icmnlsdb user icmadmin using password
db2 update icmstsyscontrol set tracelevel=8
db2 update icmstsyscontrol set keeptraceopen=1
db2 connect reset

The trace data goes into the trace file specified in the System Administration client. This method has the advantage of specifying keeptraceopen=1, which buffers the log and makes its impact low enough for use on a production system. Example 13-4 shows the script to turn off the trace.

Example 13-4 Turn Library Server trace off

db2 connect to icmnlsdb user icmadmin using password
db2 update icmstsyscontrol set tracelevel=0
db2 update icmstsyscontrol set keeptraceopen=0
db2 connect reset

Example 13-5 shows sample output of the Library Server performance trace log.

Example 13-5 Sample output of the Library Server performance trace log

ICMPLSP3 ICMPREPCRTDOCPART  00735 11/06 18:21:28.744 ?06182118536221  4 msec
ICMPLSLM ICMLISTRESOURCEMGR 00788 11/06 18:21:28.754 ?06182118536221  2 msec
ICMPLSC2 ICMCREATEDOCPART   01252 11/06 18:21:29.516 ?06182118536221 11 msec
ICMPLSQU ICMSEARCH          00228 11/06 18:22:01.071 ?06182118536221  6 msec
ICMPLSGI ICMGETITEM         03832 11/06 18:22:01.091 ?06182118536221 13 msec
ICMPLSCO ICMCHECKOUTITEM    00407 11/06 18:22:04.015 ?06182118536221  2 msec

The second field of each entry specifies the name of the Library Server operation. On the same entry, you can also find the timestamp of when the operation is performed and its elapsed time in milliseconds. This information helps you monitor the overall workload of the Content Manager system. It can clearly show whether a performance issue is due to time spent in the Library Server or in some other component of the Content Manager system. This often makes capturing a Library Server trace the first and most important step in troubleshooting a performance problem. It is especially useful for tuning custom applications to make the most efficient use of Content Manager and minimize the application's impact on the Content Manager system.

Note: Remember to enable the trace only when you troubleshoot a problem, because turning on the performance trace affects system performance. Turn off the trace as soon as you no longer need the information.
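Because every entry ends with its elapsed time, a trace like Example 13-5 can be summarized with a short script. The regular expression below reflects our own assumption about the entry layout (short name, operation name, then fields ending in "<n> msec"); it is not part of any Content Manager tool.

```python
import re
from collections import defaultdict

# Assumed entry layout: short name, operation name, ..., "<elapsed> msec"
ENTRY = re.compile(r"^(\S+)\s+(\S+)\s+.*?(\d+)\s+msec$")

def total_elapsed_by_operation(lines):
    """Sum the elapsed milliseconds per Library Server operation."""
    totals = defaultdict(int)
    for line in lines:
        m = ENTRY.match(line.strip())
        if m:
            totals[m.group(2)] += int(m.group(3))
    return dict(totals)

trace = [  # sample entries patterned after Example 13-5
    "ICMPLSQU ICMSEARCH 00228 11/06 18:22:01.071 ?06182118536221 6 msec",
    "ICMPLSGI ICMGETITEM 03832 11/06 18:22:01.091 ?06182118536221 13 msec",
    "ICMPLSGI ICMGETITEM 03833 11/06 18:22:02.091 ?06182118536221 7 msec",
]
totals = total_elapsed_by_operation(trace)
```

Sorting the resulting totals identifies the operations where the Library Server spends most of its time, which is usually the right place to start tuning.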



13.4 Configuring Document Manager cache

As background reference information for the case study, this section describes configuring the Document Manager cache. We assume you have some basic knowledge of Document Manager components, its servers, and services.

13.4.1 Cache Manager overview

Cache objects are small files that contain configuration information about items such as menus, action dialogs, and folders. The cache files are stored on the Document Manager server. As changes are made to the Document Manager configuration through Designer, the Cache Manager updates those configurations in the library and in the cache files stored on the Document Manager server. The use of cache files enhances Document Manager Desktop performance by reducing the resource utilization that occurs in client/server applications.

Document Manager compares the time and date stamp of the cache objects stored on the user's desktop to the cache objects stored on the Document Manager server. If the Document Manager server has a newer cache object, it is delivered to the Desktop. If the cache object has the same time and date, it is not updated. The Document Manager Desktop reads the configuration information from these local cache objects instead of continually accessing the library for the information. This improves performance and reduces network traffic between the Document Manager Desktop and the repository.
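The timestamp comparison described above amounts to a newer-wins file refresh. The following is a minimal sketch of that idea; the function name and paths are ours for illustration, not part of Document Manager.

```python
import os
import shutil

def refresh_cache_object(server_path, local_path):
    """Copy a server-side cache file locally only when it is newer."""
    if (not os.path.exists(local_path)
            or os.path.getmtime(server_path) > os.path.getmtime(local_path)):
        shutil.copy2(server_path, local_path)   # copy2 preserves the timestamp
        return True    # local copy was (re)created from the server
    return False       # same or newer timestamp locally; nothing to pull
```

Because the copy preserves the server file's timestamp, a second comparison finds the two stamps equal and skips the transfer, which is exactly what keeps network traffic low.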

13.4.2 Cache Manager considerations and strategies

The Cache Manager default is a login frequency of every 30 seconds. Every time this service runs, it consumes every available server resource to process the cache update. During the initial configuration setup, this short frequency is useful because of the ongoing changes made in the Document Manager Designer. However, when the system reaches a point where changes are less frequent, you no longer need such a short frequency. Optimal server-desktop performance can be achieved by turning off the cache server completely and setting the client cache to update after a large interval of time, such as every four weeks.

When planning the cache frequency, determine how often you want the client desktop to check for more recent cache information. By default, the client desktop checks the cache at the start of every session. When Internet Explorer is launched and the Document Manager Desktop is started, Document Manager compares the local cache date and time stamps against those on the Document Manager server. If the server-side cache files are more recent, they are pulled to



the local machine. Updates are performed automatically, so you never have to plan and implement rollouts of new configurations. The administrator has to weigh the benefit of having nearly real-time data (by forcing the cache to update more frequently on both the Document Manager Desktop and server) against the improved performance of having the cache update less frequently.

13.4.3 Configuring Document Manager cache

Both the server cache and client cache updates can be configured through the Designer or through the Configure command in the Document Manager Desktop after the Desktop is opened.

Note: Cache must be run once before following the steps below. The first time the cache service runs, the cache object is created.

How to configure a cache object

Follow these steps to configure a new cache object:
1. Click Start → Programs → IBM DB2 Document Manager → Designer.
2. Expand the database, the Global - [Administrator] component, Desktop, and Cache Configuration.
3. From the Designer toolbar, click New. The New Cache Configuration dialog appears.
4. To configure server schedule information for the new cache object, perform the following actions:
   a. Click the Server Schedule tab. A list of all cache objects written to the DB2 Document Manager server cache appears (Figure 13-1 on page 393).



Figure 13-1 Cache configuration: Server Schedule tab

   b. Select one or more of these objects and alter the frequency of server-side updates, or force an immediate update, by taking any of these actions:
      • Select the OFF check box to turn off the update setting. This cache object cannot be updated while the OFF check box is selected.
      • Increase or decrease the Update Frequency number.
      • To the right of the Update Frequency number, select an increment for the server update frequency: seconds, minutes, hours, days, weeks, or months.
      • Click Update Now to force the update of selected objects, even though their update frequency has not expired.
   c. Click Refresh to show the new time and date stamps after the cache service runs.
   d. Click Set to post the changes made to the Server Update Settings. There is no need to click Set when using the Update Now button.
   e. Click Apply to post the changes to the library environment.



5. To configure client schedule information for the new cache object:
   a. Click the Client Schedule tab. A list of all cache objects written to the Document Manager client cache appears.
   b. Select one or more of these objects and alter the frequency of client-side updates by taking any of the following actions:
      • Select the Do not store cache data locally. Remove immediately after use check box to remove temporarily stored client cache from the user's computer when the user exits the DB2 Document Manager Desktop.
        Attention: This option is not recommended.
      • In the Update Frequency list, choose one of the following options:
        - Each Access: Each time the user accesses a component of the DB2 Document Manager Desktop, such as a menu, the Desktop asks the DB2 Document Manager server whether the menu cache object has been updated. If so, the DB2 Document Manager Desktop updates the cache for the menu.
        - Each Session: Each time the user starts a DB2 Document Manager Desktop session and logs into a library environment, the client cache is checked for updates.
        - Interval: The Desktop checks the DB2 Document Manager server for new cache files at the specified interval. Choose the frequency interval and increment to the right of Update Frequency.
        - Calendar: Choose a calendar interval of Daily, Weekly, or Monthly.
   c. Click Set to post the changes to the cache object list.
   d. Click Apply to post the changes to the library environment.
6. The Purge Schedule tab (Figure 13-2 on page 395) enables you to configure the purging of client cache objects. Purging removes the data files representing the client cache in the DB2 Document Manager client subdirectory. To configure the purge schedule for the new cache object:
   a. Click the Purge Schedule tab.
   b. Set the client cache to purge by taking any of the following actions:
      • Click Purge Now to force the removal of client cache data files even though the frequency does not call for the action.
      • Choose one of the following options from the Frequency list:
        - Never: The client cache is never purged automatically.



        - Each Session: Each time the user starts a DB2 Document Manager Desktop session and logs into a repository environment, the client cache is deleted and new files are updated from the cache server.
        - Interval: Choose the frequency interval and increment (seconds, minutes, hours, days, weeks, or months).
        - Calendar: Choose a calendar interval of Daily, Weekly, or Monthly.

Figure 13-2 Cache configuration: Purge Schedule tab

7. When you are finished, click Apply and Done.





Appendix A. Performance tuning guidelines for a Document Manager system

A Document Manager system can use Content Manager as its content repository. In this appendix, we provide performance tuning guidelines for a Document Manager system.

© Copyright IBM Corp. 2003, 2006. All rights reserved.



Introduction

The performance tuning guidelines covered here include setting up and configuring the following Document Manager components and services:
• Desktop templates
• Views
• Standard search templates
• Item type configuration
• Class configuration
• Power searches
• Service manager
• Action and dialogs

We also provide general system considerations.

Note: The content provided here is extracted from a white paper written by the Document Manager and Content Manager development team. We include it in this book for your convenience.

Desktop templates

The Desktop template configuration contains a setting that can contribute to a significant increase or decrease in the performance and responsiveness of the Document Manager Desktop. This setting is located in the Data Lists tab of the Desktop template configuration. The Data List Retrieval Mode identifies how the Document Manager Desktop updates three objects: Folders, Groups, and Users. In the Desktop template configuration, these objects can be set for retrieval from cache or directly from the library.

When the Data List Retrieval Mode is set to access cache data, the Document Manager Desktop always pulls folder, user, or group information from the cache. When the Data List Retrieval Mode is set to access library data, the Document Manager Desktop performs a live lookup to the Library Server to pull folder, user, or group information.

Identify user information needs to determine how these settings should be applied in your environment. These settings can also differ between users and groups. Users and groups that regularly add and change their folders might require the folder data access setting to come from the library. In turn, this setting might mean that it takes a few extra seconds to log into the desktop or refresh a desktop view. Other groups and users whose folder structures are static might want to keep the folder access setting on cache data. This enables faster response times for desktop actions such as login and desktop refresh.



Views

Document Manager view configurations can have an impact on Document Manager Desktop performance and responsiveness. A number of elements combine to affect how quickly the results of a search or the contents of a folder are presented to an end user.

The view configuration contains settings to limit the number of items returned at a time as the result of a search or of opening a folder. Also included is a setting to identify the number of items to return to the user at a time: the return increment. The larger either of these numbers is set, the longer it takes for the desktop to display the results of a search or the contents of a folder. By using small, manageable numbers, desktop responsiveness improves. When using smaller return increments and maximum return settings, the user base needs to perform well-defined searches to bring back the documents they are looking for.

Another view configuration setting that affects desktop performance is the number of attributes included in a single view. Within the view configuration, the number of attributes assigned to a view template can be adjusted to return fewer attributes to the desktop.

The desktop view configuration also allows configuration of advanced settings to display the organizational structure of a document, including compound documents and life cycle information. Enabling these settings within the view configuration increases the response time for displaying the results of a search or the contents of a document to the user.
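The interaction between the maximum return setting and the return increment can be pictured as a simple paging scheme. This is an illustrative sketch, not Document Manager code; the function name and the limits shown are made-up defaults.

```python
def paged_results(items, max_return=200, increment=25):
    """Cap the result set, then hand it to the desktop in small batches."""
    capped = items[:max_return]          # maximum items a search may return
    for start in range(0, len(capped), increment):
        yield capped[start:start + increment]   # one return increment

# 60 matches, capped at 50, delivered 20 at a time
pages = list(paged_results(list(range(60)), max_return=50, increment=20))
```

Smaller increments mean the first batch reaches the user sooner, which is why the text recommends small, manageable numbers for both settings.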

Standard search templates

Settings in search templates can contribute to desktop performance enhancements. Providing users with key attributes that help them quickly locate documents and data enables them to perform well-defined searches. When assigning attributes to be displayed in a search template, include those attributes that best define your data model and are optimized for performance. Additionally, using default operators such as equals provides the best possible response time when executing a search. Operators such as contains increase the desktop response time for returning search results.

You can define several different search templates that are aligned with different document classes or business processes so that users can quickly identify the attribute information that helps them locate their data. Additionally, any



attributes included on a search dialog should be indexed for optimum database efficiency.

Document Manager systems working with Content Manager contain additional configuration settings within the search template definition that determine how the search is performed in the database. On the Library tab of the Search Configuration dialog, you can specify whether a search is performed across all item types in the library or within a specific item type. This setting affects how quickly the search results are returned to the user, because searching across multiple item types requires a database search that runs across multiple tables. The same search run within a single item type returns information much faster because the search runs in a single database table.

Another configuration setting within the search template configuration enables you to identify whether the search is case insensitive. Performing case-insensitive searches increases the amount of time it takes to return results to the user, because this type of search requires the database to run a full table scan instead of utilizing database indexes.
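The index-versus-table-scan effect is easy to demonstrate outside Content Manager. The SQLite sketch below is purely illustrative (Content Manager itself runs on DB2, and the table and index names are invented): an equality predicate on an indexed column is satisfied by an index search, while a case-insensitive predicate wrapped in lower() forces a full scan.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (docclass TEXT, state TEXT)")
con.execute("CREATE INDEX idx_docclass ON items (docclass)")

def plan(sql):
    """Return SQLite's query plan as one string."""
    rows = con.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(row[-1] for row in rows)

indexed = plan("SELECT * FROM items WHERE docclass = 'Invoice'")
scanned = plan("SELECT * FROM items WHERE lower(docclass) = 'invoice'")
# 'indexed' mentions idx_docclass; 'scanned' reports a table scan
```

The same reasoning applies to the contains operator: a predicate the optimizer cannot match to an index degenerates into a scan of every row.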

Item type configuration

Content Manager item type settings can affect the performance of the desktop and the overall system. Options such as attribute indexes and text searching can optimize the system's search performance.

The item type definition includes the ability to enable full text indexing for the item type. Although enabling text searching does not have a significant impact on desktop performance, leaving it disabled can inhibit users wanting to perform cross-item-type searches. If you allow users to perform content searches across all item types, all item types must be set to text searchable.

Content Manager item types also enable the creation of attribute indexes. Indexing can help optimize database searching, which results in better desktop responsiveness and overall system optimization. We recommend creating single-attribute indexes for all attributes in an item type that are used as search criteria. These include all attributes included on user dialogs, attributes used to identify uniqueness, and any attributes used by the Document Manager services to locate documents, such as document class, document state, and revision number.



Class configuration

The document class definition provides the ability to identify an attribute or combination of attributes that the system uses to check document uniqueness. Uniqueness can be checked at several different points in the life cycle: the document add, document check-in, and document revise commands can all check for uniqueness. When uniqueness is checked, the system reads the information set in the Properties tab of the class configuration. Using three or more attributes as uniqueness criteria can increase the amount of time it takes to check uniqueness. For best performance, use the fewest attributes possible to meet your business and configuration needs; not checking for uniqueness at all provides the best performance possible. If you do have the system check for uniqueness, consider creating attribute indexes for all attributes used within the Properties tab.

You can achieve additional server performance improvements through the settings in the Triggers tab of the document class configuration dialog. These triggers are all processed by the Document Manager Lifecycle Manager whenever a document action is performed. Although this setting does not affect desktop performance, reducing the number of triggers that have to be processed by Lifecycle Manager helps to alleviate unnecessary resource use on the server.
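A uniqueness check is ultimately a database lookup, which is why indexing the uniqueness attributes matters. The SQLite fragment below illustrates the principle only (Content Manager runs on DB2, and the table and index names are invented): a unique index on the fewest attributes that identify a document lets the database reject duplicates cheaply.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE docs (docnumber TEXT, title TEXT)")
# One well-chosen attribute is enough to guarantee uniqueness here
con.execute("CREATE UNIQUE INDEX uq_docnumber ON docs (docnumber)")
con.execute("INSERT INTO docs VALUES ('DOC-001', 'Spec')")
try:
    con.execute("INSERT INTO docs VALUES ('DOC-001', 'Spec copy')")
    duplicate_rejected = False
except sqlite3.IntegrityError:   # the index refuses the duplicate add
    duplicate_rejected = True
```

With three or more attributes in the key, every add or check-in must compare a wider composite value, which is the overhead the guideline above warns about.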

Power searches

The power search command is an advanced-level utility that can be made available to users to build complex search requests. The power search was not created to provide highly optimized searches, but it gives users the ability to choose any combination of attributes, operators, and library settings when they run a search. We recommend making the Power Search command available only to power users or more advanced users who know how to build a complex search.

Used correctly, refined complex searches can be created to return specific documents and data to the user quickly. User training and education can help provide the knowledge and skills for building complex searches. Using operators such as equals and choosing attributes that are indexed helps to improve search response time.

You can gain additional performance through the library settings in the Power Search dialog. As with the standard search, a power search can be performed across



multiple item types or within a single item type. The Library tab in the Search Configuration dialog enables the user to identify whether the search is performed across all item types in the library or against a specific item type. As with the standard search, searching across multiple item types requires the search to be performed against multiple tables in the database. In contrast, when the search is performed within a single item type, the database search runs against a single table, resulting in better responsiveness for the end user.

Using the case-insensitive search setting within the power search also increases the time needed to return search results. Performing case-insensitive searches requires the database query to complete a full table scan instead of using the database indexes. Use of database indexes improves response performance for the search being performed.

Service manager

Document Manager services can be set to minimize resource utilization on the Document Manager server. Each Document Manager service has a login frequency setting. This setting determines how often the service wakes up and checks for work. The installation default for this setting is to check every 30 seconds. The interval at which each service processes should be evaluated as part of the overall optimization of the Document Manager server. If a service such as Lifecycle or Rendition is not utilized often, or if there is no need for immediate action, the frequency interval can be set higher. Blackout periods should be set for times when system backups are performed.

Managing the services through processing intervals and blackout times contributes to system stability and responsiveness. Setting processing intervals to longer durations frees up resources on the Document Manager server for other operations and system needs. Configuring blackouts to be in effect during database outage times (such as when the database is being backed up) eliminates the effort the server would otherwise spend trying to connect to a system that is unavailable. Blackouts also contribute to system stability because the Document Manager server does not encounter errors in trying to connect to an unavailable database.

All Document Manager services should be managed (stopped and started) by the Document Manager Service Manager. The Service Manager is designed to allow the Document Manager services to complete any in-process work when a service is stopped.
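The blackout logic amounts to a time-window check before each wake-up. The sketch below is our own illustration (Document Manager's scheduler is configured through its administration tools, not code); the window bounds are made-up examples of a nightly backup period.

```python
from datetime import time

def in_blackout(now, start=time(1, 0), end=time(3, 0)):
    """True when 'now' falls inside the blackout window."""
    if start <= end:
        return start <= now < end
    return now >= start or now < end    # window that crosses midnight

def should_check_for_work(now, blackout=(time(1, 0), time(3, 0))):
    """A service wake-up skips its work during the blackout."""
    return not in_blackout(now, *blackout)
```

Guarding each polling cycle this way is what spares the server the connection attempts (and resulting errors) against a database that is down for backup.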



Additional optimization can be achieved through advanced configuration of the Document Manager services. The Lifecycle, Rendition, Print, and Notification services can be configured to manage a specific subset of classes or configurations. The system can be configured to handle specific loads of documents by class to help balance workload and prioritize processing of designated document classes.

Action and dialogs

Options that are available in actions and dialogs can increase processing time for actions such as add and check-in. Scanning for compound documents, incrementing the revision number, checking for uniqueness, and applying unique document numbers can all add a significant amount of time to these processes. Consider these options and their impact before including them in your system design. If these options are not required as part of your system, disable them to eliminate additional processing time for document operations.

General system considerations

Additional system configurations should be evaluated to keep your Document Manager server running at optimum performance.

Sufficient drive space is required for the Document Manager server. If the Document Manager server's hard drive space is reaching its capacity, Windows virtual memory is swapped more frequently. This can contribute to sluggish response by the Document Manager server. Keeping system trace files at verbose logging settings can contribute to the consumption of free disk space. Although system trace files are useful in identifying and solving system problems, provisions should be made to prevent these files from growing too large. A high degree of drive fragmentation also impedes performance.

Virus scanning programs can impact performance. Every document moved either to or from the library repository is temporarily written to the Document Manager server. Every one of those documents may be scanned by virus software, depending upon your scan interval settings.

Performance might be improved by the physical locations of the Document Manager server, Library Server, and Resource Manager. Close proximity of these systems can significantly improve performance. Co-locating these systems geographically, on the same Windows domain, or on the same subnet improves system performance.



A DNS server can affect performance if it becomes unavailable or experiences problems. If DNS is used to reference machine names, the Document Manager server has problems locating systems when the DNS server is unavailable. ODBC drivers on the Document Manager server should be kept up to date.

Document Manager services are often configured to run as either local or domain Windows accounts. Working with your IT group to ensure their understanding of these accounts and their security requirements helps prevent problems such as disabled or expired accounts.



Chapter 1. Overview and concepts

In this chapter, we provide an overview of the IBM DB2 Content Manager OnDemand (OnDemand) system. We describe how OnDemand manages reports and index data. We also provide information to help you better understand how OnDemand works. In this chapter, we discuss the following topics:
• Overview of OnDemand
• Concepts
• Supported environments
• Document access and distribution possibilities

© Copyright IBM Corp. 2003, 2006. All rights reserved.



1.1 Overview of OnDemand

OnDemand supports any organization that can benefit from hard copy or microfiche replacement and instant access to information. An OnDemand system can support small office environments and large enterprise installations with hundreds of system users. It can dramatically improve productivity and customer service in many businesses by providing fast access to information stored in the system.

OnDemand processes the print output of application programs, extracts index fields from the data, stores the index information in a relational database, and stores one or more copies of the data in the system. With OnDemand, you can archive newly created and frequently accessed reports on high-speed disk storage volumes and automatically migrate them to other types of storage volumes as they age.

OnDemand fully integrates the capabilities of Advanced Function Presentation™ (AFP™), including management of resources, indexes, and annotations. It supports full-fidelity reprinting and faxing of documents to devices attached to a PC, OnDemand server, or other type of server in the network. OnDemand provides administrators with tools to manage OnDemand servers, authorize users to access OnDemand servers and data stored in the system, and back up the database and data storage. OnDemand gives you the ability to view documents; print, send, and fax copies of documents; and attach electronic notes to documents.

OnDemand offers several advantages, allowing you to:
• Easily locate data without specifying the exact report
• Retrieve the pages of the report that you require without processing the entire report
• View selected data from within a report

OnDemand provides you with an information management tool that can increase your effectiveness when working with customers.
It supports the following capabilities:
• Integrates data created by application programs into an online, electronic information archive and retrieval system
• Provides controlled and reliable access to all reports of an organization
• Retrieves the data that you need when you need it
• Provides a standard, intuitive client with features such as thumbnails, bookmarks, notes, and shortcuts

2

Content Manager OnDemand Guide


These features mean that OnDemand can help you quickly retrieve the specific page of a report that you require to provide fast customer service.

An OnDemand system consists of:
• Client programs and server programs that communicate over a network running the TCP/IP communications protocol
• A database manager that maintains index data and server control information
• Storage managers that maintain documents on various types of storage devices

Figure 1-1 presents an overview of the OnDemand system.

Figure 1-1 OnDemand system overview

OnDemand client programs run on PCs and terminals attached to the network and communicate with the OnDemand servers. The OnDemand library server manages a database of information about the users of the system and the reports stored on the system. The OnDemand object server manages the reports on disk, optical, and tape storage devices. An OnDemand system has one library server and one or more object servers. An object server can operate on the same server or node as the library server or on a different server or node.

OnDemand client programs operate on personal computers running Windows. Using the client program, users can search for and retrieve reports stored on the system. Specifically, users can construct queries and search for



reports; retrieve documents from OnDemand; view, print, and fax copies or pages of documents; and attach electronic notes to pages of a document.

OnDemand servers manage control information and index data, store and retrieve documents and resource group files, and process query requests from OnDemand client programs. The documents can reside on disk, optical, and tape storage volumes. New reports can be loaded into OnDemand every day. This way, OnDemand can retrieve the latest information generated by application programs.

OnDemand client programs and servers communicate over a computer network supported by TCP/IP. When a user submits a query, the client program sends a search request to the OnDemand library server. The library server returns to the user a list of the documents that match the query. When the user selects a document for viewing, the client program retrieves a copy of the document from the object server where the document is stored, opens a viewing window, and displays the document.

1.2 Concepts

In this section, we examine some of the basic concepts of OnDemand:
• Report and document
• Application, application group, and folder

1.2.1 Report and document

OnDemand documents represent indexed groups of pages. Typically, an OnDemand document is a logical section of a larger report, such as an individual customer statement within a report of thousands of statements. An OnDemand document can also represent a portion of a larger report. For reports that do not contain logical groups of pages, such as transaction logs, OnDemand can divide the report into groups of pages. The groups of pages are individually indexed and can be retrieved much more efficiently than the entire report.

Documents are usually identified by a date, with one or more other fields, such as customer name, customer number, or transaction number. A date is optional but highly recommended for optimizing document search performance. If there is no date field, the load ID looks similar to this example, 5179-1-0-1FAA-0-0, where the 0-0 on the end means that no date was used.

Figure 1-2 illustrates OnDemand applications and documents. An administrator can define the BILLS application for a report that contains logical items, such as customer statements. The BILLS application uses the document indexing

4

Content Manager OnDemand Guide


method to divide the report into documents. Each statement in the report becomes a document in OnDemand. Users can retrieve a statement by specifying the date and any combination of name and number. An administrator can define the TRANS application for a report that contains lines of sorted transaction data. The TRANS application uses the report indexing method to divide the report into documents. Each group of 100 pages in the report becomes a document in OnDemand. Each group is indexed using the first and last sorted transaction values that occur in the group. Users can retrieve the group of pages that contains a specific transaction number by specifying the date and the transaction number. OnDemand retrieves the group that contains the value entered by the user.
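The 100-page grouping that the TRANS application uses can be sketched in a few lines of Python (illustrative only; OnDemand is not scripted this way, and the group size and invoice numbers are invented):

```python
# Illustrative model of OnDemand's report indexing method (not product code).
# A sorted transaction report is split into fixed-size groups; each group is
# indexed only by its first and last transaction values, as described above.

def index_report(transactions, group_size=100):
    """Return a (first, last) index pair for each group of transactions."""
    groups = []
    for start in range(0, len(transactions), group_size):
        group = transactions[start:start + group_size]
        groups.append((group[0], group[-1]))
    return groups

def find_group(groups, value):
    """Return the group number whose first..last range contains the value."""
    for n, (first, last) in enumerate(groups):
        if first <= value <= last:
            return n
    return None

# 1,000 sorted invoice numbers split into groups of 100 -> 10 index entries
invoices = list(range(10000, 11000))
groups = index_report(invoices)
hit = find_group(groups, 10250)   # falls in the third group (number 2)
```

Only ten small index entries cover a thousand transactions, which is the cost saving the report indexing method is designed for.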

Figure 1-2 Applications and documents

1.2.2 Application, application group, and folder

The terms application, application group, and folder represent how OnDemand stores, manages, retrieves, views, and prints reports and index data. When defining a new report or type of data to OnDemand, an administrator must create an application and assign the application to an application group.

Note: The administrator must first create an application group if one does not exist.

Chapter 1. Overview and concepts

5


Before users can search for and retrieve documents, an administrator must create or update a folder to use the application group and application.

Application

An application describes the physical characteristics of a report to OnDemand. Typically you define an application for each program that produces output to be stored in OnDemand. The application includes information about the format of the data, the orientation of data on the page, the paper size, the record length, and the code page of the data. The application also includes parameters that the indexing program uses to locate and extract index data and processing instructions that OnDemand uses to load index data in the database and documents on storage volumes.

Application group

An application group contains the storage management attributes of and index fields for the data that you load into OnDemand. When you load a report into OnDemand, you must identify the application group where OnDemand will load the index data and store the documents. An application group is a collection of one or more OnDemand applications with common indexing and storage management attributes. You typically group several reports in an application group so that users can access the information contained in the reports with a single query. All of the applications in the application group must be indexed on at least one of the same fields, for example, customer name, account number, and date.

Folder

A folder is the user's way to query and retrieve data stored in OnDemand. A folder provides users with a convenient way to find related information stored in OnDemand, regardless of the source of the information or how the data was prepared. A folder allows an administrator to set up a common query screen for several application groups that might use different indexing schemes, so that a user can retrieve the data with a single query. For example, a folder called "Student Information" might contain transcripts, bills, and grades, which represent information stored in different application groups, defined in different applications, and created by different programs.
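To make the folder idea concrete, here is a small Python sketch (not product code; the folder name echoes the example above, but the field names and mapping layout are invented) of one folder query screen fanning out to two application groups with different index schemes:

```python
# Hypothetical model of a folder: one query screen whose fields are mapped
# onto the differently named index columns of several application groups.

folder = {
    "name": "Student Information",
    "application_groups": {
        # folder field -> application-group index column (invented names)
        "Transcripts": {"Student": "stud_name", "Date": "term_date"},
        "Bills":       {"Student": "cust_name", "Date": "bill_date"},
    },
}

def fan_out(folder, query):
    """Translate one folder query into a per-application-group query."""
    plans = {}
    for group, mapping in folder["application_groups"].items():
        plans[group] = {mapping[field]: value for field, value in query.items()}
    return plans

# One search on the folder's "Student" field queries both application groups.
plans = fan_out(folder, {"Student": "J. Doe"})
```

The user fills in one screen; each application group is still queried on its own index column, which is the abstraction the folder provides.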



Figure 1-3 illustrates the concepts described in this section.

Figure 1-3 The concepts of folders, application groups, and applications



Figure 1-4 shows some examples for the described concepts.

Figure 1-4 Examples of folders, application groups, and applications

1.2.3 Indexing methods

OnDemand provides two methods of indexing data:
 Document indexing
 Report indexing

Document indexing

Document indexing is used for reports that contain logical items, such as policies and statements. Each of the items in a report can be individually indexed on values such as account number, customer name, and balance. OnDemand supports up to 32 index values per item. With document indexing, the user is not



necessarily required to know about reports or report cycles to retrieve a document from OnDemand.

Report indexing

Report indexing is used for reports that contain many pages of the same kind of data, such as a transaction log. Each line in the report usually identifies a specific transaction, and it is not cost effective to index each line. OnDemand stores the report as groups of pages and indexes each group. When reports include a sorted transaction value (for example, invoice number), OnDemand can index the data on the transaction value. This is done by extracting the beginning and ending transaction values for each group of pages and storing the values in the database. This type of indexing lets users retrieve a specific transaction value directly.

1.2.4 OnDemand server and its components

The OnDemand server environment includes a library server and one or more object servers that reside on one or more nodes connected to a TCP/IP network.

Library server and object server

An OnDemand library server maintains a central database about the reports stored in OnDemand. The database also contains information about the objects defined to the system, such as users, groups, printers, application groups, applications, folders, and storage sets. The database manager provides the database engine and utilities to administer the database. The library server processes client logons, queries, print requests, and updates to the database. The major functions that run on the library server are the request manager, the database manager, and the server print manager. An OnDemand object server maintains documents on cache storage volumes and, optionally, works with an archive storage manager to maintain documents on archive media, such as optical and tape storage libraries. An object server loads data, retrieves documents, and expires documents. The major functions that run on an object server are the cache storage manager, the OnDemand data loading and maintenance programs, and, optionally, the archive storage manager. The basic OnDemand configuration is a library server and an object server on the same physical system or node. This single library/object server configuration supports the database functions and cache storage on one system. You can add an archive storage manager to the single library/object server configuration to maintain documents on archive media.



You can also configure your OnDemand system with a library server on one node and one or more object servers on different nodes. This configuration is known as a distributed library/object server system. The distributed library or object server configuration supports caching of documents on different servers. You can add an archive storage manager to one or more of the object servers to maintain documents on archive media attached to different servers.

OnDemand server components

An OnDemand server environment contains several components:
 A request manager provides client, network, and operating system services, security, and accounting. The request manager resides on the library server.
 A database manager maintains the index data for the reports that you store on the system. The database manager is a relational database management product, such as DB2. The database manager resides on the library server.
 Database control information is information about the users, groups, application groups, applications, folders, storage sets, and printers that you define on the system. The control information determines who can access the system, the folders that a user can open, and the application group data that a user can query and retrieve. The database resides on the library server.
 A cache storage manager maintains documents in cache storage. Cache storage is for high-speed access to the most frequently used documents.
 An archive storage manager is an optional part of the system. The archive storage manager is for the long-term storage of one or more copies of documents on archive media, such as optical and tape storage libraries.
 A download facility automatically transfers spool files to a server at high speed. We recommend that you use Download for OS/390®, a licensed feature of Print Services Facility™ (PSF) for OS/390. Download provides the automatic, high-speed download of JES spool files from an OS/390 system to OnDemand servers. The download facility is not applicable to the iSeries server.
 Data indexing and conversion programs can create index data, collect required resources, and optionally convert line data reports to AFP data. OnDemand provides several indexing programs:
– The AFP Conversion and Indexing Facility (ACIF) can be used to index S/390® line data, ASCII data, and AFP files, collect resources necessary to view the reports, and convert line data files to AFP data.
– The OS/400® Indexer can be used to index a variety of data types and is the most common OnDemand indexer for OS/400 spooled files.
– The OnDemand PDF Indexer can be used to create index data for Adobe Acrobat Portable Document Format (PDF) files.
– The OnDemand Generic Indexer can be used to create index data for almost any other type of data, such as HTML documents, Lotus® WordPro documents, and TIFF files.
 Data loading programs can be set up to automatically store report data into application groups and update the database. The data loading programs can run on any OnDemand server.
 Archived reports and resources.
 A server print facility allows users to reprint a large volume of documents at high speed. OnDemand uses Infoprint®, which must be purchased separately, to manage the server print devices.
 OnDemand management programs maintain the OnDemand database and documents in cache storage.
 A system logging facility provides administrators with tools to monitor server activity and respond to specific events as they occur. The interface to the system logging facility is through the system log folder and the system log user exit.

The following sections provide additional information for:
 The OnDemand request manager
 The OnDemand database manager
 The OnDemand storage manager
 Download facility
 Data indexing and loading
 OnDemand management programs

Request manager

The request manager processes search requests from OnDemand client programs. When a user enters a query, the client program sends a request over the network to the request manager. The request manager works with the database manager to compile a list of the items that match the query and returns the list to the client program. When the user selects an item for viewing, the request manager sends a retrieval request to the cache storage manager, if the document resides in cache storage, or to the archive storage manager, if the document resides in archive storage. The storage manager retrieves the document and, optionally, the resources associated with the item. The OnDemand client program decompresses and displays the document.

OnDemand management programs include utilities that maintain the database and cache storage, including the ability to automatically migrate data from the database and cache storage to archive storage. These programs use the services of the request manager to manage index data, documents, and resource files.



When a user logs on to the system, OnDemand assigns a unique transaction number to that instance of the client program. All activity associated with that instance of the client program contains the same transaction number. The request manager records messages generated by the various OnDemand programs in the system log, for example, logon, query, and print. The messages contain the transaction number, user ID, time stamp, and other information. Administrators can open the system log folder and view the messages. OnDemand also provides a system log user exit so that you can run a user-defined program to process messages. For example, you can design a user-defined program to send an alert to an administrator when certain messages appear in the system log. The messages in the system log can also be used to generate usage and billing reports.
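A system log user exit of the kind described above can be sketched as a simple filtering hook. This Python model is purely illustrative: the message-number values and record layout are invented, and a real user exit would be an external program registered with the server.

```python
# Hedged sketch of a system log user exit: a user-defined hook that inspects
# each log message and flags selected message numbers for an administrator.
# Fields mirror the description above (transaction number, user ID, message);
# the message numbers on the alert list are made up for illustration.

ALERT_MSGS = {31, 87}          # hypothetical alert-worthy message numbers

def user_exit(message, alerts):
    """Collect an alert line for messages whose number is on the alert list."""
    if message["msg"] in ALERT_MSGS:
        alerts.append("txn %s user %s: msg %s"
                      % (message["txn"], message["user"], message["msg"]))

alerts = []
log = [
    {"txn": 1001, "user": "admin",  "msg": 65},   # ordinary query message
    {"txn": 1002, "user": "jsmith", "msg": 31},   # alert-worthy event
]
for m in log:
    user_exit(m, alerts)
```

Because every message carries the transaction number, the alert line lets the administrator pull all activity for that client session out of the system log folder.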

Database manager

OnDemand uses a database management product, such as DB2, to maintain the index data for the reports that you load into the system. The database manager also maintains the OnDemand system tables that describe the applications, application groups, storage sets, folders, groups, users, and printers that you define to the system. You should periodically collect statistics on the tables in the database to optimize the operation of the OnDemand database.

Storage manager

The OnDemand cache storage manager maintains a copy of documents, usually temporarily, on disk. The cache storage manager uses a list of file systems to determine the devices available to store and maintain documents. You typically define a set of cache storage devices on each object server so that the data loaded on the server can be placed on the fastest devices to provide the most benefit to your users. For Multiplatforms and z/OS, the cache storage manager uses the arsmaint command to migrate documents from cache storage to archive media and to remove documents that have passed their life of data period. For iSeries, the Start Disk Storage Management (STRDSMOND) command starts the Disk Storage Management task that manages data movement between disk and the archive storage manager.

OnDemand also supports an archive storage manager, such as Tivoli Storage Manager for Multiplatforms, object access method (OAM) for z/OS, Virtual Storage Access Method (VSAM) for z/OS, and Archive Storage Manager for iSeries. The archive storage manager maintains one or more copies of documents on archive media, such as optical or tape storage libraries. You decide the types of archive media that your OnDemand system must support, configure the storage devices on the system, and define the storage devices to



the archive storage manager. To store application group data on archive media, you must assign the application group to a storage set that identifies a storage node that is managed by the archive storage manager.

Download facility

The download facility is a licensed feature of PSF for OS/390. It provides the automatic, high-speed download of JES spool files from an OS/390 system to an OnDemand for Multiplatforms server. You can use the download facility to transfer reports created on OS/390 systems to the server, where you can configure OnDemand to automatically index the reports and store the reports and index data on the system. The download facility operates as a JES functional subsystem application (FSA) and can automatically route jobs based on a JES class or destination, reducing the need to modify JCL. The download facility uses TCP/IP protocols to stream data at high speed over a LAN or channel connection from an OS/390 system to the OnDemand server.

Data indexing and loading

The reports that you store in OnDemand must be indexed. OnDemand supports several types of index data and indexing programs. For example, you can use ACIF to extract index data from the reports that you want to store on the system. An administrator defines the index fields and other processing parameters that ACIF uses to locate and extract index information from reports. OnDemand data loading programs read the index data generated by ACIF and load it into the OnDemand database. The data loading programs obtain other processing parameters from the OnDemand database, such as parameters used to segment, compress, and store report data in cache storage and on archive media. If you plan to index reports on an OnDemand server, you can define the parameters with the administrative client. The administrative client includes a Report Wizard that lets you create ACIF indexing parameters by visually marking sample report data. OnDemand also provides indexing programs that can be used to generate index data for Adobe Acrobat PDF files and other types of source data, such as TIFF files. For OS/400, the OS/400 Indexer can index a variety of data types for OS/400 spooled files. Refer to the following publications for details about the indexing programs provided with OnDemand for various platforms:
 IBM Content Manager OnDemand for Multiplatforms - Indexing Reference, SC18-9235
 IBM Content Manager OnDemand for iSeries Common Server - Indexing Reference, SC27-1160
 IBM Content Manager OnDemand for z/OS and OS/390 - Indexing Reference, SC27-1375
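As a rough illustration of the kind of parameters an administrator defines for ACIF, the fragment below sketches the general shape of TRIGGER, FIELD, and INDEX statements. The record positions, lengths, and trigger value are invented; consult the Indexing Reference publications listed above for the actual syntax and options.

```
TRIGGER1=*,1,'1',(TYPE=GROUP)
FIELD1=0,10,20,(TRIGGER=1,BASE=0)
FIELD2=0,40,8
INDEX1='Customer Name',FIELD1,(TYPE=GROUP)
INDEX2='Account Number',FIELD2,(TYPE=GROUP)
```

Read loosely: a trigger locates the start of each document group, fields pick values out of records relative to that trigger, and each index statement binds a field to a named index attribute.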



Figure 1-5 shows an overview of the data indexing and loading process.

Figure 1-5 Data indexing and loading process

The OnDemand data loading program first determines whether the report needs to be indexed. If the report needs indexing, the data loading program calls the appropriate indexing program. The indexing program uses the indexing parameters from the OnDemand application to process the report data. The indexing program can extract and generate index data, divide the report into indexed groups, and collect the resources needed to view and reprint the report. After indexing the report, the data loading program processes the index data, the indexed groups, and the resources using other parameters from the application and application group. The data loading program works with the database manager to update the OnDemand database with index data extracted from the report. Depending on the storage management attributes of the application group, the data loading program might work with the cache storage manager to segment, compress, and copy report data to cache storage and the archive storage manager to copy report data to archive storage.
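The flow just described can be condensed into a short Python sketch. The function and attribute names are illustrative only, not the product's API; they simply trace the decision points in Figure 1-5.

```python
# Condensed model of the loading flow: index if needed, update the database,
# then copy to cache and/or archive per the application group's storage
# management attributes. All names here are invented for illustration.

def load_report(report, app_group, actions):
    if not report["indexed"]:
        actions.append("index")            # call the configured indexing program
    actions.append("update database")      # store the extracted index data
    if app_group["cache"]:
        actions.append("copy to cache")    # segment, compress, store on disk
    if app_group["archive"]:
        actions.append("copy to archive")  # hand off to archive storage manager
    return actions

# A not-yet-indexed report loaded into a group that keeps cache and archive copies
steps = load_report({"indexed": False},
                    {"cache": True, "archive": True}, [])
```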



Management programs

OnDemand provides programs to maintain and optimize the database and maintain documents on cache storage. An administrator usually determines the processing parameters for these programs, including the frequency with which the programs should run. When you create an application group, you specify other parameters that these programs use to maintain the report data stored in the application group. For example, when creating an application group, the administrator specifies how long documents should be maintained on the system and whether index data should be migrated from the database to archive media. The programs use the information to migrate documents from cache storage to archive media, delete documents from cache storage, migrate index data from the database to archive media, and delete index data from the database. These functions are useful because OnDemand can reclaim the database and cache storage space released by expired and migrated data. We recommend that you configure your OnDemand system to automatically start these management programs on a regular schedule, usually once every night or week.

The archive storage manager deletes data from archive media when it reaches its storage expiration date. An administrator defines management information to the archive storage manager to support the OnDemand data it manages. The management information includes the storage libraries and storage volumes that can contain OnDemand data, the number of copies of a report to maintain, and the amount of time to keep data in the archive management system. OnDemand and the archive storage manager delete data independently of each other. Each uses its own criteria to determine when to remove documents. Each also uses its own utilities and schedules to remove documents. However, for final removal of documents from the system, you should specify the same criteria to OnDemand and the archive storage manager.
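The independence of the two expiration paths can be modeled in a few lines of Python. This is a simplification for illustration only; the day counts are invented, and real retention is configured per application group and per archive storage pool.

```python
# Simplified model of the point above: OnDemand and the archive storage
# manager expire data independently, each on its own criterion, so a
# document is fully gone only when both sides have expired it.

OD_LIFE_OF_DATA = 365        # hypothetical OnDemand life-of-data period, days
ASM_RETENTION   = 2555       # hypothetical archive storage manager retention, days

def fully_removed(age_days):
    """True only when both sides have independently expired the document."""
    expired_in_ondemand = age_days > OD_LIFE_OF_DATA
    expired_in_archive  = age_days > ASM_RETENTION
    return expired_in_ondemand and expired_in_archive
```

With mismatched settings like these, a document expired from cache and the database can still sit on archive media for years, which is why the text advises specifying the same final-removal criteria to both sides.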

1.3 Supported environments

OnDemand can be installed on a variety of platforms. Three products are available based on platform:
 OnDemand for Multiplatforms
 OnDemand for z/OS
 OnDemand for iSeries



OnDemand for Multiplatforms

OnDemand for Multiplatforms supports diverse IBM and non-IBM platforms and operating systems. You can install OnDemand as a stand-alone product on one platform or within a global archive network. Platforms on which OnDemand can run include:
 AIX V5.1 or later on IBM eServer p5 or later
 HP-UX Version 11i or later on HP 9000 Server rp2400 Series or later
 Sun™ Solaris™ Version 8 on Sun Fire™ B100s Blade Server
 Microsoft Windows 2000, 2003 Server
 Linux on Intel®-based 1 GHz or greater processor
– Red Hat Enterprise Linux (RHEL) AS or ES
– SUSE LINUX Enterprise Server (SLES) 8 SP 3
 Linux for zSeries® on any IBM eServer zSeries model that supports running Linux
– Red Hat Enterprise Linux AS or ES
– SUSE LINUX Enterprise Server 8 SP 3

Linux support is new in Content Manager V8.3. With the addition of this support, you can choose a cost-effective platform, while fully leveraging important information across daily business operations.

Note: Because many Linux versions are available, make sure to use only the supported versions for OnDemand. If you use a nonsupported version, you might run into problems.

Note: Neither the OnDemand PDF Indexer nor OnDemand Report Distribution is supported under Linux for zSeries.

OnDemand for z/OS

Content Manager OnDemand for z/OS is high-performance middleware that capitalizes on strategic IBM hardware and software to provide a highly reliable system for enterprise report management and electronic statement presentment. It works on any zSeries with OS/390 Version 2 Release 8 or later, operational TCP/IP and UNIX® System Services, and DB2 for OS/390 Version 6 or later. Content Manager OnDemand for z/OS includes the capability to mix and match servers to support the architecture and geographic needs of the customers, while allowing the data to be kept closer to the requesting users, minimizing network traffic. For example, a Chicago-based company with a z/OS-based library



server and object server can add an additional object server in Dallas that can be on any OnDemand platform. Or you can choose to run the OnDemand library server on AIX or Microsoft Windows NT®, with the object servers on z/OS.

OnDemand for iSeries

Content Manager OnDemand for iSeries is high-performance middleware that capitalizes on IBM hardware. With OnDemand, the iSeries or AS/400 becomes an archive server for OS/400 business applications or applications on other host systems. It works on any iSeries with OS/400 V5 or later.

Note: The library server and the object server must be on the same iSeries partition. It is not possible to mix servers as is possible with OnDemand for Multiplatforms and OnDemand for z/OS.

1.4 Document access and distribution possibilities

Users' needs vary. They can be different from one company to another. They might even vary inside of the same company. Users can also be the customers or partners of a company. They might have access to a network or require some documents to be available without any network connection. They might need advanced functions such as graphical annotations or audit facilities. Users might have to search for documents when a customer calls in about a particular transaction. They might need to view special monthly reports after the reports are generated. They might access information of different types and from different sources, such as unstructured documents stored either in OnDemand or in some third-party offerings, and structured information stored either in DB2 or in third-party database offerings. OnDemand offers a large choice of solutions that cover many of the users' needs. In this section, we introduce:
 Document access methods for OnDemand
 Document distribution possibilities using OnDemand
 Federated search: WebSphere® Information Integrator Content Edition

1.4.1 Document access

Users can access archived documents from OnDemand using one of the following two methods:
 Connected to the OnDemand server mode
 Disconnected mode



Connected to the OnDemand server mode

A network connection is available between the OnDemand client used by the user and the OnDemand server. The user is able to access all the archived documents, including the most recent archives, based on the user's security rights. The server might be any of the OnDemand server types. The client can be the OnDemand client, a browser client, or a custom client.

Disconnected mode

Data is extracted from an OnDemand server and is written to media that can be easily distributed. Users access the OnDemand data from the media in the same way that they access data that is stored on an OnDemand server. For example, you can store all of your invoices for a period on media and load the media onto a mobile computer. Sales representatives can then access the customer invoices on their mobile computers, without any network connection, while they are in the field. There are two data distribution options:
 CD-ROM - Client Data Distribution (also known as ad-hoc CD-ROM): This option is designed for low-volume, ad-hoc building of a CD-ROM by a user with the OnDemand client. Refer to 15.8, "Ad-hoc CD-ROM mastering" on page 497, for more information.
 CD-ROM - Production Data Distribution services offering: This option is designed for high-volume, batch processing of input files and documents and the production of multiple copies of CD-ROMs. Refer to 15.9, "OnDemand Production Data Distribution" on page 505, for more information.

1.4.2 Document distribution possibility

Users access documents whenever they need them through one of the OnDemand clients. The documents can also be distributed to users whenever the documents are available. OnDemand offers the Report Distribution optional feature, which allows documents to be distributed and shared via e-mail. Report Distribution automatically groups reports and portions of the related reports together, organizes them, and creates bundles that can be sent through e-mail to multiple users or sent to the customer’s printing environment for printing. Refer to Chapter 10, “Report Distribution” on page 311, for more information.



1.4.3 Federated search using WebSphere Information Integrator Content Edition

You might require access to information stored in OnDemand as well as in other repositories that contain structured and unstructured data, within and outside of your company. You can conduct a federated search using WebSphere Information Integrator Content Edition. Although this redbook covers the OnDemand product, it might be useful for you to know that you can use WebSphere Information Integrator Content Edition to perform a federated search against content in different repositories, including the OnDemand repository. WebSphere Information Integrator Content Edition V8.3 enables you to access and work with a wide range of information sources. It is a single, bi-directional interface that enables multiple disparate content sources to look and act as one system. This flexible and highly scalable abstraction layer makes applications repository independent. WebSphere Information Integrator Content Edition provides the following functions and services:
 Access to the underlying content and workflow systems
 Check in, check out, and modify content
 View and update metadata, security, and annotations
 Create and work with renditions, compound documents, workflow tasks, and queues
 View major standard business document formats, including TIFF and MODCA
 Monitor content, content folders, workflow items, and queues for changes, and trigger actions when a change occurs
 Full read-write function
 Ability to organize and work with content assets and workflow items as though they are managed in one system
 Metadata mapping, federated search, and single sign-on
 A uniform superset API to eliminate programming using multiple APIs from different vendors
 Real-time views of content and workflow accessed in place, removing the need to access each repository individually

WebSphere Information Integrator Content Edition provides many technology benefits:
 Service-oriented architecture
 Fully J2EE™-compliant and Web services compatible
 Support for WebSphere Application Server and BEA WebLogic Server



 Support for component distribution and load balancing
 A SOAP interface for Web services applications
 Support for URL-addressable functions

To access data in a different content repository, WebSphere Information Integrator Content Edition provides out-of-box connectors for the following products and systems:
 IBM DB2 Content Manager
 IBM DB2 Content Manager OnDemand (both Multiplatforms and z/OS)
 IBM WebSphere MQ Workflow
 IBM WebSphere Portal Document Manager (read-only)
 IBM Lotus Notes®
 Documentum Content Server
 FileNet Content Services
 FileNet Image Services
 FileNet Image Services Resource Adapter
 FileNet P8 Content Manager
 FileNet P8 Business Process Manager
 Open Text Livelink
 Microsoft Index Server/NTFS
 Stellent Content Server
 Interwoven TeamSite
 Hummingbird Enterprise DM
 Read-only access to the following relational database systems:
– IBM DB2 Universal Database™
– Oracle
– Any database accessible through the WebSphere Information Integrator federated data server

WebSphere Information Integrator Content Edition also provides a toolkit for you to develop, configure, and deploy content connectors for additional commercial and proprietary repositories. Sample connectors are provided. For more information about WebSphere Information Integrator Content Edition, refer to the following Web address: http://www.ibm.com/software/data/integration/db2ii/editions_content.html



9

Chapter 9.

Data conversion

In this chapter, we provide information about data conversion. We discuss the reasons for data conversion and describe different ways to convert data. We mainly focus on Advanced Function Presentation (AFP), Portable Document Format (PDF), Hypertext Markup Language (HTML), and Extensible Markup Language (XML) flows. AFP is the leading high-volume printing data stream, PDF is a frequently required presentation data stream, and HTML, and even more so XML, are emerging technologies. We describe two conversion solutions that integrate with OnDemand: the IBM AFP2WEB Services Offerings from the team who created the AFP data stream, and Xenos d2e from Xenos. We also explain how to index composed AFP documents that are generated without the requisite tags for indexing. In this chapter, we cover the following topics:
 Overview of data conversion
 IBM AFP2WEB Services Offerings and OnDemand
 Xenos and OnDemand

© Copyright IBM Corp. 2003, 2006. All rights reserved.

251


9.1 Overview of data conversion

To work with data conversion, it is important that you understand which data conversions are required, and when and how to convert the data. Perform detailed planning before you begin building your solution to help you to achieve a design that remains efficient for many years to come. In this section, we discuss why to perform data conversion, when to perform it, and how to make the conversion.

9.1.1 Why convert data streams

There are many reasons why you might want to convert data streams. Some of the reasons are as follows:
 Some data streams, such as Hewlett-Packard (HP) Printer Command Language (PCL) or Xerox metacode, are printer specific and are not displayable. Before archiving or displaying the documents, these data streams must be transformed into a compatible format.
 The archived data stream might have to comply with a company's internal rules or regulations. Hence, produced data streams must be transformed into the required final format before being archived.
 The documents might need to be accessible by people outside the company. The flow should be displayable through standard tools available on any, or at least most, clients, such as an Internet browser or Adobe Acrobat Reader.
 The documents might need to be manipulated so that only parts of them are displayed in a personalized way. XML is an emerging flow that allows adaptation and standard exchanges.

9.1.2 When to convert data streams

The decision of when to convert data streams relies mainly on the usage of the system. Typically, converting data at load time requires more time to process the print stream file, and converting data at retrieval time makes user retrieval a little slower. The decision might depend on how many documents are retrieved, compared to how many documents are loaded on a daily basis. It might also depend on legal requirements about the format of stored data. We briefly discuss two different data types and the factors that affect this decision:

- Xerox metacode
- AFP to PDF
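To make the load-time versus retrieval-time comparison concrete, the following sketch totals the daily conversion work under each strategy. The volumes and per-document cost are hypothetical placeholders, not measurements from an OnDemand system.

```python
# Back-of-the-envelope comparison of daily conversion work.
# All numbers are hypothetical; substitute figures measured on your system.

def daily_conversion_seconds(docs_per_day: int, seconds_per_doc: float) -> float:
    """Total seconds spent converting documents in one day."""
    return docs_per_day * seconds_per_doc

# Assume 50,000 documents loaded per day but only 2,000 retrieved.
convert_at_load = daily_conversion_seconds(50_000, 0.5)
convert_at_retrieval = daily_conversion_seconds(2_000, 0.5)

# When retrievals are a small fraction of loads, converting at retrieval
# time does far less total work, at the cost of a slower first view.
print(convert_at_load, convert_at_retrieval)  # 25000.0 1000.0
```

When retrieval counts approach or exceed load counts, the balance tips the other way, which is why the decision depends on the daily ratio described above.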


Content Manager OnDemand Guide


Xerox metacode

Xerox metacode is designed in such a way that all the presentation resources are stored on the printer itself. Because the resources exist on the printer and not on the computer, Xerox metacode cannot be displayed on any computer. Xerox metacode can only be reprinted to a printer that contains the original resources for the document. If you want to display Xerox metacode through an OnDemand client, you must convert it to another format. If you intend to use a PC client, you must convert the metacode to AFP or PDF at load time, because the thick client does not support the retrieval transform. If you intend to use OnDemand to reprint the metacode documents to the original metacode printer, then the documents must be stored as metacode. When the documents are stored as metacode, the only way to view them is to enable the retrieval transform through the OnDemand Web Enablement Kit (ODWEK) and convert them to HTML, PDF, or XML.

AFP to PDF

If there is a requirement to present AFP documents in the PDF format over the Web, the best way is to store the documents in their native format and then convert them to PDF at retrieval time. This is because AFP documents are stored much more efficiently than PDF. When multiple AFP documents refer to the same resources, these resources are stored only once and are shared among the AFP documents. PDF documents are the complete opposite: all the resources necessary to present a PDF document are contained within the document. The PDF document is larger than the original AFP, and the entire print stream, when it is divided into separate customer statements, is much larger, because each individual statement holds all the resources. See 7.2, "Indexing issues with PDF" on page 188, for an example of how a 100 KB PDF document can be indexed as five separate PDF documents, with a total of 860 KB in the resulting indexed file size. Timing is essential to the decision as well. The amount of time needed to convert a document depends on how large it is and how many resources or fonts are associated with it.
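The resource-duplication effect can be sketched with a toy model: once a print stream is split into individual statements, each statement carries its own copy of the shared resources. The sizes below are invented for illustration; they are not the figures from 7.2.

```python
# Toy model of PDF size growth when a resource-heavy print stream is
# split into per-customer statements. Sizes in KB are hypothetical.

def split_total_kb(shared_resources_kb: int, content_kb_per_statement: int,
                   statements: int) -> int:
    # Each split-out statement must embed its own copy of the resources.
    return statements * (shared_resources_kb + content_kb_per_statement)

single_file = 150 + 5 * 10               # resources stored once: 200 KB
split_files = split_total_kb(150, 10, 5)  # resources copied 5 times: 800 KB
print(single_file, split_files)  # 200 800
```

The growth is linear in the number of statements, which is why storing the AFP natively and converting at retrieval time is usually the more space-efficient choice.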

9.1.3 How to convert the data: integrated solutions with OnDemand

Two data conversion solutions can be integrated with OnDemand:

- IBM AFP2WEB Services Offerings
- Xenos d2e

The IBM AFP2WEB solution is a service offering from the team that created AFP, and it focuses mainly on AFP.

Chapter 9. Data conversion



Xenos d2e complements the AFP support with additional services, such as HP PCL and Xerox metacode support. Here are some considerations for target formats:

- HTML might be used with the same intent, but an HTML flow is not always displayed identically, depending on the Web browser used. Additional testing that accounts for the needs and the environments encountered might be necessary for validation before the implementation.
- PDF might be used as a way to make documents available through standard and free tools such as Adobe Acrobat Reader. The transformed documents should be displayable, savable, and printable in the same way regardless of the environment in which the user is working.
- XML is an intermediate text-based data stream that allows documents to be manipulated, regardless of the source data stream, and displayed totally or partially in a personalized way. The use of XML usually involves additional development, including scripts and style sheets.

In the following two sections, we discuss the supported environments, the main functions, and the way these solutions integrate with OnDemand. We also provide some samples.

9.2 IBM AFP2WEB Services Offerings and OnDemand

AFP2PDF and AFP2HTML work in a similar manner. The main difference between these two solutions, besides the different target formats, is that AFP2PDF takes into account the security options offered by the PDF format. This might be an additional element of choice between PDF and HTML. Figure 9-1 shows the AFP2PDF and AFP2HTML transform process.

Figure 9-1 AFP2PDF and AFP2HTML



AFP2PDF and AFP2HTML are mutually exclusive. They are both available on Windows NT and 2000, AIX, Sun Solaris, Linux, HP-UX, and z/OS versions. The AIX version also operates in the Qshell environment of the iSeries server. AFP2PDF and AFP2HTML are available on all ODWEK-supported platforms.

9.2.1 AFP2HTML: converting AFP to HTML

AFP2HTML converts your AFP documents to HTML text files and GIF image files. You can display the resulting output with a Web browser, such as Firefox or Microsoft Internet Explorer. By default, the documents are displayed with the AFP2HTML applet. However, because the AFP2HTML conversion cannot exactly map the AFP format to the HTML format, certain limitations and approximations exist when you view the output with the Web browser.

How AFP2HTML works with ODWEK

AFP2HTML works with ODWEK through the configuration of the following parameters and files:

- The AFPVIEWING parameter in the arswww.ini file
- The [AFP2HTML] section in the arswww.ini file
- The configuration file, afp2html.ini, specified by the CONFIGFILE parameter in the arswww.ini file
- Control files

The AFPVIEWING parameter in the arswww.ini file

When a user retrieves an AFP document from the OnDemand server, the value of the AFPVIEWING parameter in the arswww.ini file determines what action, if any, ODWEK takes before sending the document to the viewer. AFPVIEWING might have the following values:

- PLUGIN: ODWEK does not convert AFP documents (the default), and the plug-in must be installed on the users' computers.
- HTML: ODWEK converts AFP documents to HTML documents with AFP2HTML. Additional information can be found in the [AFP2HTML] section of the arswww.ini file.
- PDF: ODWEK converts AFP documents to PDF documents with AFP2PDF. Additional information can be found in the [AFP2PDF] section of the arswww.ini file.



Example 9-1 shows the sample lines in the arswww.ini file, where AFP is set to be converted to HTML.

Example 9-1 The arswww.ini file: AFP converted to HTML

[DEFAULT BROWSER]
AFPVIEWING=HTML

The [AFP2HTML] section in the arswww.ini file

The [AFP2HTML] section contains the parameters that are used by AFP2HTML:

- CONFIGFILE: Specifies the configuration file that contains the options used by AFP2HTML to convert AFP documents and resources into HTML data, fonts, and images.
- INSTALLDIR: Specifies the directory that contains the AFP2HTML programs, configuration files, and mapping files.
- USEEXECUTABLE: Determines whether ODWEK starts AFP2HTML by using the shared library or the executable. The default value is 0, which means that ODWEK uses the shared library; this value should ensure better performance. When the transform is used in a Java environment, USEEXECUTABLE=1 might be necessary if memory issues are reported by the Java virtual machine (JVM).

Example 9-2 shows the sample lines of the [AFP2HTML] section in an arswww.ini file, where AFP is set to be converted to HTML.

Example 9-2 The arswww.ini file: AFP2HTML section

[AFP2HTML]
CONFIGFILE=afp2html.ini
INSTALLDIR=/usr/lpp/ars/www/bin/afp2html
USEEXECUTABLE=0



The configuration file, afp2html.ini, specified in the arswww.ini file

Example 9-3 provides a sample configuration file, afp2html.ini, specified by the CONFIGFILE parameter in the arswww.ini file.

Example 9-3 The afp2html.ini file

[CREDIT-CREDIT]
UseApplet=TRUE
ScaleFactor=1.0
CreateGIF=TRUE
FontMapFile=creditFontMap.cfg
ImageMapFile=creditImageMap.cfg

[default]
ScaleFactor=1.0
CreateGIF=TRUE
FontMapFile=fontmap.cfg
ImageMapFile=imagemap.cfg

The structure of the afp2html.ini file is similar to a Windows INI file. It contains one section for each AFP application and a default section. The title line of the section identifies the application group and application. For example, the title line [CREDIT-CREDIT] identifies the CREDIT application group and the CREDIT application. Use the - (dash) character to separate the names in the title line. The names must match the application group and application names defined to the OnDemand server. If the application group contains more than one application, then create one section for each application. The options in the [default] section are used by AFP2HTML to process documents for AFP applications that are not identified in the afp2html.ini file. The defaults are also used if an AFP application section does not include one of the options. The UseApplet option is a directive to ODWEK. It determines whether the AFP2HTML applet is used to view the output from the AFP2WEB transform. The default value is TRUE. If you specify FALSE (the AFP2HTML applet is not used to view the output), the output is formatted and displayed by the Web browser.
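The section-lookup rule just described (an exact [ApplicationGroup-Application] section, falling back to [default] for any missing option) can be sketched with Python's configparser. The resolver below is illustrative only; it is not ODWEK code, and the section and option names come from the afp2html.ini sample.

```python
# Sketch of [GROUP-APP] lookup with [default] fallback, as described
# for the afp2html.ini file. Illustrative only, not ODWEK code.
import configparser

INI = """
[CREDIT-CREDIT]
UseApplet=TRUE
ScaleFactor=1.0

[default]
ScaleFactor=1.0
CreateGIF=TRUE
FontMapFile=fontmap.cfg
"""

def resolve(cfg, group, app, option):
    section = f"{group}-{app}"          # title line, e.g. [CREDIT-CREDIT]
    if cfg.has_option(section, option):
        return cfg.get(section, option)
    return cfg.get("default", option)   # fall back to the [default] section

cfg = configparser.ConfigParser()
cfg.read_string(INI)
print(resolve(cfg, "CREDIT", "CREDIT", "UseApplet"))  # TRUE (app section)
print(resolve(cfg, "CREDIT", "CREDIT", "CreateGIF"))  # TRUE (from [default])
```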



Note: The Java AFP2HTML Viewer is an applet that enables users to view the output generated by AFP2HTML. AFP2HTML converts AFP documents and resources into HTML documents. The Java AFP2HTML Viewer provides a toolbar with controls that help users work with documents, such as pagination and annotation, including controls for large objects. One advantage of the applets is that users never have to install or upgrade software on the PC to use them. When using the applets and viewers that are provided by IBM, the documents that are retrieved from an OnDemand server remain compressed until they reach the viewer. The viewer uncompresses the documents and displays the pages in a Web browser window. If a document is stored in OnDemand as a large object, the viewer retrieves and uncompresses segments of the document, as needed, when the user moves through the pages of the document.

The AllObjects parameter determines how ODWEK processes documents that are stored as large objects in OnDemand. The default value is 0, which means that ODWEK retrieves only the first segment of a document. If you specify a value of 1, then ODWEK retrieves all of the segments and converts them before sending the document to the viewer.

Note: If you enable Large Object support for very large documents and specify 1, then your users might experience a significant delay before they can view the document.

The ScaleFactor parameter scales the output with the given scale factor. The default value is 1.0. For example, specifying ScaleFactor=2.0 scales the output to twice the default size; specifying ScaleFactor=0.5 scales the output to one half of the default size. The default size is derived from the Zoom setting on the Logical Views page in the OnDemand application.

The SuppressFonts parameter determines whether the AFP text strings are transformed. If you specify SuppressFonts=TRUE, any text that uses a font listed in the Font Map file is not transformed. The default value is FALSE, which means that all of the AFP text strings are transformed. The Font Map file is identified with the FontMapFile option.

The FontMapFile parameter identifies the full path name of the Font Map file. The Font Map file contains a list of fonts that require special processing. The default Font Map file is named imagfont.cfg and resides in the directory that



contains the AFP2HTML programs. See the AFP2WEB transform documentation for details about the Font Map file.

The ImageMapFile parameter identifies the image mapping file. The image mapping file can be used to remove images from the output, improve the look of shaded images, and substitute existing images for images created by the AFP2HTML transform. Mapping images that are common across your AFP documents (for example, a company logo) reduces the time required to transform documents. If specified, the image mapping file must exist in the directory that contains the AFP2HTML programs. See the AFP2WEB transform documentation for details about the image mapping file.

Control files

Three control files are used by AFP2HTML:

- The font map file, by default imagfont.cfg, is listed in the afp2html.ini file. See the AFP2WEB transform documentation for details about this file.
- The image map file is listed in the afp2html.ini file. We present some samples of how to use it in "Mapping AFP images" on page 259.
- The transform profile file, by default afp2web.ini, contains parameters that control settings for AFP2HTML:
  – ResourceDataPath specifies the directories that the transform uses to search for AFP resources.
  – FontDataPath specifies the base directory that the transform uses to search for the font configuration files.
  – PageSegExt sets the file extension to be used when searching for a page segment file in a resource directory. For example, if all the page segment resource files have the file extension .PSG, you can set it as: PageSegExt=*.PSG

– OverlayExt sets the file extension to be used when searching for an overlay file in a resource directory. For example, if all of the overlay resource files had the file extension of .OLY, you can set it as: OverlayExt=*.OLY

Mapping AFP images

When an AFP document is transformed, images are identified using parameters that specify the page segment name (if available), the position, and the size of each image. If an AFP page contains images, AFP2HTML creates image information entries as comments in the HTML file. The HTML comments can be copied into an image map configuration file to define and map a particular image.



Mapping images gives you the option of handling AFP images in different ways during the transform process, including:

- Removing images: You can remove all or some of the images from your transformed output.
- Substituting existing images: You can substitute all or some of the images with previously generated Internet images in the HTML output, such as Graphics Interchange Format (GIF), Joint Photographic Experts Group (JPEG), Portable Network Graphics (PNG), or other images.
- Substituting AFP shaded images with colored areas: You can substitute all or some of the images with a solid-colored rectangle. This is especially useful for improving the look of the shaded areas that are defined as images in the AFP data stream.
- Adding an image that is not in the AFP data: You can add an image, which is not part of the AFP data, to the HTML or PDF display that models the use of a preprinted form used during printing.

The configuration file handles all the transform processing for the images. For example, when the transform program is run against an AFP document and an image is encountered, the program looks for a matching image entry in the configuration file. If an entry is defined that matches the name, position, size, or a combination of the three, the information in the configuration file is used for the transform of the image.

Creating the image map file

To map images for your AFP files, you must create an image map configuration file. The best way to do this is to transform a sample AFP document that represents all documents that will use the configuration file. You then identify the image entries that are created during the transform and define them in the configuration file. Run the afp2web command using the afpdoc AFP document:

afp2web c:\documents\afpdoc.afp



We get the c:\documents\afpdoc.html file, which contains HTML comments for each image, as shown in Example 9-4.

Example 9-4 Image map file

<!-- IMAGE position:(5.250in,0.613in) / (567px,66px) size:(0.667in,0.800in) / (72px,86px) -->
<!-- IMAGE_END -->
<!-- IMAGE position:(0.863in,8.483in) / (93px,916px) size:(2.400in,0.667in) / (259px,72px) -->
<!-- IMAGE_END -->
<!-- IMAGE position:(3.596in,8.550in) / (388px,923px) size:(2.633in,0.700in) / (284px,76px) -->
<!-- IMAGE_END -->
<!-- IMAGE name:(S1PSEG01) position:(6.162in,8.483in) / (666px,916px) size:(2.067in,0.604in) / (223px,65px) -->
<!-- IMAGE_END -->
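A small script can collect the generated comment entries so they can be pasted into the image map configuration file. This helper is a hypothetical aid, not part of the AFP2WEB offering; it only assumes the comment syntax shown in Example 9-4.

```python
# Extract IMAGE ... IMAGE_END comment pairs from transformed HTML so they
# can be copied into an image map configuration file. Hypothetical helper.
import re

html = """
<!-- IMAGE position:(5.250in,0.613in) / (567px,66px) size:(0.667in,0.800in) / (72px,86px) -->
<!-- IMAGE_END -->
<!-- IMAGE name:(S1PSEG01) position:(6.162in,8.483in) / (666px,916px) size:(2.067in,0.604in) / (223px,65px) -->
<!-- IMAGE_END -->
"""

def extract_image_entries(text: str) -> list:
    # Each entry is an IMAGE comment followed by its IMAGE_END comment.
    return re.findall(r"<!-- IMAGE[^>]*-->\s*<!-- IMAGE_END -->", text)

entries = extract_image_entries(html)
print(len(entries))  # 2
```

In practice you would read the transformed .html file from disk and write the matched entries to the map file before filling in the substitutions.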

Now it is possible to remove, substitute, or add images.

Removing images

If no other information is given as part of an entry in the image map configuration file, such as extra lines between the image position and size definitions and the IMAGE_END tag, the entry is considered empty. The image is then removed from the transformed GIF files. The image information that the transform program generates is empty by default.

Substituting existing images

To use an existing image, add IMAGE definition parameters between the starting <!-- IMAGE ... --> and ending <!-- IMAGE_END --> lines of an image information entry in the configuration file. For example, in Example 9-5, when the first image is encountered, it is substituted with logo1.gif, and when the second image is encountered, it is substituted with logo2.gif.

Example 9-5 Substituting images

<!-- IMAGE position:(5.250in,0.613in) / (567px,66px) size:(0.667in,0.800in) / (72px,86px) -->
IMAGE XPos=0 YPos=0 XSize=900 YSize=200 URL="http://www.abc.com/images/logo1.gif" ZIndex=1
<!-- IMAGE_END -->
<!-- IMAGE position:(0.863in,8.483in) / (93px,916px) size:(2.400in,0.667in) / (259px,72px) -->
IMAGE XPos=0 YPos=0 XSize=500 YSize=300 URL=images/logo2.gif ZIndex=2
<!-- IMAGE_END -->
<!-- IMAGE position:(3.596in,8.550in) / (388px,923px) size:(2.633in,0.700in) / (284px,76px) -->
<!-- IMAGE_END -->



<!-- IMAGE name:(S1PSEG01) position:(6.162in,8.483in) / (666px,916px) size:(2.067in,0.604in) / (223px,65px) -->
<!-- IMAGE_END -->

Substituting AFP shaded images with colored areas

Many AFP documents contain areas on the page that are shaded with a gray box. This is accomplished in the AFP data stream by defining an image that has pixels laid out in a regular checkerboard-like pattern to create a gray shading effect. When attempting to display this type of image, however, it often becomes distorted due to the scale factor and resolution of the display hardware. To avoid this problem, you can define a colored area to use instead of the shaded image. To substitute a shaded image with a colored area, add COLORED_AREA definition parameters between the starting <!-- IMAGE ... --> and ending <!-- IMAGE_END --> lines of an image information entry in the configuration file. For example, in Example 9-6, when the first image is encountered, it is substituted with a red, 86 x 72 pixel area positioned at (567,66).

Example 9-6 Substituting shaded area

<!-- IMAGE position:(5.250in,0.613in) / (567px,66px) size:(0.667in,0.800in) / (72px,86px) -->
COLORED_AREA XPos=567 YPos=66 XSize=72 YSize=86 Color=red
<!-- IMAGE_END -->
<!-- IMAGE position:(0.863in,8.483in) / (93px,916px) size:(2.400in,0.667in) / (259px,72px) -->
<!-- IMAGE_END -->
<!-- IMAGE position:(3.596in,8.550in) / (388px,923px) size:(2.633in,0.700in) / (284px,76px) -->
<!-- IMAGE_END -->
<!-- IMAGE name:(S1PSEG01) position:(6.162in,8.483in) / (666px,916px) size:(2.067in,0.604in) / (223px,65px) -->
<!-- IMAGE_END -->

Adding an image that is not in the AFP data

Occasionally, preprinted forms are used during the printing process. These preprinted forms might have a company logo, a table, or a grid that is filled in with the print data. The transforms enable an image, which emulates the preprinted form, to be included in the transformed output. This image can be in color and can be included on all the pages or only on the first page. To include an image that emulates a preprinted form, add the STATIC_IMG definition parameters between the <!-- IMAGE --> and <!-- IMAGE_END --> lines in the configuration file. These starting and ending lines are special in that they do not include any name, position, or size information. You must manually add these starting and ending lines as well as the STATIC_IMG definition to the



configuration file. For example, as shown in Example 9-7, the form1.gif GIF image is included on all pages in the HTML output.

Example 9-7 Emulating a preprinted form

<!-- IMAGE -->
STATIC_IMG XPos=0 YPos=0 XSize=72 YSize=722 URL="http://www.abc.com/images/form1.gif" Type=ALL ZIndex=1
<!-- IMAGE_END -->
<!-- IMAGE position:(0.863in,8.483in) / (93px,916px) size:(2.400in,0.667in) / (259px,72px) -->
<!-- IMAGE_END -->
<!-- IMAGE position:(3.596in,8.550in) / (388px,923px) size:(2.633in,0.700in) / (284px,76px) -->
<!-- IMAGE_END -->
<!-- IMAGE name:(S1PSEG01) position:(6.162in,8.483in) / (666px,916px) size:(2.067in,0.604in) / (223px,65px) -->
<!-- IMAGE_END -->

AFP2HTML command

As shown earlier, the conversion function is called automatically from OnDemand. However, for other purposes, such as creating the image map file or testing the conversion result, the afp2web command might be used. See the AFP2WEB transform documentation for details about the afp2web command parameters.

9.2.2 AFP2PDF: converting AFP to PDF

AFP2PDF converts your AFP documents to the Adobe PDF format so that you can view them with Adobe Acrobat. If the Adobe Acrobat plug-in is installed with the Web browser, you can view and print these documents within the browser application. AFP2PDF exactly maps the AFP format to the PDF format, making it a more robust solution than AFP2HTML. However, to display this data, the Adobe Acrobat software must be installed on the client workstation.

How AFP2PDF works with ODWEK

AFP2PDF works with ODWEK through the configuration of the following parameters and files:

- The AFPVIEWING parameter in the arswww.ini file
- The [AFP2PDF] section in the arswww.ini file
- The configuration file, afp2pdf.ini, specified by the CONFIGFILE parameter in the arswww.ini file
- Control files



The AFPVIEWING parameter in the arswww.ini file

When a user retrieves an AFP document from the OnDemand server, the value of the AFPVIEWING parameter in the arswww.ini file determines what action, if any, ODWEK takes before sending the document to the viewer:

- PLUGIN: ODWEK does not convert AFP documents (the default), and the plug-in must be installed on the users' computers.
- HTML: ODWEK converts AFP documents to HTML documents with AFP2HTML. You can find additional information in the [AFP2HTML] section of the arswww.ini file.
- PDF: ODWEK converts AFP documents to PDF documents with AFP2PDF. You can find additional information in the [AFP2PDF] section of the arswww.ini file.

Example 9-8 shows the sample lines in the arswww.ini file, where AFP is set to be converted to PDF.

Example 9-8 The arswww.ini file: AFP converted to PDF

[DEFAULT BROWSER]
AFPVIEWING=PDF

The [AFP2PDF] section in the arswww.ini file

The [AFP2PDF] section contains the parameters that are used by AFP2PDF:

- CONFIGFILE: Specifies the configuration file that contains the options used by AFP2PDF to convert AFP documents and resources into PDF documents.
- INSTALLDIR: Specifies the directory that contains the AFP2PDF programs, configuration files, and mapping files.
- USEEXECUTABLE: Determines whether ODWEK starts AFP2PDF by using the shared library or the executable. The default value is 0, which means that ODWEK uses the shared library; this value should ensure better performance. When the transform is used in a Java environment, USEEXECUTABLE=1 might be necessary if memory issues are reported by the JVM.



Example 9-9 shows the sample lines of the [AFP2PDF] section in the arswww.ini file, where AFP is set to be converted to PDF.

Example 9-9 The arswww.ini file: AFP2PDF section

[AFP2PDF]
CONFIGFILE=afp2pdf.ini
INSTALLDIR=/usr/lpp/ars/www/bin/afp2pdf
USEEXECUTABLE=0

The configuration file, afp2pdf.ini, specified in the arswww.ini file

Example 9-10 provides a sample configuration file, afp2pdf.ini, specified by the CONFIGFILE parameter in the arswww.ini file.

Example 9-10 The afp2pdf.ini file

[CREDIT-CREDIT]
OptionsFile=
ImageMapFile=creditImageMap.cfg

[default]
OptionsFile=
ImageMapFile=imagemap.cfg
AllObjects=0

The structure of the file is similar to a Windows INI file. It contains one section for each AFP application and a default section. The title line of the section identifies the application group and application. For example, the title line [CREDIT-CREDIT] identifies the CREDIT application group and the CREDIT application. Use the - (dash) character to separate the names in the title line. The names must match the application group and the application names defined to the OnDemand server. If the application group contains more than one application, then create one section for each application.

The parameters that you specify in the [default] section are used by the AFP2PDF transform to process documents for AFP applications that are not identified in the afp2pdf.ini file. The default parameters are also used if an AFP application section does not include one of the specified parameters.

The OptionsFile parameter identifies the full path name of the file that contains the transform options used by the AFP2PDF transform. The transform options are used for AFP documents that require special processing.

The ImageMapFile parameter identifies the image mapping file. The image mapping file can be used to remove images from the output, improve the look of shaded images, and substitute existing images for images created by



the AFP2PDF transform. Mapping images that are common in most of your AFP documents (such as a company logo) reduces the time required to transform documents. If specified, the image mapping file must exist in the directory that contains the AFP2PDF transform programs.

The AllObjects parameter determines how ODWEK processes documents that are stored as large objects in OnDemand. The default value is 0, which means that ODWEK retrieves only the first segment of a document. If you specify a value of 1, then ODWEK retrieves all of the segments and converts them before sending the document to the viewer.

Note: If you enable Large Object support for very large documents and specify a value of 1, then your users might experience a significant delay before they can view the document.
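The AllObjects behavior can be sketched as follows. The segment model is a deliberate simplification for illustration, not ODWEK code.

```python
# Simplified model of the AllObjects setting for large-object documents.
# A "document" here is just a list of stored segments. Illustrative only.

def segments_to_retrieve(segments: list, all_objects: int) -> list:
    if all_objects == 1:
        return segments       # retrieve and convert every segment up front
    return segments[:1]       # default (0): only the first segment

doc = ["seg1", "seg2", "seg3"]
print(segments_to_retrieve(doc, 0))  # ['seg1']
print(segments_to_retrieve(doc, 1))  # ['seg1', 'seg2', 'seg3']
```

This is why AllObjects=1 combined with very large documents front-loads all the retrieval and conversion work, producing the delay the note above warns about.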

Control files

Two control files, listed in the afp2pdf.ini file, are used by AFP2PDF:

- The image map file: We present some samples for using it in "Mapping AFP images" on page 266.
- The options file, by default a2pxopts.ini, with parameters that control settings for AFP2PDF: We present some of the most important parameters in "Option file" on page 271.

Mapping AFP images

When an AFP document is transformed, images are identified using parameters that specify the page segment name (if available), the position, and the size of each image. If an AFP page contains images, AFP2PDF creates image information entries in an output file. The output file entries can be copied into an image map configuration file to define and map a particular image. Mapping images gives you the option of handling AFP images in different ways during the transform process, including:

- Removing images: You can remove all or some of the images from your transformed output.
- Substituting existing images: You can substitute all or some of the images in the PDF output with previously generated JPEG images.



- Substituting AFP shaded images with colored areas: You can substitute all or some of the images with a solid-colored rectangle. This is especially useful for improving the look of the shaded areas that are defined as images in the AFP data stream.
- Adding an image not in the AFP data: You can add an image that is not part of the AFP data to the PDF display that models the use of a preprinted form used during printing.
- Caching frequently used images: You can cache frequently used images to reduce the size of a PDF file.

The configuration file handles all the transform processing for the images. For example, when the transform program is run against an AFP document and an image is encountered, the program looks for a matching image entry in the configuration file. If an entry is defined that matches the name, position, size, or a combination of the three, the information in the configuration file is used for the transform of the image.

Creating the image map file

To map images for your AFP files, you must create an image map configuration file. The best way to do this is to transform a sample AFP document that represents all documents that will use the configuration file. You then identify the image entries created during the transform and define them in the configuration file. Run the afp2pdf command using the afpdoc AFP document:

afp2pdf c:\documents\afpdoc.afp

We get two files:

- c:\documents\afpdoc.pdf (the PDF file for the AFP document)
- c:\imagemap.out (a file with image information for the AFP document; see Example 9-11)

Note: The image information file is generated according to the ImageMapEntries_File parameter in the AFP2PDF options file. Refer to "Control files" on page 266.



Example 9-11 Image map file

<IMAGE position:(5.250in,0.613in) size:(0.667in,0.800in)>
<IMAGE_END>
<IMAGE position:(0.863in,8.483in) size:(2.400in,0.667in)>
<IMAGE_END>
<IMAGE position:(3.596in,8.550in) size:(2.633in,0.700in)>
<IMAGE_END>
<IMAGE name:(S1PSEG01) position:(6.162in,8.483in) size:(2.067in,0.604in)>
<IMAGE_END>

The image information for each image comes in pairs. The first line contains the page-segment resource name (only if available), the position values in inches, and the size values in inches. The second line ends the entry for the image. The first value for the position and size gives the horizontal dimension, and the second gives the vertical dimension. The position measurements are for the upper left-hand corner of the image relative to the upper left-hand corner of the page. The lines copied from the output file (imagemap.out in this example) are used to create the image map configuration file (imagemap.cfg by default). The image information in the configuration file is used to identify the images in the AFP document during the transform process. Each IMAGE tag, along with its corresponding IMAGE_END tag, defines a single image information entry in the configuration file. Now it is possible to remove, substitute, or add images.
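The position and size convention just described can be illustrated with a short parser for the entry format in Example 9-11. This is an illustrative sketch, not part of the AFP2PDF transform.

```python
# Parse name, position, and size (inches) from an AFP2PDF image
# information entry such as those in Example 9-11. Illustrative only.
import re

def parse_entry(line: str) -> dict:
    name = re.search(r"name:\((\w+)\)", line)
    pos = re.search(r"position:\(([\d.]+)in,([\d.]+)in\)", line)
    size = re.search(r"size:\(([\d.]+)in,([\d.]+)in\)", line)
    return {
        "name": name.group(1) if name else None,  # page segment, when present
        # position: upper left-hand corner of the image, relative to the page
        "x_in": float(pos.group(1)), "y_in": float(pos.group(2)),
        # first value is horizontal, second is vertical
        "w_in": float(size.group(1)), "h_in": float(size.group(2)),
    }

entry = "<IMAGE name:(S1PSEG01) position:(6.162in,8.483in) size:(2.067in,0.604in)>"
print(parse_entry(entry))
```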

Removing images

If no other information is given as part of an entry in the image map configuration file, such as extra lines between the image position and size definitions and the IMAGE_END tag, the entry is considered empty, and the image is removed from the transformed PDF files. The image information that the transform program generates is empty by default.

Substituting existing images

To use an existing image, add IMAGE definition parameters between the starting <IMAGE ...> and ending <IMAGE_END> lines of an image information entry in the configuration file. For example, in Example 9-12, when the first image is encountered, it is substituted with logo1.jpg, and when the second image is encountered, it is substituted with logo2.jpg.



Example 9-12 Substituting images

<IMAGE position:(5.250in,0.613in) / (567px,66px) size:(0.667in,0.800in) / (72px,86px)>
IMAGE XPos=0 YPos=0 XSize=900 YSize=200 Filename="/images/logo1.jpg"
<IMAGE_END>
<IMAGE position:(0.863in,8.483in) / (93px,916px) size:(2.400in,0.667in) / (259px,72px)>
IMAGE XPos=0 YPos=0 XSize=500 YSize=300 Filename="/images/logo2.jpg"
<IMAGE_END>
<IMAGE position:(3.596in,8.550in) / (388px,923px) size:(2.633in,0.700in) / (284px,76px)>
<IMAGE_END>
<IMAGE name:(S1PSEG01) position:(6.162in,8.483in) / (666px,916px) size:(2.067in,0.604in) / (223px,65px)>
<IMAGE_END>

Substituting AFP shaded images with colored areas

Many AFP documents contain areas on the page that are shaded with a gray box. This is accomplished in the AFP data stream by defining an image whose pixels are laid out in a regular checkerboard-like pattern to create a gray shading effect. When this type of image is displayed, however, it often becomes distorted due to the scale factor and resolution of the display hardware. To avoid this problem, you can define a colored area to be used instead of the shaded image. To substitute a shaded image with a colored area, add SHADED_AREA definition parameters between the starting <IMAGE ...> and ending <IMAGE_END> lines of an image information entry in the configuration file. For example, in Example 9-13, when the first image is encountered, it is substituted with a red, 0.667in x 0.8in area positioned at (5.25in,0.613in).

Example 9-13 Substituting shaded area

<IMAGE position:(5.250in,0.613in) size:(0.667in,0.800in)>
SHADED_AREA XPos=5.250 YPos=0.613 XSize=0.667 YSize=0.800 Shade_R=1.0 Shade_G=0.0 Shade_B=0.0
<IMAGE_END>
<IMAGE position:(0.863in,8.483in) size:(2.400in,0.667in)>
<IMAGE_END>
<IMAGE position:(3.596in,8.550in) size:(2.633in,0.700in)>
<IMAGE_END>
<IMAGE position:(6.162in,8.483in) size:(2.067in,0.604in)>
<IMAGE_END>

Chapter 9. Data conversion

269


Adding an image not in the AFP data

Occasionally, preprinted forms are used during the printing process. These preprinted forms might have a company logo, a table, or a grid that is filled in with the print data. The transforms let an image that emulates the preprinted form be included in the transformed output. This image can be in color and can be included on all the pages or only the first page created. To include an image that emulates a preprinted form, add the STATIC_IMG definition parameters between the starting <IMAGE> and ending <IMAGE_END> lines in the configuration file. These starting and ending lines are special in that they do not include position or size information. You must manually add these starting and ending lines as well as the STATIC_IMG definition to the configuration file. In Example 9-14, the form1.jpg JPEG image is included on all pages in the PDF output.

Example 9-14 Emulating a preprinted form

<IMAGE>
STATIC_IMG XPos=0 YPos=0 XSize=72 YSize=722 Filename="c:\images\form1.jpg" Type=ALL
<IMAGE_END>
<IMAGE position:(5.250in,0.613in) size:(0.667in,0.800in)>
<IMAGE_END>
<IMAGE position:(0.863in,8.483in) size:(2.400in,0.667in)>
<IMAGE_END>
<IMAGE position:(3.596in,8.550in) size:(2.633in,0.700in)>
<IMAGE_END>
<IMAGE position:(6.162in,8.483in) size:(2.067in,0.604in)>
<IMAGE_END>

Caching frequently used images

In many AFP documents, the same image is used many times throughout the document, such as a company logo that appears on each page. It is possible to cache this image so that it is stored only once and referenced each time it is used. Caching frequently used images reduces the size of the resulting PDF file. To cache an image, add a CACHE_IMG definition parameter between the starting <IMAGE ...> and ending <IMAGE_END> lines of an image information entry in the configuration file. Example 9-15 shows how to cache frequently used images. When the first image is encountered, it is cached with the name IMG0.
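The effect of caching can be sketched as follows (Python, illustrative only, not the transform's implementation): the image bytes are stored once under the cache name, and every later occurrence is just a short reference to that name.

```python
# Sketch only (not the transform's implementation): with caching, the
# image bytes are stored once under the cache name, and each page that
# uses the image carries only a reference to that name.
def build_output(page_images, cache):
    refs = []
    for name, data in page_images:
        if name not in cache:
            cache[name] = data        # image bytes stored once
        refs.append(name)             # each page keeps only a reference
    return refs

logo = b"\xffJFIF-pretend-image-data" * 64
cache = {}
refs = build_output([("IMG0", logo)] * 50, cache)   # same logo on 50 pages
print(len(refs), len(cache))  # 50 1
```

Fifty pages reference the logo, but the output holds only one copy of its bytes, which is why the resulting PDF is smaller.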



Example 9-15 Caching an image

<IMAGE position:(5.250in,0.613in) size:(0.667in,0.800in)>
CACHE_IMG NAME=IMG0
<IMAGE_END>
<IMAGE position:(0.863in,8.483in) size:(2.400in,0.667in)>
<IMAGE_END>
<IMAGE position:(3.596in,8.550in) size:(2.633in,0.700in)>
<IMAGE_END>
<IMAGE position:(6.162in,8.483in) size:(2.067in,0.604in)>
<IMAGE_END>

Option file

An option file can be associated with a specific OnDemand application group and application, or serve as the default. We present some of the options that are specific to the PDF flow. These concern either information made available to the user through options in the PDF viewer or security settings that limit the actions a user can take with the document. See the AFP2PDF transform documentation for details about the option file parameters.

Information associated with a PDF document

Options associated with a PDF document include:

- Append_PDF_File
  This parameter appends the listed PDF file to the generated PDF file. For example, the following line appends the file C:\term.pdf to the beginning of the generated PDF file:
  Append_PDF_File="C:\\term.pdf",0
  The following line appends the file C:\term.pdf to the end of the generated PDF file:
  Append_PDF_File="C:\\term.pdf",1
- Author
  This parameter specifies the text for the Author entry in the Info Dictionary of the output PDF file. This information is displayed to the user if the "general document info" function is selected in Adobe Acrobat.
- Creator
  This parameter specifies the text for the Creator entry in the Info Dictionary of the output PDF file. This information is displayed to the user if the "general document info" function is selected in Adobe Acrobat.



- Disable_Bookmark_Generation
  If page-level indexing information is available in the input AFP document, PDF bookmarks are automatically generated. If this option is set to FALSE, the bookmarks are not created.
- Keywords
  This parameter sets the text for the Keywords entry in the Info Dictionary of the output PDF file. This information is displayed to the user if the "general document info" function is selected in Adobe Acrobat.
- Show_Outline
  If an AFP document contains index data, the transform converts this index information into PDF outline and bookmark functions. If the output PDF file contains any bookmark information, the outline window is always displayed when viewed with Adobe Acrobat. By setting this parameter value to FALSE, the outline window is not displayed.
- Subject
  This parameter sets the text for the Subject entry in the Info Dictionary of the output PDF file. This information is displayed to the user if the "general document info" function is selected in Adobe Acrobat.
- Title
  This parameter sets the text for the Title entry in the Info Dictionary of the output PDF file. This information is displayed to the user if the "general document info" function is selected in Adobe Acrobat.
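Putting several of these parameters together, an option file fragment might look like the following. The parameter names are from the list above; all values are invented for illustration, and the exact quoting and syntax are defined in the AFP2PDF transform documentation.

```
Title=Monthly Statement
Author=Acme Utilities
Creator=AFP2PDF Transform
Subject=Customer billing
Keywords=invoice, statement
Show_Outline=FALSE
Append_PDF_File="C:\\terms.pdf",1
```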

Security

Protecting the content of the PDF document is accomplished with encryption. This PDF security feature is supported by the AFP2PDF transform and follows the password features supported within the Adobe Acrobat product. A PDF document can have two kinds of passwords, a document open password and a permissions password. When a document open password (also known as a user password) is used, any user who attempts to open the PDF document is required to supply the correct password. The encrypted PDF in the transform is also tied to disabling certain operations when displayed in Adobe Acrobat. Using letter codes, any combination of the following operations can be disabled:

c  Modifying the document's contents
s  Copying text and graphics from the document
a  Adding or modifying text annotations and interactive form fields
p  Printing the document


If any of these operations is disabled, a permissions password (also known as an owner or master password) must also be specified. Any user who needs to override a restricted operation must supply the correct permissions password. Both types of passwords can be set for a document. If the PDF document has both types of passwords, a user can open it with either password. The Adobe products enforce the restrictions set by the permissions, owner, or master password.

Note: Not all products that process PDF files from companies other than Adobe fully support and respect these settings. Users of these programs might be able to bypass some of the restrictions set.

The associated parameters in the Option file are:

- Default_Encryption_Permissions
  This parameter sets the default encryption permissions to be used when encrypting the PDF output. The permission flags include:
  p  Do not print the document from Acrobat.
  c  Changing the document is denied in Acrobat.
  s  Selection and copying of text and graphics are denied.
  a  Adding or changing annotations or form fields is denied.
  The following flags are defined for 128-bit encryption (PDF 1.4, Acrobat 5.0):
  i  Disable editing of form fields.
  e  Disable extraction of text and graphics.
  d  Disable document assembly.
  q  Disable high-quality printing.

  A flag of 5 can be used in combination with one of the "old" flags to force 128-bit encryption without setting any of the i, e, d, or q flags. Using any of these Acrobat 5 related flags produces a file that cannot be opened with older versions of Acrobat.
- Default_Owner_Password
  This parameter sets a default owner encryption password.
- Default_User_Password
  This parameter sets a default user encryption password.
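How these flag strings compose can be sketched as follows (the letters are from the lists above; the validation logic itself is an illustration, not part of the transform):

```python
# Illustrative sketch: validate a Default_Encryption_Permissions flag
# string. The rule that any restriction requires a permissions (owner)
# password is stated in the text, and i/e/d/q (or the "5" flag) force
# 128-bit encryption that older Acrobat versions cannot open.
ACROBAT4_FLAGS = set("pcsa")   # pre-Acrobat 5 restriction flags
ACROBAT5_FLAGS = set("iedq")   # 128-bit (PDF 1.4 / Acrobat 5.0) flags

def needs_128bit(flags, owner_password=None):
    letters = set(flags)
    unknown = letters - ACROBAT4_FLAGS - ACROBAT5_FLAGS - {"5"}
    if unknown:
        raise ValueError(f"unknown permission flags: {sorted(unknown)}")
    if letters & (ACROBAT4_FLAGS | ACROBAT5_FLAGS) and not owner_password:
        raise ValueError("restrictions are set but no permissions password given")
    return bool(letters & ACROBAT5_FLAGS) or "5" in letters

print(needs_128bit("pc", owner_password="secret"))   # False
print(needs_128bit("pc5", owner_password="secret"))  # True
```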



AFP2PDF command

As described earlier, the conversion function is called automatically from OnDemand. For other purposes, such as creating the image map file or testing the conversion result, you can use the afp2pdf command. See the AFP2PDF transform documentation for details about the afp2pdf command parameters.

9.2.3 AFP2XML: converting AFP to XML

AFP2XML is an IBM Services Offering that generates XML documents from AFP files. It contains two modules:

- buildICT
  This is a Windows-based GUI used to define XML attribute and value data. Within the interface, the user opens a representative AFP document, selects items on the display, and defines controls on those items. This module is available on Windows.
- afp2xml
  This is a Java executable that generates the XML file from the input AFP file and the control file defined using buildICT. There is also a user exit available that allows more sophisticated AFP processing. This module is available on Windows, AIX, Sun Solaris, and z/OS. The Linux and HP-UX versions will be available before the end of 2006. (Contact your IBM representative for more up-to-date information.)

The AFP2XML GUI displays the document similarly to its print output. It allows the user to select Triggers and Attributes and define parsing rules without requiring any specific AFP knowledge. You can extract the significant data from your AFP file and place it into XML, a more interchangeable format. The AFP document can be retrieved from OnDemand using one of the ODWEK APIs and then converted to XML using the Java or C API. The XML file can then be manipulated for any use of the available information.



Consider an example where a company that archives its invoices in OnDemand wants to develop an electronic bill presentation application so that its customers can check their bills online and have the opportunity to pay online. The company wants to extract some information from the invoices, such as the Amount Due, the Account Number, and the Statement Date, to include it in the new online electronic bill presentation application (see Figure 9-2). The information is extracted from the archived AFP documents and placed in an XML file so that it can be integrated into the electronic bill presentation application.
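As a sketch of the target of such an extraction (the element names and sample values are invented; in practice afp2xml produces the XML according to the ICT control file), the three values could be wrapped in XML like this:

```python
# Illustrative sketch: wrap extracted invoice fields in XML. Element
# names and sample values are invented for the example; the real
# extraction and XML generation are done by afp2xml with an ICT file.
import xml.etree.ElementTree as ET

def invoice_xml(account_number, amount_due, statement_date):
    root = ET.Element("INVOICE")
    ET.SubElement(root, "ACCOUNTNUMBER").text = account_number
    ET.SubElement(root, "AMOUNTDUE").text = amount_due
    ET.SubElement(root, "STATEMENTDATE").text = statement_date
    return ET.tostring(root, encoding="unicode")

doc = invoice_xml("1-0101-0101", "128.63", "02/28/2008")
print(doc)
# <INVOICE><ACCOUNTNUMBER>1-0101-0101</ACCOUNTNUMBER><AMOUNTDUE>128.63</AMOUNTDUE><STATEMENTDATE>02/28/2008</STATEMENTDATE></INVOICE>
```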

Figure 9-2 AFP invoice



An ICT file is defined using the buildICT client. This control file is used with the afp2xml module to automatically extract information from the invoices and place them into an XML file. Figure 9-3 shows how to define an XML ICT file for AFP invoices.

Figure 9-3 Defining an XML ICT file for AFP invoices



The structure of the XML file can be displayed from the buildICT client for control. See Figure 9-4.

Figure 9-4 Invoice XML file

9.2.4 AFPIndexer: indexing composed AFP files

Composed AFP files are sometimes generated without the tags that allow indexing by OnDemand. It is then necessary to process these files to insert the requisite tags according to the indexing needs. AFPIndexer is an IBM Services Offering that generates AFP documents with Page Level Tag Logical Elements (TLEs) and Group Level Tag Logical Elements from AFP files and control files. AFPIndexer contains these modules:

- buildICT
  This Windows-based GUI is used to define Page and Group Level indexing controls. Within the interface, the user opens a representative AFP document, selects items on the display, and defines controls on those items. This module is available on Windows.



- indexAfp
  This executable generates the indexed AFP file from the input AFP file and the control file. It generates Page and Group Level TLEs in the output file and creates an AFP Document Index file and an AFP Resource Group file. This module is available on Windows, AIX, Sun Solaris, and z/OS. Linux and HP-UX versions will be available before the end of 2006. (Contact your IBM representative for up-to-date information.)

AFPIndexer generates OnDemand-compatible output: an indexed AFP file, an AFP Document Index file, and an AFP Resource Group file. OnDemand expects the *.out, *.ind, and *.res files and uses them as input for archiving. The AFPIndexer GUI displays the document in a manner similar to its print output. It allows the user to select Triggers and Indexes and define parsing rules without requiring any specific AFP knowledge. Consider an example where a company generates composed AFP invoice files and wants to archive them. The indexes, in this example, should be the Amount Due, the Account Number, and the Statement Date. See Figure 9-5.

Figure 9-5 AFP invoice



An ICT file is defined using the buildICT client. This control file is used with the indexAfp module to insert the required tags and generate the *.out, *.ind, and *.res files for archiving by OnDemand. Figure 9-6 shows how to define the ICT file for an AFP invoice.

Figure 9-6 Defining an XML ICT file for AFP invoices



The tagged AFP file can be reviewed from the buildICT client to validate the ICT file. See Figure 9-7.

Figure 9-7 AFP document tagged with TLEs for OnDemand indexation

9.3 Xenos and OnDemand

The Xenos d2e platform is a unique component-based solution that transforms legacy data and documents into e-content industry-standard formats. The Xenos d2e platform is officially supported with OnDemand Version 7.1 and allows OnDemand to handle data types that were previously unsupported. Xenos d2e allows for the indexing and loading of Xerox metacode (DJDE, LCDS, and mixed mode), HP PCL, and non-indexable IBM AFP. Additionally, Xenos allows documents to be viewed through ODWEK directly in a Web browser. It does this by dynamically converting OnDemand data to e-content formats such as XML, HTML, and PDF.

The Xenos transforms are batch application programs that let you process these various input data types by converting the data, indexing on predefined parameters, and collecting resources to provide proper viewing. The Xenos load transform produces the index file, the output file, and the resource file, which are then used by the arsload program to load the data into OnDemand. These documents are then available to users to search, retrieve, and view.

The Xenos transforms can be run either when loading the input files into the system or, alternatively, dynamically when the documents are retrieved via the OnDemand Web Enablement Kit. If transforming the data at load time, the Xenos transforms listed in Table 9-1 are available.

Table 9-1 Available Xenos transforms: transforming data at load time

  From        To
  AFP         PDF
  Metacode    AFP
  Metacode    PDF
  Metacode    Metacode
  PCL         PDF

If transforming the data dynamically when it is retrieved via ODWEK, the Xenos transforms listed in Table 9-2 are available.

Table 9-2 Available Xenos transforms: transforming data dynamically via ODWEK

  From        To
  AFP         PDF
  AFP         HTML
  AFP         XML
  Metacode    AFP
  Metacode    HTML
  Metacode    PDF
  Metacode    XML

Figure 9-8 shows, at a high level, how the Xenos transforms fit into the OnDemand environment. It shows the resources and the AFP, metacode, or PCL printstream being fed into the ARSLOAD program. When ARSLOAD runs, it checks which indexer to use. If the application specifies Xenos, then the Xenos transforms are called and run with predefined parameter and script files.



The output files produced by Xenos are handed back to ARSLOAD and the indexes and documents are loaded into the database and storage accordingly. Alternatively, if ODWEK is configured to convert documents at retrieval, then the Xenos transforms are called from ODWEK before presenting the document to the user.

Figure 9-8 How the Xenos transforms fit into the OnDemand environment



9.3.1 Converting AFP to XML through ODWEK

The AFP to XML transform is used when retrieving AFP documents from an OnDemand server through ODWEK. The transform allows data from the input document to be inserted into a predefined template file using the Xenos Template Merger facility. This template file can be a standard XML tagged file and can be used to display information in a different layout than the printed page. The XML file can then be used for online applications such as an online payment system. Multiple templates can be used in a single application to suit the variety of information that is found from page to page in the input document. In this section, we discuss an example of converting an AFP document to an XML document via the Xenos AFP2XML transform and ODWEK. We walk through the steps that make this transform work.

AFP2XML example

For our example, we use an AFP customer billing statement that is stored in OnDemand. We want our customers to be able to retrieve this document in a Web-ready format without having to rely on the AFP Web Viewer. Our Web developers have stated that if they can extract the pertinent pieces of information in a standard XML format, they can use XSL or CSS to format the document. See Figure 9-9 to view the AFP statement as retrieved from the OnDemand PC client. The fields that we want to extract are highlighted with a box. We want to extract the account ID, the amount due, the start and end dates, and the usage. We also want to extract the current and previous bill sections and all the charges that are included in these sections. Because Xenos does the transform at document retrieval time, no additional processing is required on the load side. The document has been defined into an OnDemand application group called xenos-xml, with a data type of AFP. The document has been loaded with a pagedef, formdef, overlay, and several custom fonts. The data has only two indexes associated with it, AccountID and EndDate. These indexes do not have to coincide in any way with the Xenos fields that we are extracting. However, keep in mind that the fields presented to the user in the ODWEK hit list are the fields defined in OnDemand, not in Xenos. To ensure that the document was loaded correctly, we viewed it both through a PC client and through ODWEK before we added the Xenos processing.



Figure 9-9 AFP billing statement stored in OnDemand



As previously mentioned, we want to extract several fields from the AFP document. After we extract these fields, we want to format them into an XML tagged document with the following format, where xxx is the value from the document (see Example 9-16).

Example 9-16 Coding example for extracting and reformatting fields

<XML>
<BILLFILE>
<BILL>
<BILLSUMMARY>
<ACCTID>xxx</ACCTID>
<AMTDUE>xxx</AMTDUE>
<STARTDATE>xxx</STARTDATE>
<ENDDATE>xxx</ENDDATE>
<USAGE>xxx</USAGE>
</BILLSUMMARY>
<CURRENT>
<ITEM>xxx</ITEM>
<AMOUNT>xxx</AMOUNT>
<ITEM>xxx</ITEM>
<AMOUNT>xxx</AMOUNT>
...
</CURRENT>
<PREVIOUS>
<ITEM>xxx</ITEM>
<AMOUNT>xxx</AMOUNT>
<ITEM>xxx</ITEM>
<AMOUNT>xxx</AMOUNT>
...
</PREVIOUS>
</BILL>
</BILLFILE>
</XML>

In the following sections, we explain how to implement this example.



Xenos job parameter file

To configure ODWEK to run the Xenos transforms, two input and configuration files are necessary: the job parameter file (.par) and the document manipulation script (.dms). The Xenos parameter file is a text file that contains required and optional parser and generator parameters. Examples of these parameters are the names of input and output files and the location of all document resources, such as fonts, pagedefs, and formdefs. Also included in this parameter file is a list of the fields to be pulled from the document and where these fields are located. Five types of job-related parameters can be defined in the parameter file. They are organized by type and begin with a section heading. The five sections are Job Script, Parser, Generator, Display List, and Distributor. Each of these sections contains many required and optional fields depending on the data type that is being parsed and generated. Refer to Xenos d2e Platform User Guide, which comes with the Xenos offering by Xenos Group Incorporated, for a full description of this file. Table 9-3 provides a parameter file summary, with a description for each parameter section that is applicable to our AFP2XML conversion.

Table 9-3 Parameter file summary

Job Script (JS:)
This section indicates the names and locations of the Xenos d2e Script Library. The Dmsl.lib library is required, and the conversion fails if this library is not defined. This section also defines the variables to be used in the d2e script.

Parser (AFPPARSER:)
This section gives information about the incoming document and how to process it. It defines where the AFP resources (fonts, pagedefs, formdefs, overlays, and page segments) are located. It also defines the font correlation table.

Generator (TMerge:)
This section controls how the new document is generated. The XML generator uses the Template Merge Facility, which scans the template for variable names and then replaces them with variable values from the document. In our parameter file, the PREFIX and SUFFIX parameters tell the template merger the characters that define the beginning and ending of a variable in the template file.

Display List (DLIST:)
This section controls how special features, such as bookmarking and URL links, and fields are generated. Our display list parameters tell d2e where to locate each field in the input AFP file.


Example 9-17 shows the full contents of our parameter file.

Example 9-17 AFP2XML_rb.par file contents

JS:
FDDMSLIB = 'D:\Program Files\xenosd2e\d2ebin\dmsl.lib'
SCRIPTVAR = ('project_path','D:\d2eDevStudio\AFP2XML\')
SCRIPTVAR = ('project_resource_path','D:\d2eDevStudio\AFP2XML\resources\')

AFPPARSER:
CC = YES
FDAFPFONTS = 'D:\d2eDevStudio\AFP2XML\Resources\%s.fnt'
FDFORMDEFS = 'D:\d2eDevStudio\AFP2XML\Resources\%s.fde'
FDMFCT = 'D:\d2eDevStudio\AFP2XML\AFP2XML.tab'
FDOVERLAYS = 'D:\d2eDevStudio\AFP2XML\Resources\%s.ovr'
FDPAGEDEFS = 'D:\d2eDevStudio\AFP2XML\Resources\%s.pde'
FDPAGESEGS = 'D:\d2eDevStudio\AFP2XML\Resources\%s.psg'
FORMDEF = 'f1mbibi0'
PAGEDEF = 'p1mbibi0'
POSITION = WORD

TMERGE:
PREFIX = '&&'
SUFFIX = '.'

DLIST:
PARMDPI = 100
PAGEFILTER = ALL
FIELDNAME = 'PAGE'
FIELDWORD = %30
FIELDPHRASE = %400
FIELDLOCATE = ('CurrentBill','BILLING INFORMATION')
FIELDLOCATE = ('EndCurrent','CURRENT GAS BILLING')
FIELDLOCATE = ('EndPrevious','PREVIOUS BALANCE')
FIELDNAME = 'ACCTID'
FIELDBOX = (367,78,519,98)
FIELDWORD = %20
FIELDPHRASE = %500
FIELDNAME = 'AMTDUE'
FIELDBOX = (714,530,816,578)
FIELDWORD = %20
FIELDPHRASE = %500
FIELDNAME = 'STARTDATE'
FIELDBOX = (585,81,641,99)
FIELDWORD = %20
FIELDPHRASE = %500
FIELDNAME = 'ENDDATE'
FIELDBOX = (652,81,714,100)
FIELDWORD = %20
FIELDPHRASE = %500
FIELDNAME = 'USAGE'
FIELDBOX = (730,130,769,148)
FIELDWORD = %20
FIELDPHRASE = %500
FIELDNAME = 'CURRBILL'
FIELDBASE = ('',41,'',220,'',800,'EndCurrent',30)
FIELDWORD = %20
FIELDPHRASE = %5000
FIELDTABS = (0,450)
FIELDNAME = 'PREVIOUSBILL'
FIELDBASE = ('EndCurrent',-40,'EndCurrent',40,'',800,'EndCurrent',200)
FIELDWORD = %20
FIELDPHRASE = %5000
FIELDTABS = (0,450)
FIELDNAME = 'Field'
FIELDBOX = (176,119,263,163)
FIELDWORD = %20
FIELDPHRASE = %500
The job parameter file can either be typed manually using any ASCII editor or created graphically using the d2e Developer Studio. In most cases, it is a combination of both. Developer Studio has a graphical AFP parser that can render the input file as a bitmap. This allows for the graphical selection of field locations for data extraction. We used the Developer Studio wizard to create the parameter file and locate the fields, and then updated the file manually for the resource locations and the script variables.
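The FIELDBOX coordinates in the parameter file appear to be dots at the PARMDPI resolution (100 dpi in our file); under that assumption, converting a box measured in inches on the page to DLIST units is a simple scaling:

```python
# Sketch, assuming FIELDBOX coordinates are dots at the PARMDPI
# resolution (PARMDPI = 100 in our parameter file). The inch values
# below are invented to illustrate the conversion.
def fieldbox(left_in, top_in, right_in, bottom_in, parmdpi=100):
    return tuple(round(v * parmdpi) for v in (left_in, top_in, right_in, bottom_in))

print(fieldbox(3.67, 0.78, 5.19, 0.98))  # (367, 78, 519, 98)
```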

Xenos document manipulation script The document manipulation script, or dms, is the second file that ODWEK needs to run the Xenos transforms. The dms is a REXX program that is used to customize and enhance the capabilities of the d2e transform program. The script file is used in conjunction with the job parameter file to determine how a particular job is processed. The parameters defined in the script override any defined in the parameter file. Refer to Xenos d2e Platform Scripting Reference, which comes with the Xenos offering by Xenos Group Incorporated, for a complete understanding of this script.



In our AFP2XML transform, the dms script calls the AFP parser to extract the applicable fields from the document. It then calls the template merge function using a set of predefined template files. The output of this transform is an XML tagged text file with the information converted dynamically from the AFP document that is stored in OnDemand. See Example 9-18 for our complete dms script.

Example 9-18 AFP2XML_rb.dms script file contents

/* Document Breaking Script */
/* d2e Developer Studio v5.1.63 */
/* Project : AFP2XML */

/* Initialize IDC engine */
CALL dm_Initialize

/* variables */
TRUE = 1
FALSE = 0
doc_open = FALSE

/* Start parsers and generators */
AFP_Parser_h = dm_StartParser("AFP")
TMerge1_h = dm_StartGenerator("TMerge")

/* Define input and output to ODWEK */
rc = dm_SetParm(AFP_Parser_h, 'fdinput', inputfile);
rc = dm_SetParm(TMerge1_h, 'fdoutput', outputfile);

/* Open generator documents */
rc = dm_TMergeOpen(TMERGE1_h, outputfile, project_resource_path || "startfile.tpl")

SAY "Finished starting up process"

/* Get page */
dlpage_h = dm_GetDLPage(AFP_Parser_h)
SAY "Parsed page " dlpage_h

DO WHILE(dlpage_h <> 'EOF')

  /* Get field values */
  ACCTID = dm_GetText(dlpage_h,"ACCTID",1)
  AMTDUE = dm_GetText(dlpage_h,"AMTDUE",1)
  STARTDATE = dm_GetText(dlpage_h,"STARTDATE",1)
  ENDDATE = dm_GetText(dlpage_h,"ENDDATE",1)
  USAGE = dm_GetText(dlpage_h,"USAGE",1)

  /* Test for break and begin new XML bill */
  IF ACCTID <> previous_ACCTID THEN DO
    IF doc_open = TRUE THEN DO
      rc = dm_TMergeWrite(TMERGE1_h, project_resource_path || "endbill.tpl")
    END
    previous_ACCTID = ACCTID
    doc_open = TRUE
  END

  /* Write out Bill Summary section */
  IF doc_open = TRUE THEN DO

    rc = dm_TMergeWrite(TMERGE1_h, project_resource_path || "bill.tpl")

    /* This section parses through the current charges section detail lines */
    /* and loads them into the xml template as the detail ITEM and detail NUMBER */
    rc = dm_GetMultiText(dlpage_h,"currbill",curr_cnt)
    DO i = 1 to curr_cnt.0
      PARSE var curr_cnt.i.dm_textdata item '09'x amount
      /* Merge with template */
      rc = dm_TMergeWrite(TMERGE1_h, project_resource_path || "itemizedline.tpl")
    END

    /* Write out end of current section and beginning of previous section */
    rc = dm_TMergeWrite(TMERGE1_h, project_resource_path || "endcur.tpl")

    /* This section parses through the previous charges section detail lines */
    /* and loads them into the xml template as the detail ITEM and detail NUMBER */
    rc = dm_GetMultiText(dlpage_h,"previousbill",prev_cnt)
    DO i = 1 to prev_cnt.0
      PARSE var prev_cnt.i.dm_textdata item '09'x amount
      rc = dm_TMergeWrite(TMERGE1_h, project_resource_path || "itemizedline.tpl")
    END
  END

  /* Get page */
  dlpage_h = dm_GetDLPage(AFP_Parser_h)
  SAY "Parsed page " dlpage_h
END

SAY "Finished processing pages, now closing..."

/* End last bill and file template */
rc = dm_TMergeWrite(TMERGE1_h, project_resource_path || "endfile.tpl")

/* Close and finish */
rc = dm_TMergeClose(TMERGE1_h)
RETURN

The dms script in Example 9-18 refers to several template (.tpl) files. These are the files that are used to create the XML output file. See Example 9-19 for the bill.tpl file that is referenced. Each && begins a variable to be replaced with the actual value from the AFP file before it is inserted into the output XML file.

Example 9-19 Template file bill.tpl

<BILLSUMMARY>
<ACCTID>&&ACCTID.</ACCTID>
<AMTDUE>&&AMTDUE.</AMTDUE>
<STARTDATE>&&STARTDATE.</STARTDATE>
<ENDDATE>&&ENDDATE.</ENDDATE>
<USAGE>&&USAGE.</USAGE>
</BILLSUMMARY>
<CURRENT>

By creating templates with as much variable information as practical, the application runs more efficiently. You could create an XML application where each line in the XML corresponds to a different template, but the transform would be slowed by many more calls to dm_TMergeWrite than are necessary. By making each template write more lines in the output XML file, the time needed for the transform is reduced. With careful template design, d2e platform applications can run considerably faster, by as much as 40%.
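A minimal sketch of what the template merger does with the PREFIX ('&&') and SUFFIX ('.') settings from our parameter file (this is an illustration, not the Xenos implementation):

```python
# Illustrative sketch of template merging: scan a template for variable
# names delimited by PREFIX ('&&') and SUFFIX ('.') and replace them
# with values extracted from the parsed page.
def merge_template(template, values, prefix="&&", suffix="."):
    out, i = [], 0
    while True:
        start = template.find(prefix, i)
        if start == -1:
            out.append(template[i:])
            break
        end = template.find(suffix, start + len(prefix))
        name = template[start + len(prefix):end]
        out.append(template[i:start])
        out.append(values.get(name, ""))
        i = end + len(suffix)
    return "".join(out)

tpl = "<ACCTID>&&ACCTID.</ACCTID><AMTDUE>&&AMTDUE.</AMTDUE>"
print(merge_template(tpl, {"ACCTID": "1-0101", "AMTDUE": "128.63"}))
# <ACCTID>1-0101</ACCTID><AMTDUE>128.63</AMTDUE>
```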

Configuring ODWEK After the transform is set up on the Xenos side, you must configure ODWEK to run the transform. You must make the changes explained in the following sections.



Update the arswww.ini file

The [xenos] section contains two parameters, InstallDir and ConfigFile. The InstallDir parameter specifies where the Xenos d2e code is installed. The ConfigFile parameter specifies the location of the configuration file that is used by the Xenos transforms. Our [xenos] section is as follows:

[xenos]
InstallDir=D:\Program Files\xenosd2e\d2ebin
ConfigFile=C:\IBM HTTP Server\cgi-bin\arsxenos.ini

You must also update the AfpViewing parameter in the [default browser] section. When ODWEK retrieves an AFP document from the OnDemand server, the value of this parameter determines what action, if any, ODWEK takes before sending the document to a Web browser. To convert AFP documents to HTML, PDF, or XML output with the Xenos transform, specify AfpViewing=xenos so that ODWEK calls the Xenos transform to convert the AFP document before sending it to a Web browser. The type of output that is generated is determined by the value of the OutputType parameter in the ARSXENOS.INI file.

Note: This change affects all AFP data on the system; it is not limited to an application group or folder. If you want to override this, you can specify the _afp parameter of the Retrieve Document API.

Our updated AfpViewing parameter is as follows:

[default browser]
AfpViewing=xenos

Configuring the ARSXENOS.INI file

The ARSXENOS.INI file provides configuration options for the Xenos transform. You typically configure the ARSXENOS.INI file with options for specific OnDemand applications. However, you can also provide a set of default options. The Xenos transform uses the default options when it converts documents from applications that are not identified elsewhere in the file. Example 9-20 shows our ARSXENOS.INI file. The first section name specifies the application group and the application, separated by a dash. Our application group and application are both named xenos-xml.

292

Content Manager OnDemand Guide


Example 9-20 The ARSXENOS.INI file

[xenos-xml-xenos-xml]
ParmFile=D:\Xenos\AFP2XML_rb.par
ScriptFile=D:\Xenos\AFP2XML_rb.dms
LicenseFile=D:\Program Files\xenosd2e\licenses\dmlic.txt
OutputType=xml
WarningLevel=4

[default]
ParmFile=D:\Xenos\afp2pdf\sample.par
ScriptFile=D:\Xenos\noindex.dms
LicenseFile=D:\Program Files\xenosd2e\licenses\dmlic.txt
OutputType=pdf
WarningLevel=8
AllObjects=1

The ParmFile parameter identifies the full path name of the file that contains the parameters that are used by the Xenos transform to convert the document. This points to the afp2xml_rb.par file discussed earlier. The ScriptFile parameter identifies the full path name of the file that contains the script statements that are used by the Xenos transform to create the output file. This points to the afp2xml_rb.dms script discussed earlier. The LicenseFile parameter identifies the full path name of a valid license file obtained from Xenos.

The OutputType parameter determines the type of conversion that the Xenos transform performs. If the input document is AFP, you can set this parameter to HTML, PDF, or XML. If the input document is metacode, you can set this parameter to AFP, HTML, PDF, or XML. Since we are converting from AFP to XML, our parameter is XML.

The AllObjects parameter determines how ODWEK processes documents that are stored as large objects in OnDemand. If you specify a value of 0, then ODWEK retrieves only the first segment of a document. If you specify a value of 1, then ODWEK retrieves all of the segments and converts them before it sends the document to the viewer.

Note: If you enable large object support for very large documents, then your users might experience a significant delay before they can view the document.
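The lookup just described, a section named for the application group and application with [default] as the fallback, can be sketched in Python. This is an illustrative reimplementation (the function name xenos_options and the sample INI text are our own), not ODWEK's actual code:

```python
import configparser

def xenos_options(ini_text, app_group, application):
    """Resolve transform options for an application group/application
    pair, falling back to the [default] section (illustrative only)."""
    cfg = configparser.ConfigParser()
    cfg.optionxform = str                    # keep the original key case
    cfg.read_string(ini_text)
    section = f"{app_group}-{application}"   # e.g. [xenos-xml-xenos-xml]
    if not cfg.has_section(section):
        section = "default"                  # fall back to [default]
    return dict(cfg.items(section))

SAMPLE = """\
[xenos-xml-xenos-xml]
ParmFile=D:\\Xenos\\AFP2XML_rb.par
OutputType=xml
WarningLevel=4

[default]
OutputType=pdf
WarningLevel=8
"""

xml_opts = xenos_options(SAMPLE, "xenos-xml", "xenos-xml")
pdf_opts = xenos_options(SAMPLE, "billing", "monthly")
```

Note that values come back as strings; WarningLevel, for example, would still need an int() conversion before comparing return codes.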



The WarningLevel parameter determines how ODWEK handles return codes from the Xenos transform. The Xenos transform sets a return code after each document is converted. Use this parameter to specify the maximum return code that ODWEK considers good; if the return code is at or below this level, ODWEK sends the converted document to the viewer.

Viewing the XML document

After the parameter file, the dms script, and all the configuration files are updated, you can select a document and see the Xenos output. When we log on to ODWEK and open the xenos-xml folder, we see a hit list of all the AFP documents. When we select a document, ODWEK recognizes the document as AFP and checks the ARSWWW.INI file to see how to present the AFP data. The AfpViewing parameter tells ODWEK to execute a Xenos transform, so ODWEK checks the [xenos] section to determine the location of the ARSXENOS.INI file. ODWEK then looks at the ARSXENOS.INI file to determine the parameter and script files to use. The Xenos AFP2XML transform is then invoked and the XML document is sent to the browser. You can see the XML output in Figure 9-10.
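The retrieval-time flow just described can be condensed into a short sketch. Everything here is hypothetical (present_afp and the dictionary layout are our own names); it only restates the decision chain, not the real ODWEK implementation:

```python
def present_afp(afp_viewing, arswww_ini):
    """Sketch of the decision ODWEK makes for a retrieved AFP document,
    per the description above (hypothetical helper, not the real code)."""
    mode = (afp_viewing or "plugin").lower()
    if mode == "xenos":
        # the [xenos] section locates the ARSXENOS.INI configuration file
        return ("transform", arswww_ini["xenos"]["ConfigFile"])
    # any other value: send native AFP on to the AFP Web Viewer plug-in
    return ("plugin", None)

ini = {
    "default browser": {"AfpViewing": "xenos"},
    "xenos": {
        "InstallDir": r"D:\Program Files\xenosd2e\d2ebin",
        "ConfigFile": r"C:\IBM HTTP Server\cgi-bin\arsxenos.ini",
    },
}
action, config = present_afp(ini["default browser"]["AfpViewing"], ini)
```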



Figure 9-10 AFP document presented in the XML format sample



Next, we change the AfpViewing parameter in the ARSWWW.INI file back to plug-in. With this change, the AFP document is presented to the browser in its native format and displayed with the AFP Web Viewer (see Figure 9-11).

Figure 9-11 AFP document displayed in its native format with AFP Web Viewer

This XML document is presented with standard tags and can be displayed using a variety of XML display methods. We created a simple cascading style sheet (CSS) definition file, and with a few minor changes to the template, this XML file can now be presented to a browser in a Web-ready format. The CSS definition file takes each parameter from the XML tagged file and presents it to a browser. This file is named disp_bill.css.

To call this CSS file and tell the browser to associate it with the XML file, we made a change to the template file that is called from our dms script. To tell the browser to use the disp_bill.css file to present the XML file, we added two lines to the startfile.tpl template file as follows:

<?xml version='1.0'?>
<?xml-stylesheet type="text/css" href="D:\Xenos\disp_bill.css"?>
<BILLFILE>
<BILL>
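The effect of that template change can be illustrated with a small helper that prepends the same two lines to an XML document. The function attach_stylesheet is hypothetical, for illustration only:

```python
def attach_stylesheet(xml_text, css_href):
    """Prepend the XML declaration and the CSS stylesheet processing
    instruction, mirroring the two lines added to startfile.tpl above."""
    prolog = ("<?xml version='1.0'?>\n"
              '<?xml-stylesheet type="text/css" href="%s"?>\n' % css_href)
    # drop an existing declaration so it is not duplicated
    if xml_text.lstrip().startswith("<?xml "):
        xml_text = xml_text.split("?>", 1)[1].lstrip()
    return prolog + xml_text

doc = attach_stylesheet("<BILLFILE><BILL></BILL></BILLFILE>",
                        r"D:\Xenos\disp_bill.css")
```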

Now when we click the document in the hit list, the browser sees the first two lines in the XML file that tell the browser to present the document with the style sheet. The document now appears as shown in Figure 9-12.

Figure 9-12 Sample document output using XML and CSS



Example 9-21 is the disp_bill.css file that we are using to display the document.

Example 9-21 The disp_bill.css CSS file

ACCTID {display: block; background-image: url(acctid.gif);
  background-repeat: no-repeat; margin-left: 100px; margin-right: 600px;
  background-color: #CCCCCC; font-size: 16pt; font-weight: bold;
  border: thin ridge; padding-left: 120px }
AMTDUE {display: block; background-image: url(amtdue.gif);
  background-repeat: no-repeat; margin-left: 100px; margin-right: 600px;
  background-color: #CCCCCC; font-size: 16pt; font-weight: bold;
  border: thin ridge; padding-left: 200px }
STARTDATE {display: block; background-image: url(strtdate.gif);
  background-repeat: no-repeat; margin-left: 100px; margin-right: 600px;
  background-color: #CCCCCC; font-size: 16pt; font-weight: bold;
  border: thin ridge; padding-left: 220px }
ENDDATE {display: block; background-image: url(enddate.gif);
  background-repeat: no-repeat; margin-left: 100px; margin-right: 600px;
  background-color: #CCCCCC; font-size: 16pt; font-weight: bold;
  border: thin ridge; padding-left: 220px }
USAGE {display: block; background-image: url(usage.gif);
  background-repeat: no-repeat; margin-left: 100px; margin-right: 600px;
  background-color: #CCCCCC; font-size: 16pt; font-weight: bold;
  border: thin ridge; padding-left: 245px }
CURRENT { display: block; margin-top: 80px; border-top: 2px groove blue;
  border-bottom: 2px groove blue; margin-bottom: 30px;
  background-image: url(paybill.gif); background-repeat: no-repeat;
  background-position: 50% 100% }
CURRENT ITEM {display: block; font-weight: bold}
CURRENT AMOUNT { display: inline}
PREVIOUS ITEM, PREVIOUS AMOUNT { display: inline; font-weight: bold;
  padding-right: 20px; padding-left: 30px }

Troubleshooting ODWEK and Xenos

By enabling debugging mode in ODWEK, you can view any error or success messages that come from ODWEK or Xenos. To turn on logging, make the following changes to the ARSWWW.INI file:

[DEBUG]
log=1
logdir=D:\odwek\logging



In this example, log=1 turns debugging on (log=0 turns it off) and logdir is the directory where the log file is created. The log file is named arswww.log.
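Toggling this [DEBUG] section can also be scripted. The helper below is our own illustrative sketch (set_odwek_debug is not an IBM-supplied tool); it assumes an arswww.ini that Python's configparser can read:

```python
import configparser
import io

def set_odwek_debug(ini_text, enabled, logdir=r"D:\odwek\logging"):
    """Turn ODWEK debug logging on (log=1) or off (log=0) by rewriting
    the [DEBUG] section. Illustrative helper, not part of ODWEK."""
    cfg = configparser.ConfigParser()
    cfg.optionxform = str              # preserve key case
    cfg.read_string(ini_text)
    if not cfg.has_section("DEBUG"):
        cfg.add_section("DEBUG")
    cfg.set("DEBUG", "log", "1" if enabled else "0")
    cfg.set("DEBUG", "logdir", logdir)
    buf = io.StringIO()
    cfg.write(buf)                     # writes "key = value" lines
    return buf.getvalue()

updated = set_odwek_debug("[default browser]\nAfpViewing=xenos\n", True)
```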

9.3.2 Using the AFP2PDF transform with ARSLOAD

In the previous example, we discussed a transform that occurred on the retrieval side. Xenos can also perform the transform during ARSLOAD, before the print file gets loaded into OnDemand. In addition to transforming the data, Xenos can provide indexing and resource collection capabilities, as well as segmentation of the print file into logical documents or statements.

To enable the Xenos transforms on the load side, you must specify Xenos as the indexer in the OnDemand application. When ARSLOAD sees an indexer of Xenos, it calls the Xenos transform with the parameter and script files that are specified in the application. Xenos creates three files to be sent back to ARSLOAD: the index file, the output file, and a resource file. ARSLOAD uses these files to update the database and object server.

When using Xenos to parse the input printstream file, it is possible to allow the transform to pull indexes from the file and to split the file into logical documents. When doing this, it is important to define the same database fields in both Xenos and OnDemand. Xenos provides an .ind file that contains each field and its value. If OnDemand receives too many indexes or not enough indexes, or if the indexes are a different data type, the load fails.

This example discusses the conversion of an AFP document to the Adobe PDF format before loading it into OnDemand. The PDF output can be viewed through the OnDemand client and the ODWEK client with the Adobe Acrobat software. We first used Xenos Developer Studio to select field locations from the document and define document splitting rules. We also created a script file (Example 9-22 on page 302) and parameter file (Example 9-23 on page 306). The parameter file defines the parser and generator that we use. It also contains the locations of the AFP resources to be used to convert the file, and the rules for creating the PDF file.

Finally, the script defines the data to be used as the index for the document. In this example, we define only one index field, Name. We created an application and application group called xenos-pdf. We are loading AFP data, but because Xenos converts it to PDF before it is loaded, we define the View Information data type to be PDF. In the Indexer Information page, the indexer is defined as Xenos; this directs OnDemand to call the Xenos transforms. When you use Xenos as the indexer, there are only four parameters in the indexing details: Xenos parameter file, Xenos script file, Xenos license file, and warning level. See our application as set up in Figure 9-13.
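Because a mismatch between the fields in the Xenos-produced .ind file and the fields defined in OnDemand causes the load to fail, a quick pre-load sanity check can help. The following sketch (check_index_fields is a hypothetical helper) compares the GROUP_FIELD_NAME entries with an expected field list:

```python
def check_index_fields(ind_text, expected_fields):
    """Compare the GROUP_FIELD_NAME entries of a Generic Indexer file
    with the database fields defined in OnDemand (illustrative check)."""
    found = {line.split(":", 1)[1].strip()
             for line in ind_text.splitlines()
             if line.startswith("GROUP_FIELD_NAME:")}
    expected = set(expected_fields)
    return {"missing": expected - found, "extra": found - expected}

ind = """CODEPAGE:819
GROUP_FIELD_NAME:Name
GROUP_FIELD_VALUE:Acme Art Inc.
GROUP_OFFSET:0
GROUP_LENGTH:4096
GROUP_FILENAME:afp2pdf_sample.out
"""
result = check_index_fields(ind, ["Name"])
```

An empty "missing" and "extra" set suggests the field names line up; data types would still need to be verified against the application group definition.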



Figure 9-13 Application indexing parameters for Xenos

When defining the Xenos parameter and script files in the application, there are two methods: the detail method and the filename method. In our example (Figure 9-13), we used the filename method, where we point to the full path of the files. The filename method makes it easy to reuse the same files across multiple applications and is a better way to separate OnDemand and Xenos administration.

Alternatively, you can paste the entire script and parameter file directly into the indexing parameters screen. The detail method allows the OnDemand administrator to view the Xenos parameters without needing access to any other system. It is also a way to manage multiple versions of scripts and applications: there is never any question about which application is using which script files. If the printstream data changes, a new application can be created with a new script and parameter file included. When using the detail method, the parameter file details must be included between an OnDemandXenosParmBegin tag and an OnDemandXenosParmEnd tag. The script details must be included between the OnDemandXenosScriptBegin tag and the OnDemandXenosScriptEnd tag. Either of these methods works.

When calling the transform from ARSLOAD, be sure not to have any input or output file names hardcoded in the script or parameter file. If you have an input file listed in the fdinput or inputfile parameters, the Xenos transform runs with a return code of 0, but it does not process the data that ARSLOAD is sending. If you have the output files defined in the fdoutput, outputfile, indexfile, or resourcefile parameters, Xenos also runs fine, but ARSLOAD shows the message "Output/Indexer file was not created Indexing Failed".

Any error messages that come from the Xenos transforms are written to the system log and can be viewed in message number 87, failed load. All success details that come from the Xenos transform can be viewed in message number 88, successful load.

Example 9-22 shows a sample AFP2PDF script file.

Example 9-22 The AFP2PDF_sample.dms script

TRUE = 1;
FALSE = 0;

call dm_Initialize
par_h = dm_StartParser(Parser);
gen_h = dm_StartGenerator(Generator);
rc = dm_SetParm(par_h, 'fdinput', inputfile);

/* start the DASD Distributors */
dasd_h = dm_StartDistributor("DASD");
index_h = dm_StartDistributor("DASD");

/* open output and index files */
rc = dm_DASDOpen(dasd_h, '{GROUPFILENAME}'outputfile);
rc = dm_DASDOpen(index_h, indexfile);

/* initialize */
do i = 1 to NumberOfFields
  fieldvaluesave.i = ""
  if Break.i \= "no" & Break.i \= "NO" then do
    Break.i = "yes"
  end
end
file_open = FALSE
save_BytesWritten = 0
crlf = '0d0a'X

/* write preamble to the index file */
rc = dm_DASDWrite(index_h, "COMMENT: OnDemand Generic Index File Format");
rc = dm_DASDWrite(index_h, "COMMENT: This file has been generated by the xenos process");
dt = DATE('N');
ts = TIME('N');
rc = dm_DASDWrite(index_h, "COMMENT:" dt ts );
rc = dm_DASDWrite(index_h, "COMMENT:" );
rc = dm_DASDWrite(index_h, "CODEPAGE:819"||crlf );

dlpage = dm_GetDLPage(par_h);
do while(dlpage \= 'EOF')
  if file_open = FALSE then do
    select
      when generator = 'PDF' then rc = dm_PDFGenOpen(gen_h, '{GROUPFILEENTRY}'outputfile);
      when generator = 'AFP' then rc = dm_AFPGenOpen(gen_h, '{GROUPFILEENTRY}'outputfile);
      when generator = 'META' then rc = dm_METAGenOpen(gen_h, '{GROUPFILEENTRY}'outputfile);
      otherwise do
        say 'Invalid generator'
        return 12
      end
    end
    if rc = 0 then do
      file_open = TRUE
    end
  end

  do i = 1 to NumberOfFields
    fieldvalue.i = dm_GetText( dlpage, field.i, First )
  end

  docbreak = 0
  do i = 1 to NumberOfFields
    if fieldvalue.i \= "" then do
      /* if there is no previous value, save the current value */
      if fieldvaluesave.i = "" then do
        fieldvaluesave.i = fieldvalue.i
      end
      else
        /* if there is a previous value, see if the new value is different */
        if fieldvaluesave.i \= fieldvalue.i then do
          if Break.i = "yes" then docbreak = 1
        end
    end
  end

  if docbreak = 1 then do
    select
      when generator = 'PDF' then rc = dm_PDFGenClose( gen_h )
      when generator = 'AFP' then rc = dm_AFPGenClose( gen_h )
      when generator = 'META' then rc = dm_METAGenClose( gen_h )
    end
    file_open = FALSE

    /* write out index values to the index file */
    do i = 1 to NumberOfFields
      field_name = "GROUP_FIELD_NAME:"||field.i
      rc = dm_DASDWrite( index_h, field_name )
      field_value = "GROUP_FIELD_VALUE:"||fieldvaluesave.i
      rc = dm_DASDWrite( index_h, field_value )
    end

    /* replace index values with the new values */
    do i = 1 to NumberOfFields
      if fieldvalue.i \= "" then do
        fieldvaluesave.i = fieldvalue.i
      end
    end

    rc = dm_DASDSize(dasd_h)
    BytesWritten = dm_size
    length = BytesWritten - save_BytesWritten
    offset = BytesWritten - length
    save_BytesWritten = BytesWritten
    group_offset = "GROUP_OFFSET:"||offset
    rc = dm_DASDWrite( index_h, group_offset )
    group_length = "GROUP_LENGTH:"||length
    rc = dm_DASDWrite( index_h, group_length )
    group_filename = "GROUP_FILENAME:"||outputfile
    rc = dm_DASDWrite( index_h, group_filename||crlf )

    select
      when generator = 'PDF' then rc = dm_PDFGenOpen(gen_h, '{GROUPFILEENTRY}'outputfile);
      when generator = 'AFP' then rc = dm_AFPGenOpen(gen_h, '{GROUPFILEENTRY}'outputfile);
      when generator = 'META' then rc = dm_METAGenOpen(gen_h, '{GROUPFILEENTRY}'outputfile);
    end
    if rc = 0 then do
      file_open = TRUE
    end
  end /* end docbreak = 1 */

  select
    when generator = 'PDF' then rc = dm_PDFGenWrite(gen_h, dlpage );
    when generator = 'AFP' then rc = dm_AFPGenWrite(gen_h, dlpage );
    when generator = 'META' then rc = dm_METAGenWrite(gen_h, dlpage );
  end
  dlpage = dm_GetDLPage(par_h);
end

if file_open = TRUE then do
  select
    when generator = 'PDF' then rc = dm_PDFGenClose( gen_h )
    when generator = 'AFP' then rc = dm_AFPGenClose( gen_h )
    when generator = 'META' then rc = dm_METAGenClose( gen_h )
  end
end

/* write out final index values to the index file */
do i = 1 to NumberOfFields
  field_name = "GROUP_FIELD_NAME:"||field.i
  rc = dm_DASDWrite( index_h, field_name )
  field_value = "GROUP_FIELD_VALUE:"||fieldvaluesave.i
  rc = dm_DASDWrite( index_h, field_value )
end
rc = dm_DASDSize(dasd_h)
BytesWritten = dm_size
length = BytesWritten - save_BytesWritten
offset = BytesWritten - length
save_BytesWritten = BytesWritten
group_offset = "GROUP_OFFSET:"||offset
rc = dm_DASDWrite( index_h, group_offset )
group_length = "GROUP_LENGTH:"||length
rc = dm_DASDWrite( index_h, group_length )
group_filename = "GROUP_FILENAME:"||outputfile
rc = dm_DASDWrite( index_h, group_filename )

rc = dm_DASDClose( dasd_h )
rc = dm_DASDClose( index_h )
return;



Example 9-23 shows the corresponding parameter file that is used for the Xenos transform.

Example 9-23 The AFP2PDF_sample.par parameter file

/* Xenos Job Parameter file */
JS:
/* DM Script Library - XG supplied functions */
fddmslib = 'd:\program files\xenosd2e\d2ebin\dmsl.lib'
scriptvar = ('Parser', 'AFP')
scriptvar = ('Generator', 'PDF')
scriptvar = ('NumberOfFields', 1)
scriptvar = ('Field.1', 'Name')

AFPDL-AFPP:
/* AFP Parser Options */
formdef = f1a10111
pagedef = p1a06462
CC = on
trc = off
startpage = 0
stoppage = 0
native = no
position = word
/* File Defs */
FDpagesegs = 'D:\xenos\afp2pdf\Resources\%s.psg'
FDafpfonts = 'D:\xenos\afp2pdf\Resources\%s.fnt'
FDpagedefs = 'D:\xenos\afp2pdf\Resources\%s.pde'
FDformdefs = 'D:\xenos\afp2pdf\Resources\%s.fde'
FDoverlays = 'D:\xenos\afp2pdf\Resources\%s.ovr'
FDfontcor = 'D:\xenos\afp2pdf\Resources\master.tab'
FDResGrpOut = 'D:\xenos\afp2pdf\Output\sample.res'
ResGrpOption = (FormDefs, PageSegs, Overlays)

PDFGEN-PDFOUT:
/* PDF Out Generator Options */
/* output file name being set in the script */
offset = (0,0)
scaleby = 100
border = NONE
Compress = (NONE,NONE,NONE)
orient = AUTO
PDFAuthor = 'Xenos Group'
PDFOpenAct = '/FitH 800'
BMOrder = (AsIs,AsIs,AsIs)

AFPDL-DLIST:
parmdpi = 300
pagefilter = all
resfilter = all
FieldName = (PAGE)
FieldWord = (20, and, %20)
FieldPhrase = (%100)
FieldPara = (%500)

/* extract name */
FieldLocate = ('InsName', 'Insured')
FieldName = ('Name')
FieldBase = ('InsName', +275, '=', -35, '=', +800, '=', +30)
FieldWord = (20, and, %20)
FieldPhrase = (%500, OneSpace)
FieldPara = (%500)

9.3.3 Job Supervisor program

The Job Supervisor program is the main Xenos transform program. There are three methods for calling it.

The first method allows the ARSLOAD program to call Job Supervisor to transform a complete data file at load time, as discussed in the previous section. This method uses the input file from the arsload command and sends the output back to arsload when the transform has completed, at which time OnDemand loads the data.

The second method allows the Web Enablement Kit to call the Job Supervisor program when a document is requested from ODWEK. This calls Job Supervisor with one specific document, allows the document to be transformed, and then sends the transformed data back to the browser. This method is discussed in 9.3.1, "Converting AFP to XML through ODWEK" on page 283.

The third method of calling the Job Supervisor program is from a command line. This can be a useful troubleshooting technique, because it runs Xenos without any connection to OnDemand and allows you to pinpoint any problems. The Job Supervisor program can also be used to print the locations of text strings found on the pages of an input file. These text locations are necessary for defining indexes and locating extraction data. Developer Studio provides a graphical method of defining data locations and is a much simpler technique.

Developer Studio: Developer Studio allows you to define fields from the document to be used for indexes. Two types of fields are of interest: the absolute field and the relative field. The absolute field is defined by the x and y coordinates of a box around the text of interest. The relative field is useful for extracting data that has different positions on a page, but can be found in relation to another piece of text on the page. For more information about using Developer Studio, refer to Xenos d2e Developer Studio User Guide, which comes with the Xenos offering by Xenos Group Incorporated.

Figure 9-14 shows the syntax when running Job Supervisor from a command line to convert data.

Figure 9-14 Job Supervisor program syntax

Here, -parms is a file that contains the names of the Job Supervisor parameter (.par) file and the Job Supervisor script (.dms) file. All the parameters are required, but you may place the inputfile, outputfile, and resourcefile parameters in the .par or the .dms file instead of on the command line.

We ran the Job Supervisor program from the command line with our parameter file and script file from the previous AFP2PDF example to ensure that the index file was being created correctly. We did this before setting up the PDF application group to ensure that Xenos was working correctly before wrapping ARSLOAD around it. ARSLOAD runs the same command with the same syntax. We ran the command as shown in Example 9-24.

Example 9-24 Running the Job Supervisor program from a command line

js -parms="D:\Xenos\afp2pdf\parms_afp2pdf"
   -report="D:\Xenos\afp2pdf\output\sample.rep"
   -scriptvar=inputfile="D:\Xenos\afp2pdf\input\afp2pdf_sample.afp"
   -scriptvar=indexfile="D:\Xenos\afp2pdf\output\afp2pdf_sample.ind"
   -scriptvar=outputfile="D:\Xenos\afp2pdf\output\afp2pdf_sample.out"
   -scriptvar=resourcefile="D:\Xenos\afp2pdf\output\afp2pdf_sample.res"
   -licensefile="D:\Program Files\xenosd2e\licenses\dmlic.txt"



Here, our parms_afp2pdf file (Example 9-25) contained the .par and .dms file names.

Example 9-25 The parms_afp2pdf file

fdjobparm='D:\xenos\afp2pdf\afp2pdf_sample.par'
fddmsscript='D:\xenos\afp2pdf\afp2pdf_sample.dms'
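When testing like this repeatedly, it can help to build the Example 9-24 command line programmatically. The sketch below only assembles the argument list (flag names are taken from the example above; build_js_command is our own helper), leaving execution to subprocess:

```python
def build_js_command(parms, report, licensefile, **scriptvars):
    """Assemble the Job Supervisor command line from Example 9-24
    (argument-list sketch; flag names taken from the example above)."""
    cmd = ["js", f"-parms={parms}", f"-report={report}"]
    for name, path in sorted(scriptvars.items()):
        cmd.append(f"-scriptvar={name}={path}")
    cmd.append(f"-licensefile={licensefile}")
    return cmd

cmd = build_js_command(
    r"D:\Xenos\afp2pdf\parms_afp2pdf",
    r"D:\Xenos\afp2pdf\output\sample.rep",
    r"D:\Program Files\xenosd2e\licenses\dmlic.txt",
    inputfile=r"D:\Xenos\afp2pdf\input\afp2pdf_sample.afp",
    indexfile=r"D:\Xenos\afp2pdf\output\afp2pdf_sample.ind",
)
# the list could then be run with subprocess.run(cmd)
```

Passing the arguments as a list avoids shell quoting problems with paths that contain spaces, such as the license file path.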

This transform creates an .ind file in the format of the Generic Indexer parameter file. This file can then be loaded into OnDemand with the ARSLOAD command.
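The Generic Indexer format written by the script (GROUP_FIELD_NAME and GROUP_FIELD_VALUE pairs followed by GROUP_OFFSET, GROUP_LENGTH, and GROUP_FILENAME) can be parsed back into one record per document, which is handy when verifying an .ind file before loading. This parser is an illustrative sketch, not part of OnDemand:

```python
def parse_generic_index(ind_text):
    """Parse a Generic Indexer file into one record per document group,
    based on the GROUP_* tags described above (illustrative sketch)."""
    docs, current = [], {}
    for line in ind_text.splitlines():
        if ":" not in line or line.startswith("COMMENT"):
            continue
        tag, value = line.split(":", 1)
        if tag == "GROUP_FIELD_NAME":
            current.setdefault("fields", []).append([value, None])
        elif tag == "GROUP_FIELD_VALUE":
            current["fields"][-1][1] = value
        elif tag in ("GROUP_OFFSET", "GROUP_LENGTH"):
            current[tag] = int(value)
        elif tag == "GROUP_FILENAME":
            current["GROUP_FILENAME"] = value.strip()
            docs.append(current)       # the filename line ends a group
            current = {}
    return docs

ind = """COMMENT: OnDemand Generic Index File Format
CODEPAGE:819
GROUP_FIELD_NAME:Name
GROUP_FIELD_VALUE:Smith
GROUP_OFFSET:0
GROUP_LENGTH:4096
GROUP_FILENAME:afp2pdf_sample.out
GROUP_FIELD_NAME:Name
GROUP_FIELD_VALUE:Jones
GROUP_OFFSET:4096
GROUP_LENGTH:2048
GROUP_FILENAME:afp2pdf_sample.out
"""
docs = parse_generic_index(ind)
```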



Chapter 10. Report Distribution

In this chapter, we provide information about Report Distribution, an optionally priced feature of OnDemand for Multiplatforms. Report Distribution provides an easy way to automatically group reports and portions of related reports together, organize them, convert the report data into different formats, and send them through e-mail to multiple users or make them available for printing.

In this chapter, we cover the following topics:
- Introduction to Report Distribution
- Defining the distribution
- Hints and tips

© Copyright IBM Corp. 2003, 2006. All rights reserved.


10.1 Introduction to Report Distribution

Report Distribution provides many of the same functions as other parts of the OnDemand system, such as querying the database for documents, retrieving the documents from various types of storage media, and providing the ability to print them on a server printer. If these functions are already available in OnDemand, why would you want to use Report Distribution? The answer is simple: automation. Report Distribution automates the process of querying and retrieving documents as well as sending the documents to a printer or to one or more users via e-mail. Not only is the process automated, you can also specify when the documents will be delivered.

Another benefit of using Report Distribution is that you can select and combine documents from different reports and organize them by defining their order and separating them with banner pages.

Normally, when you think of an archival/retrieval system, you think of the large number of documents that are stored, while only a small number of documents are retrieved. What benefit do automation and scheduling provide? The biggest benefit is that as reports are loaded into OnDemand on a regular basis, they can be delivered automatically to one or more users soon after they are loaded. Also, after the distribution is set up, no other changes are required, such as changing the document selection criteria to identify the latest data that is loaded.

For example, a company creates monthly sales reports and archives them in OnDemand. The reports are needed by sales managers to analyze the results and to plan for future sales and inventory. By using Report Distribution, the delivery of the monthly sales report can be automated so that the sales managers receive the report via e-mail once a month, as soon as the report is available in OnDemand.

Other examples include auditing that is performed on a periodic basis and workflow items such as processing overdue accounts for credit cards, utility bills, or doctors' bills. The applications for Report Distribution are endless, but the basis for using it is the same: documents are loaded on a regular basis and are needed by one or more users as they become available in OnDemand.

Let us look at a specific example. Acme Art Inc. is a company that sells artwork and art supplies. Each month, a sales report is created for each region and is archived in OnDemand. After the reports are archived, the regional sales managers need a copy of the reports for planning purposes and for restocking inventory.



The Windows client in Figure 10-1 shows a list of monthly sales reports that have been archived in OnDemand. There are two regional sales reports (Midwest and Northwest) for the months of July, August, September, and October.

Figure 10-1 List of monthly sales reports for Acme Art Inc.

In this example, even though there are separate regional sales reports per month, they are loaded at the same time, so there is only one load per month. This information is important when you are determining the best way to set up the distribution. Before a distribution is set up, you should ask yourself the following four W questions:

- What documents are needed?
- Who will receive the documents?
- When will the documents be retrieved and delivered?
- Where will they be delivered?

Chapter 10. Report Distribution

313


10.1.1 What documents are needed?

For the example, the documents that are needed are the regional sales reports. How do you identify the regional sales reports that you need from the hundreds of thousands of documents stored in OnDemand? In general, you identify the documents by creating a database query using index fields and values that uniquely identify the documents you want to retrieve. For Report Distribution, another method can be used to identify the documents that you want to retrieve: instead of querying the database, you can simply retrieve all or some of the documents as they are being loaded.

To illustrate this, let us look at the example again. Once a month, regional sales reports are loaded into OnDemand. Since the load contains all the documents that are needed, we can identify and retrieve all of the documents from the load. Later, when we explain how to set up a distribution using the administrative client, we go into more detail about how to define a set of documents that will be delivered.

10.1.2 Who will receive the documents?

The regional sales managers need a copy of the regional sales reports every month. To identify the sales managers to OnDemand, an OnDemand user must be created for each sales manager. Depending on how the documents will be delivered, an e-mail address or a server printer must be specified in the user definition.

10.1.3 When will the documents be retrieved and delivered?

Each month, the regional sales managers want to get the regional sales reports as soon as they are available in OnDemand. In Report Distribution, documents can be scheduled for delivery on a daily, weekly, or monthly basis, as well as once or a short period of time after they are loaded into OnDemand. For this example, delivery is scheduled based on the availability of the documents after they are loaded.

You might ask why this type of schedule is used rather than a monthly schedule, since the reports are loaded on a monthly basis. When a load-based schedule is used, the extraction and delivery of the documents is triggered when the data is loaded: Report Distribution periodically looks to see whether data has been loaded and, if it has, extracts and delivers the documents. When a monthly schedule is used, the extraction and delivery process are performed on a specified day of the month. If, for some reason, the data is not loaded by the specified day, the delivery fails.



The other reason for using a load-based schedule is that the delivery of the documents might be more timely since they are delivered a short time after they are loaded. Depending on when data is loaded and the day of the month that is used for the monthly schedule, there can be several days between the time that the documents are loaded and the time that the documents are delivered rather than a matter of hours or minutes.

10.1.4 Where will they be delivered?

The regional sales reports can be delivered to the regional sales managers either via e-mail or to a server printer. In this example, the reports are delivered via e-mail. By using this delivery method, the managers can either view the reports or print them at a later time.

10.2 Defining the distribution

You define report distribution from the Report Distribution component of the administrative client. Figure 10-2 shows an example of an OnDemand server with the Report Distribution component.

Figure 10-2 OnDemand Administrator Client with Report Distribution



The Report Distribution component contains five subsections:

- Reports: A report provides a way to identify which documents will be extracted from OnDemand. You can identify the documents by creating a database query using index fields and values that uniquely identify the set of documents that you want to retrieve. Alternatively, instead of querying the database, you can retrieve some or all of the documents shortly after they are loaded into OnDemand.

- Banners: A banner is an informational page that is created in the same format as the report data, for example, line, Advanced Function Presentation (AFP), and Portable Document Format (PDF). The content of the banner page is customizable and can include information such as the report name, the report description, and the name of the user that is receiving the distribution. Three different types of banner pages can be used: a header banner, a separator banner, and a trailer banner. The header banner is the first page in the distribution. The trailer banner follows the last page of the last report in the distribution. The separator banner precedes the first page of every report in the distribution.

- Bundles: A bundle is used to specify which reports will be delivered, the order in which the reports will be included in the bundle, the format of the report data, and, if banner pages are used, which banner pages will be used. See Figure 10-3 for an example of the contents of a bundle.

- Schedules: A schedule is used to initiate the extraction and delivery of the reports. Reports can be scheduled for delivery on a daily, weekly, or monthly basis, as well as one time or shortly after they are loaded into OnDemand.

- Distributions: A distribution is used to identify the bundle of reports that will be delivered, who will receive the reports, when the reports will be delivered, and where they will be delivered.
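One way to visualize how these five objects fit together is a small data model. The classes below are purely illustrative (they are not OnDemand's schema); they simply encode the relationships described above:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Report:
    name: str
    report_type: str          # "Load", "SQL", or "Named Query"

@dataclass
class Bundle:
    name: str
    reports: List[Report]     # ordered: delivery order within the bundle
    header_banner: str = ""
    separator_banner: str = ""
    trailer_banner: str = ""

@dataclass
class Distribution:
    bundle: Bundle            # what is delivered
    recipients: List[str]     # who receives it (OnDemand user IDs)
    schedule: str             # when: "daily", "weekly", "monthly", "load-based"

# the Acme Art example: two regional reports, one load-based distribution
midwest = Report("Midwest sales", "Load")
northwest = Report("Northwest sales", "Load")
monthly = Bundle("Monthly sales", [midwest, northwest], header_banner="hdr")
dist = Distribution(monthly, ["mgr_midwest", "mgr_northwest"], "load-based")
```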



Figure 10-3 Bundle contents

To set up a distribution, you need the following items, at a minimum:

- A user
- A report
- A bundle
- A schedule
- A distribution

This assumes that you have already loaded data into OnDemand, so an application, application group, and folder have already been defined. For the example we mentioned earlier, we will include banner pages and multiple reports; the reports will be sent to multiple users. The logical order for defining the distribution objects follows the order of the four W questions (who, what, when, and where). Based on this, the order of defining the distribution objects is:

1. Defining users/groups
2. Defining reports
3. Defining banners
4. Defining bundles
5. Defining schedules
6. Defining distributions

Some of these objects might already be defined, so they might also be available for use. For example, users and groups are already used in OnDemand, so you may not have to define these objects. Also, if this is not the first distribution that you are defining, you can use existing banners, bundles, schedules, and reports if appropriate.

For the example, we presume that the users have already been defined and that all sales managers who will receive the monthly regional sales reports already have e-mail addresses specified for their user IDs, since they will receive the reports via e-mail. If you do not have to create new user IDs, make sure that the e-mail address or the server printer is specified for each user. Figure 10-4 shows an example of a user that has an e-mail address and a server printer specified.

Figure 10-4 User with an e-mail address and server printer specified


10.2.1 Adding a report

The next step is to define the reports to OnDemand. There are three report types: Load, SQL, and Named Query. The report type determines the method used to identify which documents will be retrieved:

- Load: Some or all of the documents are retrieved shortly after they are loaded into OnDemand. When documents are loaded into OnDemand, the load identifier is stored in a database table, and a list of the documents loaded during the load is created and saved along with the data. The load identifier and the document list are used to retrieve all of the documents for the load. If several loads are processed, there are multiple load identifiers and multiple lists of documents to process for the report. If you do not want to retrieve all of the documents that were loaded, an SQL query can be used to limit the number of documents to retrieve within the loads that are being processed. When distributions are set up to be load-driven (that is, the reports use the Load report type and the distribution is scheduled using a load-based schedule), Report Distribution periodically checks the load identifier database table to see whether there are any load identifiers in the table (meaning data has been loaded). If there are, then Report Distribution begins extracting documents for the report.
- SQL: An SQL query string is used to query the database. The query string consists of index fields and values that uniquely identify the documents that you want to retrieve.
- Named Query: A public named query is used to query the database. It contains the database query information that uniquely identifies the documents that you want to retrieve. To use this method, the named query must have been created before the report is defined, and it must be a public named query rather than a private one. A public named query is created from the Windows client.



For the example, two reports are defined, one for the Northwest regional sales report and one for the Midwest regional sales report. Figure 10-5 shows the Northwest regional sales report definition.

Figure 10-5 Report window for the Northwest regional sales report

For the example, the Load report type with an SQL query is used. Because the regional sales reports are in the same load, an SQL query is not strictly necessary. However, by using an SQL query, the reports can be retrieved separately and placed in the bundle in any order, and separator banner pages can be added to identify each report in the bundle. Another reason for creating separate reports is the flexibility to deliver each regional sales report only to the appropriate regional sales manager rather than sending both reports; this requires creating a separate bundle and distribution for each report. Along with selecting the report type, you must select the application group where the data is loaded. The application group must be selected before the SQL query can be defined, because the application group database field names are needed to build the SQL query. In Figure 10-6, an SQL query string is defined for the Northwest region by selecting application group database fields and operators. A segment date field can also be specified so that the query is limited to a smaller number of database tables.

Figure 10-6 SQL Query window for the Northwest regional sales report
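The query string assembled in the SQL Query window is an ordinary WHERE-clause fragment built from field, operator, and value selections. The following is a minimal sketch of that composition, assuming a hypothetical application group field named region; substitute your own application group's database field names:

```python
# Sketch of composing an SQL query string such as the one built in the
# SQL Query window. The field name "region" is hypothetical.
def clause(field, operator, value):
    """Return one field/operator/value fragment of the query string."""
    return f"{field} {operator} '{value}'"

# Multiple selections would be joined with AND.
query_string = " AND ".join([clause("region", "=", "Northwest")])
print(query_string)  # region = 'Northwest'
```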

After all of the information is entered for the report, click OK to save the report. The second report can be created in a similar way. In 10.2.4, "Adding a bundle" on page 324, we explain how to create a bundle.



10.2.2 Adding a banner

You can add a header banner, a separator banner, and a trailer banner to your report. The difference between the various banner pages is their location in the bundle and the information that is contained on the page. Banner pages, regardless of type, are created in the same way. For the example, a header banner, a separator banner, and a trailer banner are used. Figure 10-7 shows the banner window for a separator banner.

Figure 10-7 Add a Banner window

You must enter the banner name and a description, and then select a banner type. For Banner Type, select the kind of banner that you are creating. After you select the banner type, the list of information that can be included on the banner page is displayed in the Available Information list. You can add one or more lines of information to the banner page; the lines appear in the order in which you place them in the Selected Information list. Click OK to save the banner. Figure 10-8 shows an example of a separator banner for the Northwest regional sales report.

Figure 10-8 Example of a separator banner

10.2.3 Adding a schedule There are five different schedule types: once, daily, weekly, monthly, and load-based. Documents can be scheduled for delivery on a daily, weekly, or monthly basis, or they can be delivered once. Load-based schedules can only be used with reports that are defined using a report type of load. For the example, a load-based schedule must be used since the reports are defined using a report type of load. Figure 10-9 shows the schedule window for a load-based schedule. As part of the schedule definition, a start date, end date, and delivery time are specified. In the case of a load-based schedule, the Start Date field specifies the first day that you want the Report Distribution to start looking for documents that have been loaded into OnDemand beginning at the time specified in the Delivery Time field. For example, starting on 19 January 2004 at 10:00 AM, Report Distribution periodically queries the load identifier database table to see if any regional sales reports have been loaded. A Report Distribution parameter determines how frequently it looks for schedules to process. See Figure 10-16 on page 331 for an example of the Report Distribution Parameters window. In the case of load-based schedules, this refers to how often the load identifier database table is queried. After the processing of the load-based schedule begins, it is processed until midnight. At that point, processing of the schedule stops until the next day and starts again at the specified delivery time. If you know what time your data is usually loaded, you can set the delivery time to the same time or a time shortly after that so that Report Distribution does not look for the data before it is loaded. Click OK to save the schedule.



Figure 10-9 Add a Schedule window
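The daily window in which a load-based schedule is processed (from the delivery time until midnight, on each day between the start and end dates) can be sketched as follows. This is an illustration only, not the actual Report Distribution code:

```python
from datetime import date, datetime, time

# Illustration only: models the daily window in which a load-based
# schedule looks for loaded data, as described above.
def in_processing_window(now, start_date, end_date, delivery_time):
    """True if the schedule would be polling for loaded data at 'now'."""
    if not (start_date <= now.date() <= end_date):
        return False
    # Processing runs from the delivery time until midnight each day.
    return now.time() >= delivery_time

# Schedule starting 19 January 2004 with a 10:00 AM delivery time.
print(in_processing_window(datetime(2004, 1, 19, 10, 30),
                           date(2004, 1, 19), date(2004, 12, 31),
                           time(10, 0)))  # True
print(in_processing_window(datetime(2004, 1, 20, 9, 0),
                           date(2004, 1, 19), date(2004, 12, 31),
                           time(10, 0)))  # False (before delivery time)
```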

10.2.4 Adding a bundle

Figure 10-10 shows the General tab of the Add a Bundle window. This is where you specify the output format of the data. For the example, the format of the regional sales reports is ASCII line data, so Line Data is selected. If you include more than one report in the bundle, the format of every report must be the same and must match the output format. If the input format is different from the output format, you must use a transform program to convert the report data to the output format. For example, you can use the AFP2PDF transform program to convert AFP reports to PDF; in this case, you specify an output format of PDF. E-mail notification messages can be sent to one or more users during bundle processing. The message types are error messages, warning messages, progress messages, and completion messages. If you want to notify more than one user, a group can be used. A progress message is sent after the bundle has been processed for each recipient in the distribution. A completion message is sent after the bundle has been processed for all of the recipients.



Figure 10-10 General tab of the Add a Bundle window

Figure 10-11 shows the Bundle Contents tab of the Add a Bundle window. In this window, you decide which reports to include in the bundle. If banner pages are used, you also specify which banner pages to use. For the example, the bundle contains a header banner, a separator banner, and a trailer banner, as well as the two reports that were created earlier. The first report in the bundle is the MidWest Monthly Sales Report and the second is the NorthWest Monthly Sales Report. The reports can be included in the bundle in any order; the order is determined by how you add them to the Bundle Contents list. You can change the order of the reports by moving them up and down in the list.



Figure 10-11 Bundle Contents tab of the Add a Bundle window

The inclusion of a manifest in the bundle is optional. The manifest is similar to a banner page in that it contains information about the bundle: the name of the distribution, the time the distribution is processed, and a list of the files containing report data that are included in the bundle. If the manifest is included, it is always added to the end of the bundle.

The field titles on the banner pages and manifest can be created in many different languages. The choices are Arabic, Chinese (Simplified), Chinese (Traditional), Danish, Dutch, English, Finnish, French, French Canadian, German, Italian, Japanese, Korean, Norwegian, Portuguese (Brazil), Spanish, and Swedish. When selecting a banner language, consider that the banner pages are converted to the code page of the data. With this in mind, selecting a language such as Korean to use with data that is in a single-byte code page will not work correctly. After you make all of the selections for the bundle, click OK to save the bundle.
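The code-page caveat can be demonstrated directly: text in a language such as Korean cannot be represented in a single-byte code page. The Korean banner text below is a hypothetical example, and Latin-1 stands in for a generic single-byte code page:

```python
# Illustration of the code-page caveat above: Korean text cannot be
# encoded in a single-byte code page such as Latin-1.
text = "판매 보고서"  # hypothetical Korean banner text ("sales report")
try:
    text.encode("latin-1")
    print("encodes cleanly")
except UnicodeEncodeError:
    print("cannot be represented in a single-byte code page")
```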

10.2.5 Adding a distribution After you create the report, banners, bundle, and schedule, you must put them all together by creating a distribution. Figure 10-12 shows the General tab of the Add a Distribution window. The available Delivery Options are e-mail and server printer. For the example, the delivery method is e-mail. E-mail notification messages can be sent to one or more users during distribution processing. The message types are error messages, warning messages, progress messages, and completion messages. If you want to notify more than one user, a group can be used. Progress messages are sent after each recipient in the distribution has been processed. A completion message is sent after the distribution has been processed for all of the recipients.

Figure 10-12 General tab of the Add a Distribution window



Figure 10-13 shows the Bundle tab of the Add a Distribution window. Only one bundle can be selected for the Distribution Bundle. After the bundle is added, it cannot be changed. For the example, the bundle that was created previously, called Monthly Sales Reports, is selected.

Figure 10-13 Bundle tab of the Add a Distribution window

Figure 10-14 shows the Schedule tab of the Add a Distribution window. When you create a distribution, a schedule does not have to be selected, or one can be selected but not activated. Of course, if the distribution does not have an activated schedule, the reports in the selected bundle are not delivered. For the example, we set the Distribution Schedule to the load-based schedule that was created earlier and activate it. Normally, after a schedule is selected for the distribution, a different schedule cannot be selected. However, if the schedule has expired (for example, today's date is greater than the end date defined in the schedule), Report Distribution removes the schedule from any distributions that use it. If this occurs, a new schedule can be selected for the distribution. The schedule can be deactivated for the distribution at any time.

Figure 10-14 Schedule tab of the Add a Distribution window

Figure 10-15 shows the Recipients tab of the Add a Distribution window. In this window, you select the recipients that are going to receive the reports. For the example, the regional sales managers will receive the reports, so the user IDs of the managers have been added to the Selected Recipients list. If several users require the same set of reports, you can add the users to a group and add the group to the Selected Recipients list. The two regional sales managers could have been added to a group, and then the group would have been used instead of the individual user IDs. The check mark next to the user ID of each recipient in the list indicates that the recipient is active for the distribution. If the check mark is removed from the box, the recipient is deactivated and will not receive the reports. You can use this feature to temporarily deactivate a recipient, for example, when the recipient is on vacation and does not need the reports. If there is only one recipient for the distribution, you cannot deactivate that recipient. Also, there must be at least one activated recipient in the distribution, and all of the recipients must have e-mail addresses defined if the delivery method is e-mail. If the reports are going to be sent to a server printer, all of the recipients must have a default server printer defined.

Figure 10-15 Recipients tab of the Add a Distribution window

After all of the selections are made for the Add a Distribution window, click OK to save the distribution. All of the steps required to set up the extraction, bundling, and delivery of the regional sales reports are now completed. Start the Report Distribution program on the server, and you are ready to begin receiving the regional sales reports after they are loaded into OnDemand.



10.2.6 Report Distribution parameters There are several parameters that control the Report Distribution process. Figure 10-16 shows the Report Distribution Parameters window. To display the window, right-click the Report Distribution icon and select Parameters.

Figure 10-16 Report Distribution Parameters window

One of the Operational Parameters is Activate Report Distribution. By clearing the check box for this option, Report Distribution can be deactivated so that it temporarily stops processing distributions. Distributions that are in progress will be completed, but no new distributions will be processed until Report Distribution is activated again. Other options that you can specify include how often Report Distribution looks for distributions that are ready to be processed (Number of minutes between schedule searches) and the number of times an operation should be retried before it is marked as failed (Maximum number of times to retry an operation).

Messages that are generated during the extraction, bundling, and delivery stages of Report Distribution can optionally be logged. They are viewable from the Windows client by opening one of the folders that was created during Report Distribution installation.

If you use the e-mail delivery option or need to send e-mail notification messages, you must specify a Simple Mail Transfer Protocol (SMTP) server address that processes the e-mail messages that are generated by Report Distribution. You can optionally specify address information that is specific to your company, such as a return e-mail address and an e-mail address to use for correspondence. You can also use a file that contains company-specific information or other types of information that you want to include with the delivery of the documents. If a global attachment file is specified, it is attached as a separate file to the e-mail message that contains the extracted documents.

10.3 Hints and tips

To help you work successfully with Report Distribution, we provide the following hints and tips:

- A load-based schedule can only be used with reports that are defined using the Load report type, and vice versa. The load-based method can only be used for data that was loaded using IBM DB2 Content Manager OnDemand for Multiplatforms Version 7.1.1 or later. Data that was loaded with a prior version can be retrieved using the time-based method.
- After documents are delivered using a load-based schedule, they cannot be delivered again using a load-based schedule. For example, if the January monthly sales report was delivered using a load-based schedule, it cannot be delivered again by a load-based schedule. If for some reason you need to deliver the January sales report again, you can use a schedule with a schedule type of once.
- Consider that you loaded several months of data and now want to start delivering the documents using a load-based schedule, starting with the latest month. In this case, you can add a date to the query so that documents from prior months are not delivered the first time the schedule is processed. For example, the sales report has been loaded for the last four months, and now you want to set up the distribution of the sales report starting with the latest month. You can include information in the SQL query that starts the search for the documents with the latest month. From this point on, reports from new loads will be processed.
- If a daily, weekly, or monthly schedule is used, use Named Query type reports if new data is loaded and retrieved on a regular basis. A named query lets you set up a date range based on the current date, whereas an SQL query requires a specific date or date range in the query string. If an SQL query string is used, the dates in it must be changed each time so that only the latest data is retrieved. For example, if data is loaded on a monthly basis and is loaded by the third of the month, a monthly schedule can be set up to run on the third of the month, and the named query specifies a date range of "t - 3d" to "t" for the report date. "t - 3d" means today's date minus three days, so Report Distribution looks for reports that have a report date of the first, second, or third day of the month. For example, to extract the report for the month of January, the query matches documents that have a report date of January 1, January 2, or January 3. Assuming the date in the report data is January 1, the report data for the month of January is extracted and delivered.
- If a distribution is scheduled for delivery using a once schedule and the starting date is in the past, the distribution is delivered immediately (that is, the next time Report Distribution scans for active schedules). For example, if the starting date of the schedule is 22 March 2006 and the current date is 21 April 2006, the distribution is processed immediately.
- Normally, the only way an OnDemand system definition (such as a user, group, report, or bundle) is updated is when a user uses one of the administrative programs to update the definition. An exception is that, when a schedule expires and it is used in one or more distributions, Report Distribution automatically removes the schedule from all of the distributions that are using it, so the schedule is no longer selected in those distributions.
- To determine which distributions are scheduled for delivery, you can use the search option. You can select the search option by right-clicking the Distribution icon in the tree view of the administrative client. When the Search for Distributions window opens, clear the No Schedule and Disabled Schedule check boxes and click OK. Only the distributions that are scheduled for delivery are displayed in the list of distributions. Figure 10-17 shows an example of how to search for scheduled distributions.



Figure 10-17 Search for Distributions window

- Because the distribution definitions can be updated by the Report Distribution program, you might want to refresh the distribution information before you search for scheduled distributions. To refresh the list, select the Distribution icon in the tree view of the administrative client and then select View → Refresh List. You can also press the F5 key after you select the Distribution icon. The Search for Distributions window also provides a way to identify which distributions use a specific schedule or a specific bundle. You can also identify which distributions will be delivered to a specific recipient.
- The search option is available for the other Report Distribution areas. For bundles, you can use the search option to determine which bundles contain specific reports or banners. You can use the banner search option to search for banners with a specified banner type, the report search option to search for reports with a specified report type, and the schedule search option to search for schedules with a specified schedule type.
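The "t - 3d" to "t" date-range arithmetic mentioned in the hints above can be sketched as follows, where "t" is the date the schedule runs:

```python
from datetime import date, timedelta

# Sketch of the "t - 3d" to "t" named-query date range: today's date
# minus three days through today.
def named_query_range(t, days=3):
    """Return the (start, end) dates for a "t - <days>d" to "t" range."""
    return t - timedelta(days=days), t

# A monthly schedule running on 3 January covers reports dated
# 31 December through 3 January, which includes 1-3 January.
start, end = named_query_range(date(2006, 1, 3))
print(start, end)  # 2005-12-31 2006-01-03
```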



Chapter 15.

Did you know?

In this chapter, we offer various tidbits of information gathered in the course of writing this redbook. Each section discusses a different feature of OnDemand that might be useful in administering a production environment. We also provide some program samples, which you can download from the IBM Redbooks Web site. The samples offer practical ways to educate yourself on using the OnDemand APIs or serve as a base for more advanced development. In this chapter, we cover the following diverse topics:

- Using the Document Audit Facility
- Related Documents feature
- Store OnDemand
- OnDemandToolbox
- Batch OnDemand commands in z/OS
- Testing the PDF indexer in z/OS
- Date range search tip for users
- Ad-hoc CD-ROM mastering
- OnDemand Production Data Distribution
- Customizing the About window
- Modifying client behavior through the registry
- Negative numbers in decimal fields handling
- Message of the day
- OnDemand bulletins

© Copyright IBM Corp. 2003, 2006. All rights reserved.



15.1 Using the Document Audit Facility OnDemand has incorporated a feature called the Document Audit Facility (DAF). This function allows for basic approval routing of a document. To use the DAF, you must first define the reports to be audited to OnDemand and create an audit control file. An administrator can define the default status for a document, and users with the appropriate permissions have the ability to click a button on the client to change the status of the document.

Background for our example

In our example, a company scans vendor invoices as they are received. Each invoice must be reviewed and approved before payment is made to the vendor. An administrator sets up an index to the invoices that can be one of four values: Hold, Accept, Reject, or Escalate. When invoices are scanned, they are loaded with a default of Hold status. The only users who have permission to view these Hold invoices are the auditors or managers. After the auditor reviews the invoice, they can click a button to set the document to Accept, Reject, or Escalate status:

- Accepted invoices should be paid.
- Rejected invoices should not be paid due to problems with the invoice.
- Escalated invoices should be reviewed by managers to determine whether they should be paid.

This action changes the value of that index. Permissions for the Accounts Payable user group are set up in such a way that they can only view invoices that have the Accept status. Purchasing can view invoices with the Reject status to determine why they were rejected and contact the vendor to correct the problem. Auditors and managers can view invoices that have the Escalate status.



15.1.1 Creating a status field in the application group

Add a field to the application group that is called audit or status. This field must be a one-character string with Case set to Upper and Type set to Fixed. In the Mapping section of the application group, add database values and displayed values for each of the required statuses. See Figure 15-1 for adding status field values to the application group. We added four values for status: Hold (H), Accept (A), Reject (R), and Escalate (E).

Figure 15-1 Adding status field to the application group

Chapter 15. Did you know?

477


15.1.2 Setting a default in the application Decide what you want the default value for this index to be and set this default in the application. For our example, invoices should be loaded to the system with Hold status, so we set the default to H. See Figure 15-2 for our example of setting the default status in the application on the Load Information tab.

Figure 15-2 Setting default in application



15.1.3 Updating permissions for various users

Update the permissions for your users or user groups to restrict them to documents with a specific status. In our example, Accounts Payable should only be allowed to view invoices after they are audited and assigned an Accept (A) status. See Figure 15-3 for how to specify query restrictions.

Figure 15-3 Query restriction

Query restrictions can be added at the user or user group level.



We also have a group of users named Auditors. These are people who are allowed to change the status of documents. This group has no query restriction and must have update authority to the documents, as shown in Figure 15-4.

Figure 15-4 Application group permission: no query restriction

15.1.4 Creating folders for each user type In our example, we created separate folders for the accounts payable view and the auditors view so that the status field is not visible from the accounts payable folder. The folder created for the auditor contained all the fields including the status of each document. Note: To reduce administration, it might be advantageous to have a single folder, because it is possible to configure a folder to have different field views for different users.

15.1.5 Creating the Document Audit Facility control file The Document Audit Facility is controlled by a file named ARSGUI.CFG, which you must create and store in the Windows client program directory (\Program Files\IBM\OnDemand32). The DAF file has the same format and syntax as a standard Windows INI file. This file contains a section named AUDIT, which identifies one or more folder sections. Each folder section identifies a folder that can be audited. See Figure 15-5 for our DAF control file.

480

Content Manager OnDemand Guide


[AUDIT]
Folders=Invoices

[Invoices]
FOLDER=Invoices - Auditor
AUDIT_FIELD=Status
TEXT1=Accept
TEXT2=Reject
TEXT3=Escalate
VALUE1=A
VALUE2=R
VALUE3=E

Figure 15-5 ARSGUI.CFG

AUDIT section
The AUDIT section contains one record, the FOLDERS record. The FOLDERS record contains a comma-separated list of folder section names. You must create an additional section in the DAF file for each folder section named in the FOLDERS record.

Important: The total number of characters in the FOLDERS record must not exceed 255.

FOLDER section
Each FOLDER section contains the following records:

- FOLDER specifies the name of the folder, exactly as it appears in OnDemand. The FOLDER record is required.
- AUDIT_FIELD specifies the name of the folder field used to audit documents, exactly as it appears in OnDemand. The AUDIT_FIELD record is required.
- TEXTx is the caption that appears on the command button used to change the status of the document. Up to eight TEXT settings are permitted.
- VALUEx is the value that is stored in the database when the corresponding TEXTx button is clicked. This value is stored in the application group field and must match one of the mapped field values. One VALUE record is required for each TEXT record. Up to eight VALUE settings are permitted.

Note: You must restart the OnDemand client after you create this file.
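Because the file follows standard INI syntax, the record rules above can be checked mechanically. The following is a hypothetical helper, not part of OnDemand, that validates the DAF records in a configuration string; in practice you would read the text from ARSGUI.CFG:

```python
import configparser

# Hypothetical helper (not part of OnDemand): checks a DAF configuration
# for the rules described above - a FOLDERS record under [AUDIT] of at
# most 255 characters, required FOLDER and AUDIT_FIELD records, and
# matched TEXTx/VALUEx pairs (up to eight per folder section).
def check_daf(text):
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    folders = cfg["AUDIT"]["Folders"]
    if len(folders) > 255:
        raise ValueError("FOLDERS record exceeds 255 characters")
    for name in folders.split(","):
        section = cfg[name.strip()]
        for required in ("FOLDER", "AUDIT_FIELD"):
            if required not in section:
                raise ValueError(f"{name}: missing {required} record")
        # configparser lowercases option names by default.
        texts = [k for k in section if k.startswith("text")]
        values = [k for k in section if k.startswith("value")]
        if len(texts) != len(values) or len(texts) > 8:
            raise ValueError(f"{name}: TEXTx/VALUEx records must pair up")

sample = """\
[AUDIT]
Folders=Invoices
[Invoices]
FOLDER=Invoices - Auditor
AUDIT_FIELD=Status
TEXT1=Accept
TEXT2=Reject
TEXT3=Escalate
VALUE1=A
VALUE2=R
VALUE3=E
"""
check_daf(sample)
print("DAF configuration looks valid")
```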



15.1.6 Viewing the folders Now when the auditor logs into the Invoices - Auditor folder, three buttons (on the bottom right) allow for updating of the status field. See Figure 15-6.

Figure 15-6 Auditors view of the folder

When Accounts Payable users query the server, they are limited in what they can see. They do not see the status buttons or the status of the documents. Accounts Payable only sees the statements that are accepted by the auditor (Figure 15-7).

Figure 15-7 Customer view of the folder



This document audit facility can be a useful way of auditing documents in OnDemand. We only set up four status field values in our example; there is a limit of eight status buttons per folder. To take this example further, it is possible to set up multiple folders, each with their own distinct status buttons. By doing so, it is possible to route a document through a series of auditing steps. You can define various users, each with a different auditing responsibility. For example, a particular user is responsible for pulling up all failed documents and placing them in some other status. To do this, each status must be mapped in the application group, and each folder must be specified within the appropriate user's arsgui.cfg file. It is also important to define each user with the correct search and update permissions in the application group.

15.1.7 Tracking changes to invoice status

You might want to track who is changing the status of the invoices; this is often a requirement of financial auditors in a company. You do this in the application group by selecting to log document updates and to log specific field values. On the Message Logging tab of the application group, verify that Index Update is selected. Selecting this option causes system log message 80 to be logged every time an invoice index is updated. See Figure 15-8.

Figure 15-8 Message logging setup



In the Update an Application Group window, on the Field Information tab, select the Log check box for each field whose value you want captured when the invoice status is updated. See Figure 15-9.

Figure 15-9 Field update logging setup

In our example, we log three field values so that we can uniquely identify the invoice as accepted or rejected: the purchase order number, the invoice number, and the status. To check who is updating the invoice status, use the system log to review message number 80, which contains the information about application group document updates. System log message 80 includes the date and time the update was made, the user ID making the update, and the message text of the update. The following example shows a complete system log message 80:

01/09/2006 00:18:02  DBRYANT  40376  Info  N/A  80  ApplGroup DocUpdate: Name(Invoices) Agid(5395) OrigFlds() UpdFlds()

If no fields are selected for logging, the system log 80 message contains blanks, as shown in the following example:

ApplGroup DocUpdate: Name(Invoices) Agid(5395) OrigFlds() UpdFlds()

If only the status field is selected for logging, the system log 80 message contains only the before and after status values. This is not sufficient information to identify the exact document rejected, as shown in the following example:

ApplGroup DocUpdate: Name(Invoices) Agid(5395) OrigFlds('H') UpdFlds('R')



In our example, the purchase order number, invoice number, and status field are selected for logging. The system log 80 message then contains enough information to identify the exact invoice rejected, as shown in the following example:

ApplGroup DocUpdate: Name(Invoices) Agid(5395) OrigFlds('A12611','762301','H') UpdFlds('A12611','762301','R')
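If you need to process these messages outside the client, the before and after field values can be pulled out of the message text with a small script. This is a hypothetical helper that assumes the message layout shown above:

```python
import re

# Hypothetical helper: extracts the before/after field values from a
# system log message 80 line in the format shown above.
MSG80 = re.compile(r"OrigFlds\((?P<orig>[^)]*)\) UpdFlds\((?P<upd>[^)]*)\)")

def parse_msg80(line):
    """Return (orig_fields, upd_fields) lists, or None if no match."""
    m = MSG80.search(line)
    if m is None:
        return None
    def fields(group):
        return [v.strip("'") for v in group.split(",")] if group else []
    return fields(m.group("orig")), fields(m.group("upd"))

line = ("ApplGroup DocUpdate: Name(Invoices) Agid(5395) "
        "OrigFlds('A12611','762301','H') UpdFlds('A12611','762301','R')")
orig, upd = parse_msg80(line)
print(orig, upd)  # ['A12611', '762301', 'H'] ['A12611', '762301', 'R']
```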

15.2 Related Documents feature

The Related Documents feature is a relatively unknown but useful feature that is available within the standard OnDemand Windows client. This feature gives users the ability to retrieve a document and then, based on the content of that document, search for other related documents stored in OnDemand with a simple click of a button. For details about how to configure the Windows client for Related Documents, refer to the "Related Documents" section, in the "Windows 32-bit GUI Customization Guide" chapter, in IBM Content Manager OnDemand - Windows Client Customization Guide and Reference, SC27-0837. The following sections contain extracts from this guide along with important tips and practical examples of how to configure this feature.

15.2.1 How it works
In principle, the Related Documents feature is designed so that you can load two completely different types of documents in OnDemand and link them together within the OnDemand client. For example, a hypothetical finance company produces quarterly finance reports for each of its clients. In conjunction with these reports, the company produces a summary sheet containing a subset of reference numbers for these financial reports. Using the Related Documents feature, it is possible to access the financial reports with a single click from within the summary sheet. To do this, the user selects text within a document and then clicks the Related Documents icon, which appears on the task bar when Related Documents is configured for that folder. The selected text is then used as the search criteria on another folder. The first document in the hit list from this search is displayed in the client alongside the document that is already open. Using the preceding example, the summary sheet must be a document type that allows text to be selected within the OnDemand client; Related Documents does not work if the summary sheet is either PDF or image data.

Chapter 15. Did you know?



Figure 15-10 shows an OnDemand Windows client that is configured to use the Related Documents feature and illustrates a line data document that has been used to retrieve a letter and a credit card statement using the Related Documents icon. The Related Document icon for this example is a star on the right side of the toolbar. You can replace this icon with any icon design that you choose.

Figure 15-10 Related Documents search

15.2.2 Configuring Related Documents To configure an OnDemand Windows client to enable the Related Documents feature, it is necessary to edit the registry of the Windows machine on which the client is installed. For a detailed description of the registry keys and string values that must be added, see IBM Content Manager OnDemand - Windows Client Customization Guide and Reference, SC27-0837. We provide a sample registry file (import.reg) in Example 15-1, which can be edited with your own folder names, fields, and icon values and then imported into the registry.



Example 15-1 Sample registry import file (import.reg)

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\IBM\OnDemand32\Client\RelatedDocs]
"Related"="Letters,Baxter"

[HKEY_CURRENT_USER\Software\IBM\OnDemand32\Client\RelatedDocs\Baxter]
"MenuText"="Related Check"
"BitmapDll"="c:\\reldocs\\extaddll.dll"
"BitmapResid"="135"
"RelatedFolder"="Cheque Images"
"Fields"="Amount=eq\\%s"
"Arrange"="v"
"Folders"="Baxter*\\Credit*"

[HKEY_CURRENT_USER\Software\IBM\OnDemand32\Client\RelatedDocs\Letters]
"MenuText"="Financial Report"
"BitmapDLL"="c:\\reldocs\\extaddll.dll"
"BitmapResid"="135"
"Folders"="Letters"
"RelatedFolder"="Financial Report"
"Fields"="Reference Number=eq\\%s"
"Arrange"="v"

To import a file into the registry:
1. Create a registry file using the sample in Example 15-1. The file should have the extension .reg.
2. Click Start → Run....
3. Type regedit into the box that is displayed.
4. In the Registry Editor, click Registry → Import Registry File.
5. Select the registry file that you created in step 1 and click Open.



After you import the registry keys and string values, the structure of this part of the registry should look similar to the example in Figure 15-11.

Figure 15-11 Registry structure for Related Documents feature

15.3 Store OnDemand
Store OnDemand is a graphical application that allows a user to index and store any PC document into an existing OnDemand server. It is designed for storing single documents, rather than the main design point of OnDemand, which is loading large quantities of report files in batch. With the multipurpose store capabilities of Store OnDemand, a user can quickly and easily archive any document, which is then instantly available to the rest of the enterprise. Store OnDemand can store documents to OnDemand servers on UNIX, Windows, z/OS, and iSeries. It operates as a thick client to access OnDemand (V7.1) and runs on the Microsoft Windows platforms. It requires a DB2 Universal Database client to be installed on the PC and a database-to-database connection to the server. Store OnDemand is available as a sample with the source code. It is intended to provide customers and partners with examples of how to code the APIs while providing a useful as-is tool to meet customer needs.



Note: We provide the Store OnDemand executable and the source code as an example to show you what you can implement with the APIs. They are strictly to be used “as is.” The executables and the source code are not supported by the IBM United Kingdom developers who developed the code, the IBM DB2 Content Manager support team, or the ITSO redbook team. Refrain from contacting the developers and the support personnel about these items. To download the executables and the source code, refer to Appendix A, “Additional material” on page 521.

15.3.1 Why it is needed
Previously, to load a single document into OnDemand, it was necessary to load it via the Generic Indexer. The basic process is to produce an index file similar to Example 15-2. Then, with the index file and the document file, the arsload program is used to load the document into OnDemand. Typically, the only place that the arsload program can run is on the server. Therefore, users might send files that require loading into OnDemand to an administrator, who initiates a daily batch load of these documents. With the Store OnDemand services offering, you no longer need to rely on a system administrator to store a single document. The following section explains what the service offering provides.

Example 15-2 Sample Generic Index file

COMMENT: OnDemand Generic Index File Format
COMMENT: This file has been generated by the arsdoc command
COMMENT: February 25, 2002 09:04:04
COMMENT:
CODEPAGE:5348
COMMENT:
IGNORE_PREPROCESSING:1
COMMENT:
GROUP_FIELD_NAME:name
GROUP_FIELD_VALUE:SMITH CYCLERY CO
GROUP_FIELD_NAME:account
GROUP_FIELD_VALUE:000-000-000
GROUP_FIELD_NAME:crd_date
GROUP_FIELD_VALUE:19950303
GROUP_FIELD_NAME:balance
GROUP_FIELD_VALUE:1058.110000
GROUP_OFFSET:0
GROUP_LENGTH:2800
GROUP_FILENAME:c:\temp\credit.1.Baxter Bay Credit.Baxter Bay Credit.out
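The index file format is simple enough to generate programmatically. The sketch below builds the text of an index file for a single document; `generic_index` is our own hypothetical helper (the field names and values are taken from Example 15-2), not part of OnDemand:

```python
def generic_index(fields, offset, length, filename):
    """Build the text of an OnDemand Generic Indexer index file for a
    single document, in the format shown in Example 15-2."""
    lines = ["COMMENT: OnDemand Generic Index File Format",
             "CODEPAGE:5348"]
    for name, value in fields:
        lines.append(f"GROUP_FIELD_NAME:{name}")
        lines.append(f"GROUP_FIELD_VALUE:{value}")
    lines += [f"GROUP_OFFSET:{offset}",
              f"GROUP_LENGTH:{length}",
              f"GROUP_FILENAME:{filename}"]
    return "\n".join(lines) + "\n"

text = generic_index([("name", "SMITH CYCLERY CO"), ("account", "000-000-000")],
                     0, 2800, r"c:\temp\credit.out")
```

Writing `text` to a file alongside the document gives arsload everything it needs for the load.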



15.3.2 What it does
Store OnDemand is a client-based application that allows users to initiate the storing of documents into OnDemand directly, rather than going through an administrator. To store a document in OnDemand:
1. Launch the Store OnDemand application. The user is required to log on using an OnDemand user ID and password.
2. In the Store OnDemand window (Figure 15-12) that opens, complete the following steps:
a. Locate the document that you want to store into OnDemand. Either click Locate to find the document, or find it in a Windows Explorer window and drag it onto the Store OnDemand window.

Figure 15-12 Store OnDemand screen display

b. From the Application group and Application lists, select the application group and application in which the document should be stored.



c. When the application group is selected, the index fields required for this document are displayed. Either enter the index fields, or click View and cut and paste the index values from the document to reduce the opportunity for user error.
d. Click Store Document to load the document into OnDemand.

Note: Data verification functions within Store OnDemand ensure that the data entered is in the correct format and type, to further reduce the possibility of entering incorrect indexes. The Store Document button is unavailable until these validation criteria are satisfied; otherwise, storing is not possible. Guidance regarding these criteria is provided in the Store OnDemand window.

15.4 OnDemandToolbox
IBM Germany developed a set of sample programs for use with OnDemand Common Server. These programs are intended to provide customers and partners with examples of how to code client APIs while providing a useful as-is tool to meet customer needs. The sample toolbox code and documentation are available on the redbook Web site. Refer to Appendix A, “Additional material” on page 521, for download instructions. The components of the toolbox are:

• OD Update
• OD Delete
• OD Store
• OD File System Monitor

Each component is a single program that works as a stand-alone application. All programs are written in Visual Basic .NET and are provided as ready-to-use binaries and as source code under the IBM Public License. Installation of the OnDemand client is a prerequisite for the installation of the OnDemand Toolbox. The Microsoft .NET Framework must be installed during the setup of the toolbox.



OD Update
The OD Update application provides an easy-to-use interface to update the index (key) values of documents archived in OnDemand. The key features are:

• Easy update of all documents
• No need for any client configuration before using it
• Fits into the OnDemand security concept

The user must have update authority to the application group, which is configurable using the OnDemand Administrator.

OD Delete
The OD Delete application is used for deleting documents archived in OnDemand. The key features are:

• Easy deletion of single or multiple documents
• No need for any client configuration before using it
• Fits into the OnDemand security concept

The user must have delete authority to the application group, which is configurable using the OnDemand Administrator. Both the OD Update and OD Delete applications provide a user interface that is similar to the OnDemand client, making them easy to use.

OD Store
The OD Store application is used for archiving PC files directly into OnDemand. The key features are:

• Easy interface for archiving single documents into OnDemand
• Integrates with the Windows Explorer
• Can handle all file types supported by OnDemand
• Requires prior configuration of which file type should be archived using which application, application group, and folder
• Configuration can be exported and easily distributed to other users
• Fits into the OnDemand security concept

The user must have archive authority to the application group, which is configurable using the OnDemand Administrator.



OD File System Monitor
The OD File System Monitor application is a background application that monitors directories on a PC and automatically archives the files found in those directories, using preconfigured field information and information extracted from the files. To use the monitor, configure the following items:

• The directories that should be monitored (multiple directories and network drives are supported)
• The file types that should be archived into OnDemand
• What to do after successful or unsuccessful archiving
• The file types that should be archived into which application and which application group
• What to do with files of other, unknown file types found in the monitored directories
• The folder that should be used and the information that should be used for the indexes

The key features are:

• When configured, the OD File System Monitor runs without any user interaction as a background process.
• The document's value for each index can be set to the file's meta information, such as file size, creation date and time, and more.
• The OD File System Monitor creates detailed logs (configurable) about all its activities.
• The OD File System Monitor fits into the OnDemand security concept. The configured user account must have archive authority to the application group, which is configurable using the OnDemand Administrator.
• Configuration of the OD File System Monitor is saved as an XML file and, therefore, can be distributed among other installations.
• The source code can easily be modified to extract index information from other sources, such as the file's content.



15.5 Batch OnDemand commands in z/OS
The OnDemand commands are designed to be entered from a UNIX or Windows NT command line. The ARSLOAD and ARSADMIN commands can be called as programs with the proper job control language (JCL), and the output is written to DD SYSOUT. Other commands, such as ARSDATE and ARSADM, can be run as a batch job using the BPXBATCH program. The example in Figure 15-13 shows how to run the ARSDATE command as a batch job. The executable file ARSDATE resides in the /usr/lpp/ars/bin directory in the hierarchical file system (HFS) of UNIX System Services. The output is written to the STDOUT statement, which points to the output file /tmp/arssockt.std in the HFS. This file must be accessible, or the user running the BPXBATCH program must have the proper authority to create it.

Figure 15-13 BPXBATCH sample program

15.6 Testing PDF indexer in z/OS
A good way to test the PDF indexer is to run it as a batch job with the ARSPDOCI program, without loading the data into the database. ARSPDOCI is shipped with the base code. Your ARSSOCKD instance does not have to be up. The JCL sample in Figure 15-14 shows how to run the ARSPDOCI program.



Figure 15-14 ARSPDOCI sample JCL

The PAR DD file contains the parameters for the indexer (Example 15-3). You can cut and paste this information from the indexer information on the Application panel.

Example 15-3 Parameter file for ARSPDOCI program

COORDINATES=IN
TRIGGER1=UL(5.67,0.85),LR(6.18,1.09),*,'Page 1'
FIELD1=UL(4.68,0.85),LR(5.71,1.06),0,(TRIGGER=1,BASE=0)
FIELD2=UL(4.17,1.22),LR(7.39,1.47),1,(TRIGGER=1,BASE=0)
FIELD3=UL(5.61,1.43),LR(6.49,1.71),1,(TRIGGER=1,BASE=0)
FIELD4=UL(5.14,1.46),LR(5.61,1.64),1,(TRIGGER=1,BASE=0)
INDEX1='rdate',FIELD1,(TYPE=GROUP)
INDEX2='rname',FIELD2,(TYPE=GROUP)
INDEX3='rplan',FIELD3,(TYPE=GROUP)
INDEX4='rrefcode',FIELD4,(TYPE=GROUP)
INDEXSTARTBY=10
INDEXDD=hfs:/tmp/pdf.ind
INPUTDD=DDN:INPUT
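Because the TRIGGER and FIELD definitions use a regular UL(x,y),LR(x,y) syntax for the corner coordinates (in inches), they are easy to inspect programmatically, for example to check that a field box really sits where you expect on the page. A small Python sketch (`parse_area` is our own hypothetical helper, not part of ARSPDOCI):

```python
import re

def parse_area(spec: str) -> dict:
    """Extract the UL(x,y) and LR(x,y) corner coordinates, in inches,
    from a PDF indexer TRIGGER or FIELD definition line."""
    coords = re.findall(r"(UL|LR)\(([\d.]+),([\d.]+)\)", spec)
    return {corner: (float(x), float(y)) for corner, x, y in coords}

area = parse_area("FIELD1=UL(4.68,0.85),LR(5.71,1.06),0,(TRIGGER=1,BASE=0)")
# area["UL"] == (4.68, 0.85) and area["LR"] == (5.71, 1.06)
```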



15.7 Date range search tip for users
Instead of keying specific dates when searching for documents within the OnDemand client, it is sometimes easier to use the T date search option. This option lets you search for documents based on days, months, or years that are relative to the current system date of the PC. Type the letter T in the date search field and set the search operator to Equal To. When you run the search, OnDemand retrieves the documents that contain today's date. The T date search option can be used with the search operator set to Between or set to Equal To. You can also use the following pattern with the T date search option:

T { + or - } # { D or M or Y }

The braces denote groups of optional parameters for the T format string; choose one of the symbols in each group. If you leave out the plus sign (+) or the minus sign (-), OnDemand assumes a + sign. If you leave out D, M, or Y, OnDemand assumes D. The T format string is case insensitive, meaning that you can type T or t, D or d, M or m, and Y or y. Table 15-1 describes the T format string.

Table 15-1 T format string

  Symbol   Description
  T        Current date (required)
  +        Forward from the current date
  -        Backward from the current date
  #        Represents the number of days, months, or years (required)
  D        Days (number of days)
  M        Months (number of months)
  Y        Years (number of years)


Table 15-2 lists examples of using the T format string with the search operator set to Equal To.

Table 15-2 T format string examples with the Equal To search operator

  T string   Meaning
  T-6M       6 months prior to the current date
  T+30D      30 days forward from the current date
  T30        30 days forward from the current date (same as above)
  T-1Y       1 year prior to the current date
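As a way to check what a given T string resolves to, the rule above can be sketched in Python. `resolve_t` is our own illustrative helper, not part of any OnDemand client API, and the month-end clamping behavior is our assumption:

```python
import re
from datetime import date, timedelta

def resolve_t(spec: str, today: date) -> date:
    """Resolve a T format string, T{+|-}#{D|M|Y}, to a concrete date."""
    m = re.fullmatch(r"[Tt](?:([+-]?)(\d+)([DdMmYy]?))?", spec)
    if m is None:
        raise ValueError(f"not a T format string: {spec!r}")
    if m.group(2) is None:                      # a bare "T" means today
        return today
    sign = -1 if m.group(1) == "-" else 1       # a missing sign means +
    n = sign * int(m.group(2))
    unit = (m.group(3) or "D").upper()          # a missing unit means days
    if unit == "D":
        return today + timedelta(days=n)
    # Shift whole months (12 per year), clamping the day at month end
    total = today.year * 12 + (today.month - 1) + n * (12 if unit == "Y" else 1)
    y, mth = divmod(total, 12)
    leap = y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)
    last = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31][mth]
    return date(y, mth + 1, min(today.day, last))

base = date(2006, 1, 9)
# Table 15-2 examples, anchored to a fixed "today" for illustration:
# T-6M -> 2005-07-09, T30 -> 2006-02-08, T-1Y -> 2005-01-09
```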

Table 15-3 lists examples of using the T format string with the search operator set to Between.

Table 15-3 T format string examples with the Between search operator

  T string          Meaning
  T-6M and T        Between 6 months prior to the current date and the current date
  T-60D and T-30D   Between 30 and 60 days prior to the current date
  T-60 and T-30     Same as previous
  T and T30         Between the current date and 30 days forward from the current date

15.8 Ad-hoc CD-ROM mastering
You can extract data from an OnDemand server and use the OnDemand client to put that data into a directory on your PC. You can then copy the documents and indexes to a CD or DVD for easy distribution, or leave the data in the PC directory for demonstration purposes. Select Ad-Hoc CDROM Mastering during the client installation (see Figure 15-15). Then follow the steps in the rest of this section.

Figure 15-15 Client installation: selecting the Ad-Hoc CDROM Mastering option



15.8.1 Transferring documents from the OnDemand server to the staging drive
Follow these steps to use the CD-ROM mastering option:
1. Launch the OnDemand client, and select File → Set CD-ROM Mastering Options. See Figure 15-16.

Figure 15-16 Set CD-ROM Mastering Options

2. In the Set CD-ROM Mastering Options window (Figure 15-17), under CD-ROM User, enter your user ID and password to the CD-ROM, which is an OnDemand server. The default user ID and password are both cdrom. You must select the staging path. Then click OK.

Figure 15-17 Set CD-ROM Mastering Options: staging path



Important:
• If the Exclude Notes check box is checked, public annotations are not copied.
• Only local hard drives are available as staging drives.
• You cannot select more than one copy.

3. The CD-ROM Mastering option becomes active on the File menu.
4. Run a search so that the document list is displayed. Then select File → CD-ROM Mastering. See Figure 15-18.

Figure 15-18 Starting the CD-ROM Mastering feature

Important: The CD-ROM Mastering option is available only when a document list is displayed. If a document in the list is being viewed, the CD-ROM Mastering option is unavailable.



5. In the CD-ROM Mastering window (Figure 15-19), follow these steps:
a. Select the folder that you want to add to the CD-ROM. Your current folder is automatically selected, and you can select alternate folders from the list.
b. In the CD-ROM folder description field, enter a description. The description is required and is saved in the drop-down list for the user in other mastering sessions.
c. You can update the CD-ROM mastering options by clicking Set Options.
d. Click OK.

Figure 15-19 CD-ROM Mastering: adding folders to CD-ROM

Important:
• To transfer the documents to a staging drive, you must keep the document list on the screen. Only one folder can be staged at a time.
• All items in the document list are placed on the CD-ROM.

6. The CD-ROM mastering process starts. You should see a window with five options in it (Figure 15-20):
– Clean: Removes all files in the staging directory
– Setup: Creates the necessary directory structures in the staging directory
– Fetch: Retrieves the data and resources for the items in the hit list
– Index: Re-indexes the retrieved data for the CD-ROM
– Stage: Copies the CD-ROM installation files and the OnDemand client (along with any installed languages) to the staging directory


Figure 15-20 CD-ROM Mastering process

7. After the CD-ROM mastering process finishes, you see a message like the example in Figure 15-21. Click Yes to finish the process. Click No if you want to add another folder or stop the CD-ROM mastering process.

Figure 15-21 CD-ROM Mastering: process complete message

8. If you click No in the previous step, you see the message shown in Figure 15-22. Click No to finish the process. Click Yes to return to the previous window to select another folder.

Figure 15-22 CD-ROM Mastering: adding another folder

9. After you finish selecting the folders and staging the documents that you want, when prompted by the message “Proceed with the mastering of volume xxxnnnnnnnn?”, click Yes. The CD-ROM image is finalized and can now be accessed with the OnDemand client or written to a CD-ROM.



OnDemand writes messages into the system log while documents are retrieved for the CD-ROM image. After the CD-ROM image is finalized, a CD-ROM creation manifest is written that contains the user ID, password, and a listing of the folders that are included in the CD-ROM image. The following examples are system log messages created by CD-ROM mastering:

• Message 81

ApplGroup ObjRetrieve: Name(CHECKS) Agid(5252) ObjName(4FAAA) NodeName(-CACHE-) Nid(4) Server(-LOCAL-) Off(0) Len(68928) Time(0.001)

• Message 67 (AFP applications only)

ApplGroup ResGet: Name(ABINVRX1) Agid(5027) NodeName(-CACHE-) Nid(0) Server(-LOCAL-) Time(0.066)

• Message 90

BULK DOCUMENT RETRIEVAL
Application Group   Agid   Flds->Handle
-------------------------------------------------------------------
CHECKS              5252   ->4FAAA,0,8154,0,68928,0x4E,0x4F,0,4,0
                           ->4FAAA,8154,16464,0,68928,0x4E,0x4F,0,4,0
                           ->4FAAA,24618,12483,0,68928,0x4E,0x4F,0,4,0
                           ->4FAAA,37101,9862,0,68928,0x4E,0x4F,0,4,0
                           ->4FAAA,46963,4669,0,68928,0x4E,0x4F,0,4,0
                           ->4FAAA,51632,3590,0,68928,0x4E,0x4F,0,4,0

• Message 89 (CD-ROM Creation Manifest)

CD-ROM Volume AOD00000002
Produced on Friday, December 16, 2005 at 10:52:18 Eastern Standard Time by DBRYANT

COPIES     1
USER       cdrom
PASSWORD   cdrom
FOLDERS    IBM Order Documentation
           Generic Indexer Images
           Invoice
           CHECKS
           ABINVRX1
           AFPLO
           JOBLOGLO


15.8.2 Burning the disk image to the optical media
Use the CD-ROM authoring software of your choice to burn the contents of the staging drive to a CD or DVD. The staging drive is specified in the arsmstr32.ini file. Popular CD-ROM authoring software includes Roxio Ez-CD Creator, Nero, and Stomp.

15.8.3 Accessing the CD image from disk
To access the CD-ROM image from the local disk drive:
1. Open the OnDemand client.
2. Click Update Servers.
3. In the Update Servers window (Figure 15-23), add a local server.

Figure 15-23 Accessing a CD image from disk: updating servers

4. Log on with the user ID and password specified when the CD-ROM image was created (Figure 15-24).

Figure 15-24 Accessing CD image from disk: log in



5. Use the OnDemand client to access the data in the usual way (Figure 15-25).

Figure 15-25 Accessing CD image from disk: opening a folder

15.8.4 Accessing the CD image from the CD-ROM
To access the CD-ROM image from the CD-ROM drive:
1. Run the setup program from the CD: x:\client\windows\win32.
2. Follow the prompts, which are similar to the regular OnDemand client installation. All languages from the original OnDemand client installation are available. IBM OnDemand CDROM is added to the Windows Start → All Programs menu. To start it, select Start → All Programs → IBM OnDemand32 CDROM → OnDemand32 English.
3. Log on with the user ID and password specified when the CD-ROM image was created (Figure 15-26).

Figure 15-26 Accessing CD image from CD-ROM: logging in



4. Open a folder and use the OnDemand client to access the data in the usual way (Figure 15-27).

Figure 15-27 Accessing CD image from CD-ROM: opening a folder

A few differences are noted here:
• The logical AND/OR radio buttons are not displayed.
• The password cannot be changed.
• The annotations and named queries are stored on the user's disk drive, not on the CD-ROM.

15.9 OnDemand Production Data Distribution
The OnDemand Production Data Distribution (PDD) feature is an optional feature of OnDemand that you can use to distribute information to customers and other people inside and outside your company. To access the information, users mount the CD-ROM that has the OnDemand data and start an OnDemand client program that is included on the CD-ROM. Authorized users can view, reprint, or fax documents that you distribute on the CD-ROM.

15.9.1 Why it is needed
The PDD feature is designed to support high-volume, batch processing of input files and documents and the production of multiple copies of a distribution image. The usage of the PDD CD-ROM is similar to that of the ad-hoc CD-ROM feature. The PDD feature complements the existing OnDemand recordable CD-ROM feature, which is designed for low-volume, ad-hoc building of CD-ROMs initiated by a user with the OnDemand client programs. The PDD process is initiated from the OnDemand server instead of the client.



15.9.2 How it works
PDD uses the Perfect Image Producer system (Rimage system) by Rimage Corporation to produce an ISO-9660 format disc image and to control the CD recorders, disc transporter, and CD label printer. The Rimage system includes the recorders, transporter, and printer in one enclosure. It also comes installed with the premastering, recording, and printing software. The content of a distribution image is determined by the system administrator via distribution files and distribution groups. The system administrator can design a meaningful label for the CD-ROM, because the CD-ROM image produced by the PDD program can be printed with a customized color label using the Rimage system. PDD is a services offering. For more information about Production Data Distribution, contact your local IBM marketing representative.

15.10 Customizing the About window
The About window is displayed when the OnDemand client is first started and can be displayed later by selecting Help → About OnDemand. The About window can be customized with user-specific text and graphics. Up to eight lines of text can be added, as well as a bitmap and text for the title bar. The reasons to customize the About window include providing:

• Information such as telephone numbers and names of Customer Support for your company
• A seamless look to the products that are used by your company

The About window can be customized to contain company-specific information, such as the company name and company logo. The information that is displayed in the About window is obtained from a text file named product.inf. The file can be created using a text editor such as Notepad. The product.inf file must be located in the OnDemand installation directory. The default installation directory is C:\Program Files\IBM\OnDemand32. Example 15-4 shows the contents of a sample product.inf file.



Example 15-4 Contents of the product.inf file

[Product]
NAME=Baxter Bay Bank Archive System
LOGO_FILE=C:\Program Files\IBM\OnDemand32\baxterbaybank.bmp
ABOUT_TITLE=Baxter Bay Bank Customer Support
ABOUT_LINE1=To contact a customer support representative call:
ABOUT_LINE2=Customer Support Hotline
ABOUT_LINE3=1-888-BBB-HELP
ABOUT_LINE4=Data Processing Center
ABOUT_LINE5=1-888-552-5392
ABOUT_LINE6=For Online help view the web pages:
ABOUT_LINE7=http://www.BBB.CSHotline.com
ABOUT_LINE8=http://www.BBB.DataCenter.com

The title bar of the OnDemand main window is customized with the name Baxter Bay Bank Archive System from the NAME keyword in the product.inf file (Figure 15-28).

Figure 15-28 Customized About menu
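The product.inf file uses Windows INI syntax, so, purely as an illustration, Python's standard configparser can read a fragment like Example 15-4 to check it before deployment (the sample text below is abbreviated from that example; note that option-name lookups in configparser are case-insensitive):

```python
import configparser

# An abbreviated version of the Example 15-4 product.inf contents
sample = """[Product]
NAME=Baxter Bay Bank Archive System
ABOUT_TITLE=Baxter Bay Bank Customer Support
ABOUT_LINE1=To contact a customer support representative call:
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)
title = cfg["Product"]["NAME"]   # option lookup is case-insensitive
about = [v for k, v in cfg["Product"].items() if k.startswith("about_line")]
```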



Figure 15-29 shows the customized About window using the sample product.inf shown in Example 15-4.

Figure 15-29 Customized About window

You can also customize how long you want to display the About window through a registry setting. Refer to 15.11.4, “Displaying the OnDemand splash screen or About window” on page 514, for more details.

15.11 Modifying client behavior through the registry
Many aspects of client operation can be modified only through entries in the Windows registry. Some of the more useful registry changes are described in this section. For information about other registry changes, see IBM Content Manager OnDemand - Windows Client Customization Guide and Reference, SC27-0837. All of the registry entries are created in the key HKEY_CURRENT_USER\Software\IBM\OnDemand32\Client\Preferences.



15.11.1 Single-selection hit list
Problem: The Document List (also known as the hit list) in the Folder window allows multiple documents (rows) to be selected simultaneously. This permits several documents to be retrieved with a single click of the View All Selected button. An administrator might want to remove this ability and restrict the selection to a single document, thereby permitting only a single document to be retrieved from the server at one time. This option can be used to control the document retrieval load on the server.

Solution: In fix pack 7.1.2.4, a registry entry was added to support this requirement. Create a string value named SINGLE_SEL_DOCLIST. If it is assigned a value of 1 (Figure 15-30), the Document List allows only a single document to be selected. If a document is already selected and a new selection is made, the previously selected document is deselected.

Figure 15-30 Registry setting modification: single-selection hit list
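Rather than creating the string value by hand in the Registry Editor, an administrator can distribute the setting as a .reg import file, as described in 15.2.2. The following Python sketch generates such a file; `preferences_reg` is our own hypothetical helper, not an OnDemand tool:

```python
def preferences_reg(values: dict) -> str:
    """Build the text of a .reg import file that sets string values
    under the OnDemand client Preferences registry key."""
    key = r"HKEY_CURRENT_USER\Software\IBM\OnDemand32\Client\Preferences"
    lines = ["Windows Registry Editor Version 5.00", "", f"[{key}]"]
    lines += [f'"{name}"="{value}"' for name, value in values.items()]
    return "\n".join(lines) + "\n"

reg_text = preferences_reg({"SINGLE_SEL_DOCLIST": "1"})
# Save reg_text as, for example, single_sel.reg and import it with regedit
```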



15.11.2 Folder List Filter enhanced
Problem: The Folder List Filter is used to create a subset of the list of all folders that are available on a server. Users enter folder names and optional wildcard characters to specify the desired subset. Certain users require that a wildcard character be automatically appended to all folder names, or that the folder names be automatically converted to uppercase, when submitted to the server.

Solution: In fix pack 7.1.2.3, several registry entries were added to support this requirement (Figure 15-31).

Figure 15-31 Registry setting modification: enhanced Folder List Filter option

Create a string value named FILTER_AUTO_APPEND_WILDCARD. If assigned a value of 1 (Figure 15-32), a wildcard character is automatically appended to each folder name submitted by the Folder List Filter.

Figure 15-32 Registry setting modification: FILTER_AUTO_APPEND_WILDCARD



Create a string value named FILTER_AUTO_UPPERCASE. If assigned a value of 1 (Figure 15-33), each folder name submitted by the Folder List Filter is automatically converted to uppercase before it is sent to the server.

Figure 15-33 Registry setting modification: FILTER_AUTO_UPPERCASE

How this works in the OnDemand client
In our example, after setting both registry entries to a value of 1, entering the filter pay results in a folder list of all folders starting with PAY, as shown in Figure 15-34.

Figure 15-34 Registry setting modification: Folder List Filter changes



15.11.3 Customizing your line data background
In the past, printers often printed on green bar paper (paper that alternates green stripes with white stripes) to enable users to better view listings and large accounting reports. The green bar background can be selected in the client or be set as the default by the administrator (Figure 15-35).

Figure 15-35 Line data background customization: Green Bar Alternating Stripes



The easiest way to determine the Red Green Blue (RGB) values is to use an application such as Microsoft Paint. From the menu bar, click Colors → Edit Colors → Define Custom Colors. Move the cross hairs and color slider until you find a pleasing color (Figure 15-36). Note the RGB values and enter them in the Windows registry entry.

Figure 15-36 Editing colors

Create a string value named GREEN_BAR_COLOR. The value assigned overrides the color of the green bands of the green bar background. It must be specified as an RGB value. For example, to use light gray bars instead of green, specify 230,230,230 (Figure 15-37). The default value is 128,255,128.

Figure 15-37 Registry setting: GREEN_BAR_COLOR



Figure 15-38 shows the result of this registry entry.

Figure 15-38 Output of registry setting for GREEN_BAR_COLOR
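A malformed registry string would otherwise produce an unexpected color, so it can be worth validating the "R,G,B" format before writing it. The helper below is only an illustration (it is not part of OnDemand); it falls back to the documented default of 128,255,128 when the value is missing or invalid.

```python
# Hypothetical validator for a GREEN_BAR_COLOR-style "R,G,B" string.
# Not OnDemand code; the default of 128,255,128 comes from the documentation.

def parse_rgb(value, default=(128, 255, 128)):
    """Parse an 'R,G,B' string into a tuple, or return the default."""
    if not value:
        return default
    try:
        parts = tuple(int(p.strip()) for p in value.split(","))
    except ValueError:
        return default
    if len(parts) != 3 or any(not 0 <= p <= 255 for p in parts):
        return default
    return parts

print(parse_rgb("230,230,230"))  # (230, 230, 230) - light gray bars
print(parse_rgb(""))             # (128, 255, 128) - documented default
```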

15.11.4 Displaying the OnDemand splash screen or About window
When the OnDemand client is first started, an OnDemand splash screen or an About window is displayed for approximately two seconds. By default, the OnDemand splash screen is displayed. However, if the About window has been customized, or if the OnDemand splash screen bitmap file, ODSplash.bmp, does not exist in the OnDemand installation directory, the About window opens.

The display time for the splash screen or the About window can be lengthened or shortened by adding an entry in the Windows registry. The display time is specified in seconds; a value of zero prevents the splash screen or the About window from being displayed.

If you customized the About window to provide Customer Support information, it might be desirable to increase the display time. Alternatively, to provide a uniform look for all of the products used by a company, it might be desirable not to display the OnDemand splash screen so that the OnDemand client appears to be part of a suite of programs used by the company. For more information about customizing the About window, see 15.10, "Customizing the About window" on page 506.



Create a string value named SHOWLOGO. Assign a value of zero or more seconds. Figure 15-39 shows a registry file to set a display time of five seconds.

Figure 15-39 Registry setting: SHOWLOGO

After you create this registry entry, the OnDemand splash screen or the About window is displayed for five seconds when the OnDemand client is started.

15.12 Handling negative numbers in decimal fields
A decimal number can have a leading negative sign (-13.75) or a trailing negative sign (13.75-). The software that creates the report determines the position of the negative sign; the position does not change the meaning of the sign.

OnDemand processes a leading negative sign without any special steps in the definition of the indexer parameters. Simply define the field long enough to include the negative sign and the largest possible value the field can contain. For example, if the largest possible value is -9,999,999.99, you define 13 as the field length. In the load information, OnDemand removes the leading and trailing blanks, embedded blanks, commas, and periods. In Example 15-5, Field 3 (FIELD3) and Index 3 (INDEX3) contain the decimal numbers.

Example 15-5 Indexer parameters
FIELD3=0,72,13,(TRIGGER=3,BASE=0)
INDEX3=X'819496A495A3',FIELD3,(TYPE=GROUP,BREAK=NO) /* amount */
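The field length of 13 used for the largest possible value accounts for every character as it appears in the report, including the sign, grouping commas, and decimal point, as a quick check confirms:

```python
# Largest possible value from the example, formatted as it appears
# in the report: sign, digit groups, commas, and a decimal point.
largest = "-9,999,999.99"
print(len(largest))  # 13, which is why the field length is defined as 13
```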

By default, OnDemand cannot process a trailing negative sign. The arslog command logs error 88 in the system log with text similar to the following example:
Row 1: The value '13.75-' cannot be converted to a valid decimal number.



You can take special steps when you define the indexer parameters to enable OnDemand to process trailing negative signs. In summary, you create two fields in the indexer parameters and use them to move the negative sign from a trailing position to a leading position:
1. Define two fields. One field contains the numeric portion of the amount. The other field contains the sign portion of the amount.
2. Concatenate the two fields in the index definition, placing the sign portion first, followed by the numeric portion.
3. In the load information, remove leading and trailing blanks, embedded blanks, commas, and periods.
In our example, as shown in Figure 15-40, Field 3 contains the numeric portion, and Field 4 contains the sign portion.

Figure 15-40 Negative number capture in graphical indexer

Index 3 contains the decimal amount, with Field 4 (FIELD4), the sign portion, first, followed by Field 3 (FIELD3). Example 15-6 shows the resulting indexer parameters.

Example 15-6 Indexer parameters
FIELD3=0,73,12,(TRIGGER=3,BASE=0)
FIELD4=0,85,1,(TRIGGER=3,BASE=0)
INDEX3=X'819496A495A3',FIELD4,FIELD3,(TYPE=GROUP,BREAK=NO) /* amount */

From an OnDemand client, the document list displays the amount with a leading negative sign. See Figure 15-41.

Figure 15-41 Negative number displayed in OnDemand client
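The effect of the two-field technique can be sketched in a few lines of Python. This is only an illustration of the concatenation that INDEX3 performs; the helper names are invented and are not part of OnDemand or its indexers.

```python
# Illustrative sketch of the two-field trick from Example 15-6.
# The function names are invented; only the concatenation order
# (sign portion first, then numeric portion) comes from the example.

def split_trailing_sign(field_text):
    """Split a report value such as '  13.75-' into numeric and sign parts."""
    text = field_text.strip()
    if text.endswith("-"):
        return text[:-1], "-"    # numeric portion (FIELD3), sign portion (FIELD4)
    return text, ""              # positive amounts have an empty sign portion

def index_value(numeric_field, sign_field):
    """Concatenate the sign ahead of the number, as INDEX3 does."""
    return (sign_field + numeric_field).strip()

num, sign = split_trailing_sign("       13.75-")
print(index_value(num, sign))  # -13.75
```

The printed result matches what the document list shows: the amount with a leading negative sign.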



15.13 Message of the day
The Message of the day is an easy way to inform users about:
- Server upgrades
- Education sessions
- New functions that are available
- Special events
- Other important information that people should know

Figure 15-42 shows an example of the message of the day.

Figure 15-42 Message of the day

The content of the message file can contain a maximum of 1024 characters of text. The administrative client and the user client show the message after users log on to the server. To close the message box and continue, users click OK.

To set up the message of the day, choose one of the following options:
- For all OnDemand server platforms except Windows, set the ARS_MESSAGE_OF_THE_DAY parameter in the ARS.CFG file to the full path name of a file that contains the message that you want the client to show, for example:
  ARS_MESSAGE_OF_THE_DAY=/opt/ondemand/tmp/message.txt



- For a Windows OnDemand platform, add a String Value named ARS_MESSAGE_OF_THE_DAY in the Windows registry. Set the value to the full path name of a file that contains the message that you want the client to show. For example, see Figure 15-43.

Figure 15-43 Windows registry for Message of the day

Restart the server after you modify the message of the day information.
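Because the message file is limited to 1024 characters, a small pre-deployment check can catch an oversized file before you restart the server. The Python sketch below is only an illustration; the function is invented, and only the 1024-character limit comes from the documentation above.

```python
# Hypothetical pre-deployment check for a message-of-the-day file.
# Only the 1024-character limit is taken from the documentation.
import os
import tempfile

MAX_MESSAGE_CHARS = 1024  # documented limit for the message file

def check_message_file(path):
    """Read a message file and verify that it fits within the limit."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    if len(text) > MAX_MESSAGE_CHARS:
        raise ValueError("message is %d characters; maximum is %d"
                         % (len(text), MAX_MESSAGE_CHARS))
    return text

# Example: write a short message and validate it before deployment.
fd, path = tempfile.mkstemp(suffix=".txt")
with os.fdopen(fd, "w", encoding="utf-8") as f:
    f.write("Server maintenance Saturday 02:00-04:00 UTC.")
message = check_message_file(path)
print(len(message) <= MAX_MESSAGE_CHARS)  # True
os.remove(path)
```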



15.14 OnDemand bulletins
IBM periodically distributes e-mail bulletins with tips, techniques, announcements, and product news related to the OnDemand for iSeries product. Since many of the technical tips also apply to other OnDemand platforms, you might want to subscribe to the bulletin even if you do not work with the iSeries server. Go to the OnDemand for iSeries Support Web site at the following address:
http://www.ibm.com/software/data/ondemand/400/support.html

Search on the word bulletin. You can obtain summary bulletins from the last several years. Review them to find such valuable information as:
- Common problems and solutions
- Indexing techniques
- Client command line parameters
- Enhancements to the end-user client
- Enhancements to the administrator client
- ODWEK enhancements
- How to create an AFP overlay
- Tips on migration from Spool File Archive to Common Server
- OnDemand client upgrade considerations
- How to set up Document Audit Facility
- Tips on using query restrictions
- How to use Expression Find in the client
- How to add your own messages to the System Log
- How to display a "message of the day" to an OnDemand client user
- How to use a public named query with arsdoc to make it easier to delete individual documents or modify index values
- How to use a folder list filter in the OnDemand client
- And many more tips

If you want to subscribe to the bulletin, contact Darrell Bryant by sending e-mail to dbryant@us.ibm.com.




