DirectionIT Magazine Issue 2


www.dkl.com

February 2016

Contents

Part II: The four hallmarks of an effective CIO p.6
Enterprise Cloud Computing in the New Millennium p.14
Punk’s not dead and neither is the mainframe p.20
The New Era of Mainframe and Mobility p.28
SQL Performance Tuning Basics p.36



Letter from the Editor

With our feet firmly planted in the New Year—it is hard to believe that we are almost at the end of the first quarter—the speed at which we manage our business seems to have increased almost daily, and at an exponential rate. Though that sentiment may also be my age, I truly believe that the world around us continues to accelerate and that we continue to evolve, proving at every step that we are capable of greater and greater things. It is not unlike professional sports: the boundaries and records that are continually broken are proof that we have limitless potential. And far beyond the physical boundaries, we also have unlimited potential on the intellectual side of the equation. Every day we expand our world through technology, and we do more and more with our tools, devising new ways to reach seemingly impossible goals.

It is this constant evolution that brings us to this issue of DirectionIT Magazine. As we continue to push our limits, we also push our technology to keep pace. The idea that we follow technology is in fact incorrect; it is technology that follows and supports us in our endeavors to do more, build more, and succeed beyond our wildest dreams. We continue to evolve and invest in the technological foundations that we have built over the past half century. From the early days of the mainframe to its interconnection with the new world of mobility and beyond, we are connected to a past that presses infinitely into the future. It is this limitless determination that drives why we publish DirectionIT Magazine—a small contribution to those who seek the knowledge that will lead them from the past into a bright and successful future.

Happy New Year everyone,

Allan Zander
Editor-in-Chief / Publisher


Part II: The four hallmarks of an effective CIO

By Wayne Sadin – CIO Advisor

In Part I of “The four hallmarks of an effective CIO,” we examined two of the four areas in which CIOs must excel: Alignment and Architecture. In Part II, we will examine hallmarks three and four: Agility and Ability.


Agility

Agility is the set of processes and tools employed by IT to reduce unnecessary complexity and time to market. It’s a relatively new guiding principle for IT, and it’s a necessary counterbalance to all the forces that make traditional IT slow and cumbersome. Be aware that Agility without Architecture is Chaos (but Architecture without Agility is Bureaucracy).

Traditional IT processes evolved from engineering project management disciplines. Engineers had developed sound principles over hundreds—if not thousands—of years, and they were a good place to start when IT was young and the pace of change was slow. Building an oil refinery or a bridge takes time, involves many physical processes, and has an “all-or-nothing” focus (half a bridge is of little use). When you’re building a giant ERP system that controls inventory, orders and manufacturing, it’s a lot like building that oil refinery (and half a system is of little value).

Today, IT is less about building huge monolithic systems (sometimes called “Systems of Record”) and more about building systems that reach out and touch employees, customers, and partners (“Systems of Engagement”). Unlike, say, an order-entry system, these systems often have value even if they’re not 100% complete and 100% perfect. This evolution has come as a big shock to many long-time IT technicians. They have been used to delivering feature-complete, fully debugged systems for so long that their engineering minds might have trouble adjusting to this changed world. Even more importantly, the processes and tools used to deliver those systems were designed and tuned over the years to work a certain way: slowly and carefully.

An effective CIO understands that different applications have different cost/time/feature/quality trade-offs and creates processes and standards that allow these trade-offs to be exploited. This will seem like heresy to IT traditionalists: we have to follow the architecture, fill out the forms, gather all the signatures, perform exhaustive design studies, get the next round of approvals, and only then start coding. This “Waterfall” methodology produces high-quality outcomes, but it takes a long time and locks the project into requirements that are—deliberately—hard to modify as the world outside the project changes.


w w w .d k l .com

Whereas traditional Waterfall methods assume delivery will take months or years, Agile methods allow large projects to be broken down into small deliverables, each of which can be designed/coded/approved in days or (at most) weeks. This allows mid-course corrections to be made and for the final product to change quite a bit from inception to completion. (In fact, Agile projects don’t necessarily have a “completion.” Some never end, as delivery and maintenance blend together. And others just sort of taper off, as some initially required features are deemed unnecessary.)

The IT community has been working with Agile methodologies for the last 15 years or so, and there are many specific Agile techniques that have worked successfully for many classes of projects. This is not the time to discuss specific techniques. What’s important is that an effective CIO has Agile tools in the toolbox and an appreciation for when and how to use them.

We made a big deal about Architecture just a moment ago, so let’s discuss the seeming contradiction between Architecture (do it according to the rules) and Agility (do it now, somehow). Effective IT needs a measure of both. Architecture without Agility is stagnation; Agility without Architecture is Chaos.

One more thing: there’s a lot of talk in consulting circles lately about “Bimodal IT.” This is the notion that IT needs one “speed” (slow) to maintain the Systems of Record and another speed (fast) to maintain Systems of Engagement. They got the idea correct, but not its execution. Just as a two-speed bicycle would be hard to ride, limiting IT to fast and slow speeds fails to exploit the trade-offs among cost, time, features and quality.




Ability

Ability means IT’s responsibility to deliver, around the clock and around the world, a set of complex services to a diverse group of customers. These services include keeping the networks running (voice, data, telemetry, cellular, etc.); providing security (and its opposite, access); ensuring applications work smoothly and without undue delay; making data available (reports, queries, databases); interoperating with counterparties of many types, from financial institutions to customers to suppliers; provisioning new users, locations, applications…and on and on.

Delivering IT services is like operating a factory. There are many inputs, outputs and products. There are manufacturing processes that must be followed and MRO (Maintenance, Repair, Operations) supplies that are needed. Production lines break down and need repair, and the lines must be retooled for new products and manufacturing process upgrades. The glamorous part of the CIO’s world is innovation, organizational transformation, and industry disruption. But if the factory isn’t humming along, the glamour is all for nothing. Thomas Edison said, “Vision without Execution is Hallucination,” and that certainly applies to the CIO’s world.

There is nothing magical about operating a factory. It’s about processes, monitoring, quality and cost control. I started my career as a manufacturing engineer, so it seems pretty straightforward to me. But there are countless IT departments that have trouble mastering the skills and therefore produce substandard IT “products.”

My only advice to the CEO/Board is this: if your CIO can’t run a proper IT factory in 2015, get a new CIO. Your organization depends too much on the Ability of IT to produce results to accept less. Organizations come in many shapes and sizes, and there are many types of CIOs delivering results using a variety of techniques. As a CEO/Director you don’t have to understand the inner workings of IT to oversee it effectively. What you do need to do is ensure that your CIO is effective at four things: Alignment, Architecture, Agility and Ability.




Enterprise Cloud Computing in the New Millennium

By Neale D’Rozario

A recent study found that 70% of enterprises have either applications or infrastructure running in the cloud today—up 12% from 2012. As well, since 2012 cloud investments by large-scale enterprises with over 1,000 employees have increased by 20%—on average spending over $3M. In 2015, it was estimated that 25% of IT budgets would be allocated to cloud solutions, with the highest percentage being allocated to SaaS models. Furthermore, 70% of enterprises have at least one application or a portion of their computing infrastructure in the cloud—up from 60% of enterprises in 2012. Moreover, 20% plan to use cloud-based applications and/or computing infrastructure in the next 12 months, and 15% within one to three years.

With the growth in cloud computing, a major inhibitor has reared its ugly head: security. It continues to be the major barrier to broad-scale cloud adoption. According to a 2012 study, 70% of respondents had “Concerns about the security of cloud computing solutions”—30 points more than the next-closest challenge or barrier to implementation, “Concerns about access to information,” at 40%.



This leads to the question: how do you enjoy the benefits of cloud computing while at the same time addressing security concerns and minimizing risk? The answer is simple: the mainframe. You can enjoy the benefits of cloud computing with the mainframe. When you look at what the mainframe has to offer in terms of security, it makes sense to develop your cloud computing platform on a scalable and secure foundation—the System z modern mainframe, which delivers:

• Application security
• Data protection controls
• Hardware and application security
• Operating system integrity
• Robust security controls
• Secure architecture


Eight reasons why mainframes are relevant to today’s cloud environment

Here are eight advantages that the mainframe offers for cloud computing:

1. Centralized data

An estimated 70% of corporate production data still resides on the mainframe, meaning that private clouds residing on System z have secure access to crucial information that can, when necessary, be shared with sufficient access controls, encryption, data masking, data security, and integrity.


2. Standards and compliance

Support for industry standards, regulatory compliance, and best practices such as strong encryption, secure endpoint communications, segregation of duties, user auditing, and more.

3. Consolidated workloads

When you have optimized the virtual environment you’ll find it fairly easy to consolidate your various workloads on the mainframe while delivering any necessary isolation between virtual systems. You’ll also reduce the licensing fees incurred with distributed systems.

4. Flexible infrastructure

With System z you have support for different virtualized environments to enable cloud implementation, including the z/VM operating system running virtual servers, logical partitions (LPARs), blade servers, and hypervisors.



5. Migration

With easy migration of distributed workloads over to the mainframe virtualized environment, you reduce the number of distributed systems that need to be managed.

6. Reduced Total Cost of Ownership (TCO)

A recent IBM study found that the three-year TCO for a System z mainframe cloud can be 75% less than a third-party provider’s public cloud and as much as 49% less than an x86-based private cloud.
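To make those percentages concrete, here is a hypothetical worked example. The dollar baselines below are invented purely for illustration; they are not figures from the IBM study.

```python
# Hypothetical three-year TCO comparison using the percentages quoted above.
# Both baseline dollar figures are invented for illustration only.
public_cloud_tco = 10_000_000.0   # assumed 3-year TCO of a third-party public cloud
x86_private_tco = 6_000_000.0     # assumed 3-year TCO of an x86-based private cloud

# "75% less" than the public cloud means 25% of its cost remains.
mainframe_vs_public = public_cloud_tco * (1 - 0.75)

# "49% less" than the x86 private cloud means 51% of its cost remains.
mainframe_vs_x86 = x86_private_tco * (1 - 0.49)

print(mainframe_vs_public)  # 2500000.0
print(mainframe_vs_x86)
```

Under these assumed baselines, the mainframe cloud comes in at $2.5M versus $10M, and roughly $3.06M versus $6M—the scale of savings the study describes.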

7. Scalable support

One modern mainframe not only can run its own environment, but can also virtualize hundreds or even thousands of more pedestrian servers—all at the same time. It provides an ideal platform for big data analytics, production processing, data warehouses, and web applications while supporting millions of users with exceptional performance.


8. Security control

You have much better control when you implement a mainframe private cloud. It provides a high level of security transparency, giving you a view across the enterprise so you can automate the monitoring and analysis of potential internal and external threats. Furthermore, with mainframe clouds you can also decrease the security threats that are inherent in public clouds with open networks.


Punk’s not dead and neither is the mainframe

By Larry Poirier



Admittedly, I’m not as much a music fan as I am a fan of the Punk genre—perhaps it’s the combination of technology and Punk that makes it easy for me to draw comparisons. After all, Punk hasn’t died—for me it has simply evolved. From the early bands of the 1970s that changed the sound of New York City and the United Kingdom, to the bands that now tour the Dew Cup and X Games, taking on a new audience, the genre has influenced our social fabric, and its sound track has special meaning for every generation.

So what of the mainframe? The trajectory of mainframe technology is no different. Those who simply harken back to the history of the mainframe—dismissing its relevance as an out-of-the-running contender—have never stopped to look at its 50+ years of ever-building history. Like good old Punk rock, the mainframe isn’t dead; it, too, is here to stay, with a long history of hardcore use.

So I ask the question: should I start a Punk band called System Z? But I digress—let’s talk about the continuing health, relevance and durability of mainframe technology, based on that long history.

A powerhouse mainframe platform

Technically, the mainframe platform is unrivaled. Over the past 50 years its availability, scalability, reliability, security and performance have never been surpassed—they are far beyond what any distributed or cloud infrastructure delivers. None of these come close to what the mainframe does. That’s why:

* 92% of the top 100 banks use a mainframe
* 90% of Fortune 100s are mainframe users
* About 80% of all corporate data is managed by mainframes
* 71% of all Fortune 500 companies have their core business on the mainframe
* 23 of the world’s top 25 retailers use a mainframe
* 10 out of 10 of the top insurers have commitments to mainframe technologies
* 9 of the top 10 global life and health insurance providers process their high-volume transactions on the mainframe
* Over 225 state and local governments worldwide rely on a mainframe
* Most critical workloads in the public and private sectors continue to run almost exclusively on mainframes
* Large corporations have not moved their mainframes from center stage

Greenfield projects and the Mainframe

If you are contemplating a greenfield project, you would probably assume that a mainframe would be too expensive. Not necessarily. Consider a “non-fictional” example: the infrastructure an analyst devised for an insurance customer, pairing Oracle Siebel CRM with Oracle’s DBMS. The distributed architecture of the system meant a 204-processor Intel setup—with Oracle licenses to pay on 204 processors. Had the customer chosen IBM mainframes instead, the example suggested, it would have saved the cost of 204 Oracle licenses over four years. Further, mainframes exhibit far higher MTBF, with uptimes many times that of machines built on commodity hardware—a closer look would show that the mainframe hardware premium is not as high as often assumed.

In addition, you need to consider the cost of downtime as part of your Total Cost of Ownership (TCO). If you added the costs of downtime, then, depending of course on your downtime costs, the mainframe could well win out—definitely something to consider.



IoT and the mainframe

The Internet of Things (IoT) connects almost anything, including PCs, smartphones, tablets, smart TVs, internet media devices, smart cars, and sensors of any kind. In other words, if you have something capable of IP communications and sufficiently smart to send or receive a message, then you will be connected.

Strategy Analytics estimates “33 billion connected devices in the next six years, up from 12 billion predicted at the end of this year [2014]. That will be four for every person on the planet.”

According to Frank DeGilio, Chief Architect for Cloud, IBM Software and Technology Group, with today’s IoT “you are adding a very chaotic component. The endpoints are not known.” He states that without the mainframe, chaos would ensue. Although the mainframe plays an important role in IoT, it can’t do it alone: “The value of the System z is to take a much less controlled environment and bring sense to it.” System z brings the required processing horsepower and scalability to manage large chunks of the IoT workload.

Mobility and the mainframe

Today, individuals, businesses, and governments are immersed in mobile technologies. They are part of our daily lives: we use them to check our bank balances, to navigate to our next destination, to shop and to communicate.

With the explosion of myriad devices and innovative new technologies, there is a greater reliance on the mainframe to manage the mushrooming complexity of IT environments seamlessly. In other words, mainframes are more relevant than ever when you consider our anytime, anywhere approach to life.

Consider this: today mainframe processing happily buzzes along—in real time. The more mobile internet access there is, the more mainframe touch points: to fulfill a mobile order, a series of B2B processes are involved, and these are driven by the mainframe. With the use of self-service applications rising, and with the internet rapidly increasing the number of these mobile orders, the scalability and reliability of the mainframe is more important than ever and entrenches it firmly as a mission-critical asset.

The BMC Mainframe Research Report confirmed that the mainframe’s continued growth is being powered by today’s digital “Always On” world and by its demand for secure access to applications and data at mobile speeds and scale. With many mobile devices connecting to corporate networks, security is a huge issue. In BMC’s report, security was the largest factor for continued investment in the mainframe, with 56% responding that they see the security strengths of the mainframe platform as an advantage.


Like Punk, the Mainframe has evolved

Punk has undergone myriad mutations over the years and is as relevant today as it was in the 1970s. As we have seen, the same applies to the mainframe and, if anything, it is even more relevant and required today. Imagine the chaos if the mainframe hadn’t survived.

The mainframe was dubbed a dinosaur, its imminent extinction predicted—in the 1990s. In the twenty-first century, the mainframe has become more relevant than ever with the emergence of the internet, electronic commerce, and the enterprise-resource-planning (ERP) software packages that are required to handle critical enterprise transactions. Furthermore, the mainframe supports applications of varying ages, even when they have been upgraded many times, and they continue to work with many combinations of old, new, and emerging software. Add to that the mainframe’s long-lasting performance: once installed, mainframe systems work for many years without any major issues or downtime. That has become crucial, since the mainframe not only continues to support many mission-critical transactional applications at organizations across industries but, with the advent of Big Data technologies, also offers options for organizations to enable analytics on valuable mainframe data.

Today, the mainframe is a perfect fit. Organizations that are prepared with their IBM z/OS environment will be poised to handle high volumes of data not only efficiently and reliably, but also cost effectively. The mainframe can no longer be perceived as a dinosaur threatened with extinction, but as a modern, scalable and reliable powerhouse that can manage today’s high-transaction environment, meet the increasing demands of internet and mobile applications, and support business-critical, customer-facing applications in the retail, banking and telecommunications industries.




The New Era of Mainframe & Mobility

How proven technology is the only technology for the global mobile paradigm

By





It’s no secret—the disparaging things we hear behind closed doors and in darkened IT lunchrooms, the rumors that the mainframe is dead or, at the very least, going the way of the Dodo.

However, nothing could be further from the truth. In fact, as it turns out, the mainframe may be the only resource we have to sustain the upward drive of mobile use and the data that accompanies it.

This epic tale of the aging superhero, the mainframe, not only proves it has a new lease on life, but also proves that it was never actually in peril. In fact, quite the contrary: the computing power of the mainframe leads many to the conclusion that, without it, sustaining growth and business processes within the mobile space would be damn near impossible.

As we know, IBM triumphantly introduced its newest powerhouse in 2015: the z13, the first new mainframe in several years. What it proved, beyond performance, is that IBM not only still supports the mainframe environment, but has also placed its bets on it and wholeheartedly invested in its long-term future and success.

Furthermore, with the focus of the IBM mainframe squarely on the usual suspects—banks, insurance companies, airlines and other large organizations—the investment makes sense, as these types of organizations continue to invest billions of dollars each year into developing mobile connectivity for their respective clients. This immediately translates to more people using more apps on more phones with more data transactions, every day.

In fact, according to IBM, its z13 will not only help detect fraudulent activities in real time—mobile devices being a common target for the ethically corrupt—but the system can also handle literally billions of transactions generated each day by smartphones and other mobile devices such as tablets.

In the past 24 months we have seen mobile applications truly dominate the enterprise space. According to an industry report from the Enterprise Mobility Exchange (EME), mobility spending increased by almost 63% in one year, and as we enter 2016 that trend shows no sign of slowing down.

Now, in the latest edition of the report, EME found that 35.7% of respondents were beyond the early implementation stage of mobility solutions, with more than 60% of respondents having invested in applications over the last 12 to 18 months—substantially more than the 33.9% that invested in mobile device management (MDM) solutions.





The tipping point has passed—the need for mobility is upon us

With the mid-decade being the perceived tipping point, it’s now evident that companies need—and I can’t emphasize this enough—to be in the mobile space. In fact, no business can afford not to be.

It’s paramount that enterprises understand that this need brings an ancillary requirement: the right business infrastructure. The need to be perceived as the “new” mobile enterprise leads to the creation of mobile apps to continually reach, impress, and thus win potentially millions of consumers.

However, the real issue for all organizations isn’t whether to build mobile apps—that has been settled since at least 2010. The real question enterprises, especially those that have lagged or may be unprepared for a mobile world, must ask themselves today is what technologies a business needs to invest in to most effectively get mobile apps out the door.

In addition, as part of the mobile tipping point, most enterprises now find themselves needing to provide greatly increased access to internal and often highly sensitive corporate documents, which end users typically need to share and collaborate on from the “living in the moment” mobile perspective.

It is for this reason, and this reason alone, that the mainframe becomes the linchpin to enterprise success in the mobile space—introducing the real-world possibility of real-time fraud detection. The z13 will enable companies to analyze every transaction they process in near real time—try to calculate that on ten fingers.

Thus the mainframe becomes the only way to detect instances of fraud in banking, health insurance and other industries. Pre-z13, mainframes only enabled companies to process and analyze about half of their transactions in real time. Now, the z13 will enable real-time analytics on potentially 100 percent of transactions—that’s a lot of scan-and-pay apps buying a lot of shoes and matching outfits.

At a maximum of 5GHz, the z13’s processor is slower in terms of clock speed than the chip in the z12, but IBM says it more than compensates with other improvements. The chip has eight cores compared with six for its predecessor, and it’s manufactured on a newer 22-nanometer process, which should mean smaller, faster transistors.


The end of the world as we know it

So with the ability to process transactions and analyze them in real time, it’s onwards and upwards, right? Well... I’ve always said that the road to Three Mile Island was paved with good intentions. The technology is not what holds us back from this new world order; it’s simply the understanding of where this new world order will and can take us.

For many companies, a lack of understanding leads to a failure to embrace mobility in its truest form. Furthermore, failing to understand how it can lead to key differentiators in such a competitive marketplace leads to a failure to invest in the right mobile approaches.

Pioneers and early adopters understand that integration into legacy systems is a non-issue—data performance and optimization companies can handle that with their digital eyes closed. It’s really more of a marketing scenario: placing investments into the programs with the greatest end-user bang for the buck, the ones that make a brand what it is. After all, brand is a simple yet deadly two-sided coin: the message you market versus living up to the message you market.

The good news in all of this is that the mainframe can make your brand a winner all around.





The DB2 Performance Advisor

SQL Performance Tuning Basics

By Craig S. Mullins

When asked what is the single most important or stressful aspect of their job, DBAs typically respond “assuring optimal performance.” Indeed, a Forrester Research survey of critical DBA concerns indicates that performance and troubleshooting tops the list of most challenging DBA tasks.

With this in mind, let’s take a moment to outline the basic factors that influence the performance of DB2 applications. Even though a proper investigation of DB2 performance issues should probably begin with database design, let’s start off with a discussion of SQL because it impacts more users. Everyone who needs to access or modify data in a DB2 database will use SQL to accomplish the task.

As you write SQL statements to access DB2 data, there are certain very simple, yet important, rules to follow to encourage efficient SQL. Of course, SQL performance is a complex topic, and understanding every nuance of how SQL performs can take a lifetime. That said, adhering to just a few simple rules can put you on the right track to achieving high-performing DB2 applications.



The first rule is to always provide only the exact columns that you need to retrieve in the SELECT-list of each SQL SELECT statement. If you only need three columns, why ask for more? Every column that you request must be accessed and moved by DB2 from database storage to your program.

Most DB2 developers have heard the standard advice: “Do not use SELECT *.” This is a common standard in many shops and it is a good one… but it does not go far enough. For those who do not know, SELECT * is a shorthand means of telling DB2 to retrieve all of the columns from the table(s) being accessed. It can save some time for the SQL coder, but it is not a good idea to specify SELECT * in production applications because:

• DB2 tables may need to be changed in the future to include additional columns. SELECT * will retrieve those new columns too, and your program may not be capable of handling the additional data without time-consuming changes. If instead you specify only the columns you need, columns can be added whenever you like without impacting production applications.

• DB2 will consume additional resources for every column that you request to be returned. If the program does not need the data, it should not ask for it. Even if the program needs every column, it is better to explicitly code each column by name in the SQL statement for clarity and to avoid the previous pitfall.

So SELECT * is fine for quick and dirty queries, but using it is a bad practice for inclusion in application programs.

A second rule to keep in mind is that you should not code your SQL to ask for things you already know. This may seem to be simple advice and easy to heed, but most programmers violate this rule at one time or another. For a typical example, consider the following SQL statement:

SELECT EMPNO, LASTNAME, SALARY
FROM   EMP
WHERE  EMPNO = '000010'

At first glance this SQL looks fine. It is very simple, and with an index on the EMPNO column the query should perform well. But it is asking for information that you already know, and it should be recoded. The problem is that EMPNO is included in the SELECT-list. You already know that EMPNO will be equal to the value ‘000010’ because that is what the WHERE clause tells DB2 to do. There is no possible way for DB2 to return an employee with any other number. But with EMPNO listed in the SELECT-list, DB2 will dutifully retrieve that column too. This causes additional overhead to be incurred, thereby degrading performance. The overhead may be minimal, but if the same SQL statement is run hundreds, thousands, or even millions of times a day, then that minimal impact can add up to a significant one.
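DB2 itself is not at hand in a magazine column, so the sketch below uses Python’s built-in sqlite3 module as a stand-in; the EMP table and its values echo the sample above but are otherwise invented. The principle is the same on any SQL database: ask only for the columns your program does not already know.

```python
import sqlite3

# Stand-in for DB2: an in-memory SQLite database with a tiny EMP table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EMP (EMPNO TEXT PRIMARY KEY, LASTNAME TEXT, SALARY REAL)")
conn.execute("INSERT INTO EMP VALUES ('000010', 'HAAS', 52750.00)")

# Wasteful: asks for EMPNO even though the WHERE clause already fixes its value.
wasteful = conn.execute(
    "SELECT EMPNO, LASTNAME, SALARY FROM EMP WHERE EMPNO = '000010'"
).fetchone()

# Better: retrieve only the columns the program does not already know.
lean = conn.execute(
    "SELECT LASTNAME, SALARY FROM EMP WHERE EMPNO = '000010'"
).fetchone()

print(wasteful)  # ('000010', 'HAAS', 52750.0)
print(lean)      # ('HAAS', 52750.0)
```

Multiplied across millions of executions a day, that one dropped column is exactly the kind of overhead the rule is meant to avoid.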


Another guiding principle, which you can invoke as the third rule, is that you should access DB2 data as data in a relational database system, not as flat files. A common rookie mistake, especially for old mainframe programmers, is to not think relationally. For example, you should always use the WHERE clause to filter data in the SQL instead of bringing it into your program and filtering it with IF-THEN-ELSE statements. From a performance perspective, it is much better for DB2 to filter the data before returning it to your program, because DB2 uses additional I/O and CPU resources to obtain each row of data; the fewer rows passed to your program, the more efficient your SQL will be. So the following SQL

SELECT EMPNO, LASTNAME, SALARY
FROM   EMP
WHERE  SALARY > 50000.00;

…is better than reading all of the data without the WHERE clause and then checking each row in your program to see if the SALARY is greater than 50000.00. Of course, this is a simple example. It is typically more complex SQL statements, with multiple predicates and join clauses, that cause programmers to switch to filtering in their COBOL or Java instead of tinkering with an already complex SQL statement. But the proper approach is to filter in the SQL, not in your program.

Another way that programmers write SQL to access DB2 tables like flat files is to avoid joins. DB2 is not designed to mimic the old master-file processing tactics of QSAM files—by which I mean reading a record from a file and then using a value from that record to drive reads from another file. DB2 programmers try to mimic this type of processing using two cursors: one to read a row, and the other using a value from it to drive the next cursor. This is a recipe for poor performance. Instead, code the SQL as a join and let DB2 do the work for you.

And finally, the fourth rule to keep in mind is that you should put as much work as possible into the SQL and let DB2 optimize the access paths for you. 
With appropriate statistics and proper SQL coding, DB2 almost always will formulate more efficient access paths to access the data than you can code into your programs. Yes, the code will be more complex—and you need to factor that into your decision-making process—but from a performance perspective, let SQL do the work!
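The difference between the two-cursor, flat-file habit and a proper join can be sketched in a few lines. Again, sqlite3 stands in for DB2, and the DEPT/EMP tables and their rows are invented sample data; the point is only that both approaches return the same answer, while the join lets the database do the filtering and matching.

```python
import sqlite3

# Stand-in for DB2: two tiny tables with invented sample rows.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE DEPT (DEPTNO TEXT PRIMARY KEY, DEPTNAME TEXT);
CREATE TABLE EMP  (EMPNO TEXT PRIMARY KEY, LASTNAME TEXT, DEPTNO TEXT, SALARY REAL);
INSERT INTO DEPT VALUES ('A00', 'SPIFFY COMPUTER SERVICE');
INSERT INTO DEPT VALUES ('B01', 'PLANNING');
INSERT INTO EMP  VALUES ('000010', 'HAAS',     'A00', 52750.00);
INSERT INTO EMP  VALUES ('000020', 'THOMPSON', 'B01', 41250.00);
""")

# Flat-file style (poor): one cursor drives another, row by row,
# and the salary filter runs in the program instead of the SQL.
cursor_style = []
for deptno, deptname in conn.execute("SELECT DEPTNO, DEPTNAME FROM DEPT").fetchall():
    for lastname, salary in conn.execute(
        "SELECT LASTNAME, SALARY FROM EMP WHERE DEPTNO = ?", (deptno,)
    ):
        if salary > 50000.00:
            cursor_style.append((deptname, lastname))

# Relational style (better): one join, with the filter in the WHERE clause.
join_style = conn.execute("""
    SELECT D.DEPTNAME, E.LASTNAME
    FROM   DEPT AS D JOIN EMP AS E ON E.DEPTNO = D.DEPTNO
    WHERE  E.SALARY > 50000.00
""").fetchall()

print(join_style)  # [('SPIFFY COMPUTER SERVICE', 'HAAS')]
```

On real DB2 tables, the join version also gives the optimizer the statistics-driven freedom to choose an access path, which the hand-rolled nested loop forever denies it.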





The rules and ideas in this article can be used to create a set of guiding principles for writing SQL to access DB2 data. No, I cannot guarantee that you will not have any performance problems if you follow them, but I can unequivocally assure you that you will be minimizing self-inflicted problems. These rules, though they are not the be-all and end-all of SQL performance tuning, can set you on the right path. Additional, in-depth tuning will likely be required at some point. But following the above rules will ensure that you are not making “rookie” mistakes that can kill the performance of your DB2 applications.

