
Exhaust Note

Thought-Provoking Discussions with Automotive/Motorcycle Journalist Kevin Cameron

Excerpts from the Turbo Diesel Register


From the TDR Editor

I'm a magazine junkie. I subscribe to all types of enthusiast publications, and I've noted a trend: the final page of the magazine addresses the audience with thought-provoking commentary or a question. I have followed Kevin Cameron's writings since the '70s, when he was a columnist for Cycle magazine. As a current motorcycle columnist for Cycle World, I read his prose monthly. Kevin can make a rod bolt interesting (yes, he recently wrote about rod bolts). His writings will make you marvel at the intricacies of the mechanical world.

I contacted Kevin and sent him several TDR magazines. "Kevin, do you know anything about diesels?" I asked. His response: "I'll give it a try. Reciprocating mass is reciprocating mass." Our conversation yielded the "Exhaust Note" column. Each quarter, you'll find Kevin in the back with his comments on machinery and engineering. Thanks, Kevin.

Robert Patton
TDR Editor



Table of Contents

4   Kevin Cameron Biography
5   Diesels and Turbos
6   Winter Fuel, Power, Bearings and Combustion Temperature — Something to Think About!
8   Torque Converters
10  Torqued Off
12  Rings and Break-In
14  Engine Lube Oil
18  TDR – 80/20
20  Diesel Combustion
22  TDR – Basics
24  The Factory Knows Best – A Stock Vehicle?
26  Diesel Powered Future?
26  Where They Left Off
28  More About Oil
32  Apples-to-Apples Baseline and Overkill
34  Reasons
38  Tires and the Marketing of America
42  Diesel Developments
46  In the Toolbox
48  Racing Diesels?
50  Choices
52  Brakes
56  Burning It All Up
58  Through the Cycle
60  Diesel Politics
62  Future of Diesel in the US
64  Flame Diffusion and Your Next Diesel
67  Invisible Technology
70  It's a Drag
72  Diesel Review
74  Official Cure-Alls
77  What is a Hemi?
80  Staying in One Piece
82  Issue 48's Theme – Historical Perspective: China's Development
83  Gas to Liquid—GTL Diesel Fuel
84  Turbocharger History
86  SCR, Fuel Economy and Two Stroke Diesels
88  Toolbox Diary
90  Unlimited Energy from Carpet Fluff?
92  More Harping on an Old Tune
94  Adding Up Small Gains
96  Coffee Table Engineering – My Too-Real Experiences
98  Diesels at Sea
100 Getting It Right
102 Hitting the New Number
105 Legal Force and a Crazy Question
108 Shooting Up For Diesels?
110 Hoping
112 Diesel Alternatives – Making the Choice
114 Cummins, Chrysler, Fiat
116 GTL Revisited
118 Smoke
120 By Golly, You Don't Say!
122 On Hold
124 More Than One Way
126 Purely Academic
128 Diesels in the USA
130 Conflicting Interests
133 Simplicity and Something to Think About



Kevin Cameron Biography

Who can account for the interests of very small children? When I asked my mother about car engines, she read to me from a 1942 Britannica—"The Aero Engine," "The Automobile," and "Motorboats." She told me years later that she was sure I understood very little of the information at the time, but I insisted. In desperation she phoned a local garage, and a run-out, six-cylinder, flathead Studebaker engine was winched onto our garage floor. With borrowed tools I removed the oil pan and then stared in puzzlement at the lumps of sludged-up metal within. Nothing looked like the clean geometries shown in books. Later, my uncle and I stood at a Briggs & Stratton parts window and received the things we needed to rebuild the family lawnmower. My dad was skeptical, but my uncle said, "If it has fuel, spark, compression, and timing, physics makes it run." And it did. Faster than anyone can think it, the dumb metal repeated its cycle: intake, compression, power, exhaust.

At the university I strayed from physics, trudging up a busy avenue to Robert Bentley Publishers, where for $10 I bought Ricardo's "High Speed Internal Combustion Engines." It carried me beyond grinding valves with a suction cup on the end of a stick, and cleaning ring grooves with a broken ring. Making things run is good, but I also wanted access to ideas, to have some idea where technology was going. More books followed, on aircraft engines, on two-strokes—and A.W. Judge's classic, "High Speed Diesel Engines," with its descriptions of the fascinating Napier 'Deltic' and the turbo-compound 'Nomad' flat-12. I had no money, so motorcycles were what I could afford, and then my friends and I went racing. The books helped me navigate the prejudices of practical men. ("See, the coolant's goin' thru the engine so fast, it don't have time to pick up the heat. You gotta slow it down…," or "Them two-strokers is like a woman—sometimes they run good, an' sometimes even if you do everything the same, they go blooey.")

Years in motorcycle racing followed, building and tuning. I have the usual collection of racing stories, backed by pistons with sagged domes, connecting rods broken and twisted into limbo dancers, needle rollers flattened into little blue pancakes. In 1973 I began to occasionally write for the late CYCLE magazine about racing and motorcycle technology. And there was always more to read—for example Paul Schweitzer's slim 1949 volume "Scavenging of Two-Stroke Cycle Diesel Engines," which reveals the variety of thinking that has gone into these machines. And Schlaifer and Heron's "Development of Aircraft Engines and Fuels," which showed how close aircraft engine manufacturers came to adopting the Diesel cycle around 1930—and why.

The more I looked, the clearer it became that everything is connected. In "Liquid Rockets and Propellants" I found a chapter by F.A. Williams with the daunting title "Monodisperse Spray Deflagration." Williams has devoted his entire working life to the study of how fuel droplets burn in a surrounding oxidizing gas—the diffusion flame problem in Diesel design turns out to be directly related to combustion in rocket engines. Fuel injectors developed for two-strokes end up in gasoline direct injection (GDI) four-stroke engines. Concepts developed in attempts to control spark-ignition knock turn out to be useful in reducing Diesel emissions. There is no knowledge that is useless or irrelevant. And it's all interesting.

My family came into being and full-schedule racing became impossible. Motor-journalism took more of my time as racing took less—it turns out there aren't too many people writing about internal combustion technology. One day Robert Patton phoned. Would I like to write about Diesel engines? Yes please.

We've had the great fortune of Kevin's insight into all things mechanical since Issue 17 in the beginning of 1998, when the TDR was a healthy 108 pages.



Diesels and Turbos

I'm not sure what a regular contributor to motorcycle and snowmobile magazines is doing here in TDR, but in defense of the idea, let me plead to being a wide-ranging technology enthusiast. Once, on a trip to France kindly provided by Michelin, I visited the Pantheon which, in addition to being about to fall down, has a mysterious subterranean crypt in which all sorts of famous characters are buried. In one of the massive stone boxes were the remains of Sadi Carnot, whose work on thermodynamic cycles inspired Rudolf Diesel's attempts to develop a more efficient heat engine.

No doubt you have read of Diesel's strange disappearance in 1913, from the deck of a channel steamer operating between England and the continent. Was this an accident or suicide? Was he, as some alleged, eliminated by persons unknown, intent upon preventing his special knowledge from being applied to another nation's submarines?* The Diesel engine did have a fascination for submarine builders, for its use of low-volatility oil fuel made it all but immune from the vapor explosions that destroyed many an early sub. Most importantly, because of the Diesel's high thermal efficiency, the total weight of engine and fuel was less than that of an equally powerful geared steam turbine, then the dominant marine power system. This made Diesels a natural when Germany, limited by the post-WWI Treaty of Versailles to building vessels of no more than 10,000 tons, chose two-stroke, double-acting Diesels for its "pocket battleships" of 1931 and later.

In the automotive field, the Diesel appears overweight as compared with equivalent gasoline engines because: (a) it must be strongly constructed to withstand steady, unthrottled operation at high compression ratio (in a Diesel, the air supply is unthrottled, but the fuel is throttled); and (b) it can efficiently react only about 80% of the air charge it draws into its cylinders.

For the creative mind, a problem is just an opportunity, and the power of a Diesel engine is limited only by the air pressure in its intake, and its own physical strength; blow in twice as much air, mix it with additional fuel, and get twice as much power. The principle of the steam turbine—a high-speed jet of gas spinning a vaned wheel to develop power—suggested using the energy in internal combustion engine exhaust gas to drive a supercharger. The turbocharger was the result. The first Diesels to be turbocharged were slow-turning large two-strokes, which need large volumes of low-pressure air blown through their ports, to scavenge away exhaust gas and refill them for the next power stroke. Dr. Alfred Buchi's exhaust-driven blower was a natural for this service. Today, the most efficient prime mover in the world remains the large marine turbocharged two-stroke Diesel, typically turning 60-90 rpm, with giant cylinders measuring as large as 36-inch bore by 60-inch stroke. These monsters can convert 55% of the fuel's available energy into horsepower at the propeller shaft. Compare that with the 20-25% efficiency of the spark-ignition auto engine, or the 30-35% of high-speed Diesels. Economies of scale!

Research begun at the end of WWI by Sanford Moss and others led to evaluation of turbochargers as a means of improving the altitude capabilities of aircraft. Gear-driven superchargers had to spin faster at higher altitudes, requiring complex multi-speed drives. Turbos had no such problem, and many thousands were built for American combat aircraft in WWII, most notably for the B-17, B-24, and B-29 bombers, plus high-altitude fighters such as the P-47. In the aircraft case, the turbo continued to evolve after the piston engine had, so to speak, dried up and dropped off, leaving us with the aircraft gas turbine, which is just a fancy turbocharger that generates its own hot gas and creates a propulsive jet.
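Those efficiency percentages can be turned into something more concrete: fuel burned per unit of shaft work. A minimal sketch in Python, assuming a lower heating value of roughly 42.8 MJ/kg (a round figure of my choosing, not one from the column; gasoline is within a few percent of it, so one number serves the comparison):

# Convert thermal efficiency into fuel burned per kilowatt-hour of shaft work.
# Assumption: lower heating value ~42.8 MJ/kg; round figures only.

LHV_J_PER_KG = 42.8e6  # assumed lower heating value, J/kg

def fuel_g_per_kwh(thermal_efficiency):
    """Grams of fuel burned per kilowatt-hour of shaft work."""
    joules_per_kwh = 3.6e6
    kg_per_joule = 1.0 / (thermal_efficiency * LHV_J_PER_KG)
    return kg_per_joule * joules_per_kwh * 1000.0

for label, eta in [("spark-ignition auto engine", 0.25),
                   ("high-speed diesel", 0.33),
                   ("large marine two-stroke diesel", 0.55)]:
    print(f"{label:32s} ~{eta:.0%} efficient -> {fuel_g_per_kwh(eta):.0f} g/kWh")

The lower the thermal efficiency, the more fuel every kilowatt-hour of work costs, which is the Diesel's whole argument.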


Metals developed for these tough applications have made possible the present-day automotive turbocharger. The power-developing expansion process in its turbine is mirror-imaged by the power-consuming compression process in its compressor. In the turbine, exhaust gas is led into a circular cavity that surrounds the turbine wheel. This causes the gas to whirl as it flows radially inward. The whirling motion of the exhaust gas is transferred to the vanes of the turbine wheel, and the gases exit from the center of the wheel, essentially stripped of their energy. Typical turbine efficiencies are in the range of 80%. On the compressor side, air enters the center of the wheel and is flung outward through radial blading, exiting the wheel moving at essentially the wheel's blade tip speed, which may be 1500 feet per second or more. In a circular housing around the compressor wheel, this high velocity is slowed down, or diffused, being converted into pressure energy that supercharges the engine's cylinders. Again, compressor efficiency is somewhere near 80% at best. Since efficiencies multiply, the typical turbocharger's overall efficiency is .80 x .80 = .64, or 60-odd percent. That is a lot better than just letting the energy dump out the exhaust pipe without doing any useful work.

Turbochargers are pumps, and as such are cousins of the rocket engine turbopumps that supply fuel and oxidizer to liquid-fueled rocket engines. Each of the Space Shuttle's main oxy-hydrogen engines has a compact high-pressure fuel turbopump, about three feet long, that develops 74,000 horsepower to inject 178 pounds of hydrogen per second at about 8000 psi. Contrast this with a typical automotive turbo, supplying a 6-liter engine with 10 pounds of boost, developing in the vicinity of 20 horsepower. Turbochargers, fascinating devices!

Turbo Diesel Register Issue 17
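The "vicinity of 20 horsepower" figure for an automotive turbo's compressor can be roughed out from the standard ideal-gas compression-work relation. A minimal sketch; the engine speed, volumetric efficiency, and inlet temperature below are assumptions for illustration, not numbers from the column:

# Rough estimate of compressor power for a 6-liter engine at ~10 psi boost.
# Assumptions (not from the article): 2500 rpm, 90% volumetric efficiency,
# ~68 F inlet air, four-stroke engine, charge heating ignored.

DISPLACEMENT_L = 6.0
RPM = 2500.0                 # assumed working speed
VOL_EFF = 0.90               # assumed volumetric efficiency
BOOST_PSI = 10.0
P_ATM_PSI = 14.7
T_INLET_K = 293.0            # ~68 F ambient air
GAMMA = 1.4                  # ratio of specific heats for air
CP = 1005.0                  # J/(kg*K) for air
COMP_EFF = 0.80              # compressor efficiency from the article

# Air the engine swallows: displacement * rpm/2 (four-stroke), at manifold density.
pressure_ratio = (P_ATM_PSI + BOOST_PSI) / P_ATM_PSI
manifold_density = 1.20 * pressure_ratio         # kg/m^3; ~1.2 at sea level, heating ignored
volume_flow_m3_s = DISPLACEMENT_L / 1000.0 * (RPM / 2.0) / 60.0 * VOL_EFF
mass_flow = volume_flow_m3_s * manifold_density  # kg/s

# Ideal-gas compression work per kg of air, divided by compressor efficiency.
work_per_kg = CP * T_INLET_K * (pressure_ratio ** ((GAMMA - 1.0) / GAMMA) - 1.0) / COMP_EFF
power_w = mass_flow * work_per_kg
print(f"Mass flow ~{mass_flow:.2f} kg/s, compressor power ~{power_w / 745.7:.0f} hp")

With these round figures the result lands in the high teens of horsepower, in line with the estimate above.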


Winter Fuel, Power, Bearings and Combustion Temperature — Something to Think About!

Elsewhere in this issue, Joe Donnelly refers to chemistry as part of the reason for lighter diesel fuel use in winter. On a per-gallon basis, Diesel fuel contains more energy than motor gasoline, and heavier fuels usually contain more energy per gallon. Winter diesel fuel has to be lighter to resist waxing, so it contains less energy and mileage suffers. Back in the late 1980s, however, designers of Formula One racing cars got interested in this relationship. Since they had already pushed engine rpm, compression ratio, and breathing ability, they decided to push chemistry equally hard. If, they reasoned, it were possible to get more energy from each cubic foot of fuel-air mixture, their engines would make more power.

As Mr. Donnelly notes, nature's petroleum is backward for our purposes; for the lighter molecules, volatile enough to constitute gasoline, nature has supplied a structure of straight carbon chains that are susceptible to the detonation, or knock, that limits compression ratio in spark-ignition engines. For the heavier molecules found in the less-volatile compression-ignition fuels, nature has supplied branched chains and ring structures that would be highly effective in fighting knock—but this is the very opposite of what we need for good diesel ignition. To make diesel fuel ignite promptly, we need simple molecular structures that are easily knocked to pieces by the heat of compression—pieces that then combine with atmospheric oxygen to begin combustion. This property, almost the reverse of octane number, is the diesel fuel's cetane rating, a measure of ease of ignition. What the Formula One chemists wanted was stuff volatile enough to form a mixture in their spark-ignition engines, with an adequate octane number, but with the higher energy of compression ignition fuels. This turned out to be a
class of compounds called dienes, those containing two double carbon bonds. The fuels containing such compounds weren't really gasoline, because they were laboratory creations, but they did allow a useful power boost. If you watched televised races in the 1991 period, you'll have seen the refueling crews, dressed head-to-toe in spaceman suits to protect them from the slightly carcinogenic, diene-based race fuel, a weird cousin of the stuff you put in your truck's tank.

I recently towed a heavy horse trailer many miles behind a spark-ignition-powered pickup, and I found myself wishing, in the hills of New Hampshire, that putting my foot into it would produce something more reassuring than weak acceleration and louder knocking. Like the thin, between-the-teeth music of a turbo spooling up, effectively making the engine bigger by blowing a bigger engine's-worth of air into it.

An engine's power is proportional to its rpm, times its stroke-averaged combustion pressure, times its displacement. Rpm is really just how often you perform the power-producing cycle, combustion pressure varies with how much air you get into the cylinders, and displacement is the size of the hall where this dance is held. Back when the first fuel crisis hit, makers of heavy truck diesel engines decided to improve fuel economy by reducing rpm from 2200 to about 1800. Since a diesel uses only about 80% of its air charge, that meant engines needed either more displacement (undesirable increased weight and bulk) or more combustion pressure. The combustion pressure option was the path taken, and the turbocharger was the tool that made it possible. All these engines use so-called plain bearings, consisting of thin shells plated with bearing metal, running against smooth cylindrical journals ground on the crankshaft, with oil between to carry
the load. A plain bearing's load-carrying ability does not come from the pressure of the oil pump, but from the viscosity of the oil, combined with the motion of the crankshaft. Viscosity is liquid friction, and as the crank turns, viscosity allows the crank to sweep oil from the unloaded side of the bearing, around to the loaded side. The unloaded side of the bearing is constantly kept full of oil by the oil pump. Oil is constantly squeezed from the sides of the bearings by the load, but crank rotation and viscosity always sweep more in to replace it. This action results in the fortunate ability to carry thousands of pounds of load per projected square inch of bearing area. As a plain bearing is loaded more heavily, its journal is pushed off-center more and more, and the minimum oil-film thickness in the loaded zone decreases. Friction rises—but not as fast as the load. The result is that the bearing is most efficient at the instant before the load crushes the oil film to zero and the bearing fails. On the other hand, friction rises as the square of rpm. These two facts being so, it is more efficient to build a high-pressure turbo engine operating at a lower rpm, rather than a lower-pressure, non-turbo (or 'atmospheric') engine running at a higher rpm. Diesel engines can be made to run at much higher rpm—off-shore powerboat racing diesels regularly turn 6000 revs—but in this game, fuel economy is less important than brute horsepower. But over the road, fuel economy is the diesel engine's reason to exist, so low rpm and high combustion pressure are the winning combination.

The exhaust pollutants called nitrogen oxides result from high combustion temperature, but a diesel always burns its fuel in the presence of excess air (lean burn). This ought to reduce peak temperature. What gives? The cause of the nitrogen problem is called 'sheath burning.' As a fuel droplet cluster evaporates in the combustion chamber,
it is very fuel-rich at its center, very lean at its outer edges. Somewhere between, there is a layer (sheath) where the mixture is chemically correct, and the combustion flame seeks out this region because it burns fastest here. This peak-temperature flame region is where most of the nitrogen oxide is produced. The problem is attacked by more thorough fuel-air mixing—developing fuel injectors that can deliver finer sprays that are still capable of deeply penetrating the dense compressed air in the cylinder. This is the reason for those high injection pressures up around 20,000 psi. High-powered rifles, which may be fired at most a few thousand times in their lifetimes, develop peak pressures only 2-3 times greater than this, while an injection pump must last through a hundred million cycles!

Turbo Diesel Register Issue 18
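Returning to the power relation stated earlier in this column, that power is proportional to rpm times stroke-averaged combustion pressure times displacement, it is easy to see why lowering rated rpm forces combustion pressure up. A minimal sketch using the standard four-stroke BMEP relation; the 360 cubic inch displacement and 300 hp target are assumptions chosen for illustration:

# Four-stroke relation: hp = BMEP(psi) * displacement(in^3) * rpm / 792,000
# (work per cycle = BMEP * displacement; one power stroke per two revolutions;
#  792,000 folds in inches-to-feet and 33,000 ft-lb/min per horsepower).

def four_stroke_hp(bmep_psi, displacement_cu_in, rpm):
    return bmep_psi * displacement_cu_in * rpm / 792_000.0

DISPLACEMENT = 360.0   # cubic inches; an assumed truck-sized displacement
TARGET_HP = 300.0      # an assumed power target

# BMEP needed to hold the same power at the old and the newer rated speeds:
for rpm in (2200.0, 1800.0):
    bmep = TARGET_HP * 792_000.0 / (DISPLACEMENT * rpm)
    print(f"{rpm:.0f} rpm: needs ~{bmep:.0f} psi BMEP "
          f"(check: {four_stroke_hp(bmep, DISPLACEMENT, rpm):.0f} hp)")

Holding power while dropping from 2200 to 1800 rpm pushes the required mean pressure up by better than 20 percent, and that extra pressure is what the turbocharger supplies.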



Torque Converters

Your truck's turbocharger may not be the only turbo-machinery on board. If you are running an automatic transmission, its torque converter is another application of similar principles. A torque converter is a specialized kind of fluid coupling that can convert an input at lower torque and higher speed into an output at higher torque and lower speed. It is, in effect, a kind of continuously variable transmission using fluid flow rather than gears and other mechanical elements. Because the converter itself cannot change speed over a very wide range, it is coupled to an automatic gearbox whose several speeds supply the necessary extra range. The result is a system able to keep the engine at an efficient rpm over the vehicle's whole speed range.

The converter looks like a big sheet-metal doughnut, bolted to the engine's flywheel, with a shaft coming out of its center. Inside are an input impeller, face-to-face with an output turbine. There is also a third element—the stator. Impeller and turbine are equipped with radial vanes. The engine spins the impeller, which centrifugally throws oil (the entire unit is kept full of ATF, or automatic transmission fluid, at all times) outward, then into and against the vanes of the output turbine. The turbine's shaft drives the load.

There are analogies to a turbocharger. The converter's impeller is like the turbo's centrifugal compressor. Instead of air, oil enters its vanes at their smallest diameter and is flung outward by them, gaining kinetic energy. This outflow of fast-moving oil drives the turbine. A torque converter's turbine is a radial-inflow machine just like the exhaust section of a turbo. In the turbo, exhaust gas is ducted to whirl around the outside of the turbine wheel. As it flows radially inward, the whirling gas strikes the turbine blades. Decelerating against these blades converts the kinetic energy of the gas into pressure. This pressure drives the blades around, delivering power to the turbine shaft.

In the torque converter, power comes from the whirling oil being flung out by the adjacent input impeller. As this oil enters the inlet face of the turbine, its kinetic energy is converted into pressure against the turbine's blades. This creates the torque that spins the turbine. The oil flows inward through the turbine's radial blading and emerges at a smaller diameter, and with less energy. It is there redirected to flow back toward the impeller again. The oil continuously makes this loop from impeller to turbine and around again.

So far, we're just describing a simple fluid coupling. It's useful because it can allow an engine to idle while the output turbine sits still, yet can make a strong connection as the engine speeds up. But there's a problem with this simple device. Fluid couplings are efficient only when the difference between impeller and turbine rpm is small. As the impeller speeds up and the rpm difference rises, efficiency falls. This is because the low-energy oil emerging from the turbine is rotating much more slowly than the vanes of the impeller it is trying to enter. The result is violent fluid shear, turbulence, and reduced efficiency. This means a fluid coupling is a poor device for accelerating a vehicle from a stop.

The stator of a true torque converter fixes this. It is a ring of curved vanes, located between the turbine's outflow region and the impeller's inflow region. These stator vanes turn the oil emerging from the turbine, so that it now rotates in the same direction as the impeller. With this sideways kick from the stator, the oil enters the impeller without all that shear and turbulence. Now coupling efficiency is high even when there is a large difference between impeller and turbine rpm.

Because of its efficiency, we can examine the torque converter from a conservation-of-energy standpoint. Energy in must equal energy out. Let's say our engine is spinning the impeller at 2000 rpm, putting in 400 pounds-feet of torque. Let's say the turbine is turning
only 1000 rpm because the vehicle is still accelerating. In order for energy to be conserved, rpm x torque in must equal rpm x torque out. This gives us 400 x 2000 = torque out x 1000. Output torque must therefore equal 800 pounds-feet. Real torque converters do have some losses because the ATF in them, although very light-bodied, does have some viscosity (fluid friction). Also, despite the best possible design of blade shapes, there is still some turbulence in the converter's internal flows.

This doubling of torque is not something for nothing—it is just using hydraulics instead of gears. If you mechanically geared the engine down at a ratio of 2:1, you would certainly expect the output to be 800 pounds-feet of torque at 1000 rpm (minus "nature's toll" in the form of a couple of percent friction loss). But how does the impeller, turning 2000 rpm, with 400 pounds-feet of torque, produce 800 pounds-feet of torque in a turbine of the very same diameter? This torque is the result of kinetic energy coming from the impeller—the high-energy oil being flung out of it, against the turbine's blades. Stop thinking of it as an impeller and think of it instead as a powerful firehose, producing a jet of fast-moving oil. If we increase the speed of the liquid shooting from the firehose, won't we expect to get increased torque on the turbine? The impeller is just a pump that throws oil at the turbine. The faster the impeller spins, the more oil it throws, at a higher velocity, at the turbine. The result is greater turbine torque.

How much torque multiplication can a torque converter produce? For typical converters, the answer is in the range of 2 to 2.5. As the vehicle accelerates and turbine and impeller speeds come together, torque multiplication drops until, at cruise, there is only minor slippage and no torque multiplication. The device ceases to be a torque converter and becomes a simple fluid coupling again. To make this possible,
the stator vane ring is mounted on a one-way clutch. In torque-converter mode, the torque on the stator kicks it back against this one-way clutch, locking it so it can do its job of turning the flow emerging from the turbine. But as turbine speed comes up near impeller speed, torque against the stator drops and then reverses, and the stator turns with the rest of the assembly. The torque converter has become a simple fluid coupling. To save fuel, torque converters are now made with a lock-up clutch, hydraulically operated and (usually) electronically controlled. Under non-acceleration conditions, no torque multiplication is needed, so the impeller and turbine are locked together, eliminating the small slippage and loss that would otherwise occur.
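The conservation-of-energy bookkeeping above boils down to one line of arithmetic. A minimal sketch of the ideal, lossless case described in the column:

# Ideal torque converter: power in = power out, so
#   torque_in * rpm_in = torque_out * rpm_out
# Real converters lose a little to fluid friction and turbulence.

def output_torque(torque_in_lbft, rpm_in, rpm_out):
    """Output torque of an ideal (lossless) converter."""
    return torque_in_lbft * rpm_in / rpm_out

# The example from the column: 400 lb-ft in at 2000 rpm, turbine at 1000 rpm.
print(output_torque(400.0, 2000.0, 1000.0), "lb-ft out")   # -> 800.0 lb-ft

# As turbine speed approaches impeller speed, multiplication fades toward 1:1.
for turbine_rpm in (1000, 1500, 1900, 2000):
    print(f"turbine at {turbine_rpm} rpm -> {output_torque(400, 2000, turbine_rpm):.0f} lb-ft")

Real units give up a few percent to fluid friction and turbulence, and their useful multiplication tops out around the 2 to 2.5 noted above.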

Big and Little

Torque converters are sized to the torque they must transmit. At any given input rpm, the bigger the converter, the higher the velocity of oil coming from its impeller will be, and the greater the area of engagement between impeller and turbine faces. This gives the bigger converter its greater torque capacity. Drag racers and fans know that powerful dragsters in automatic transmission classes may use very small converters. To launch from the starting line, the engine must turn at a speed at which it makes high torque, and in a race engine, this can be 5000 or more rpm. Converters for this application are rated in terms of "stall speed"—the rpm at which they will hold the engine, on full throttle, with the brakes on. Even a "little" 9-inch converter can do the job in some of these applications, because at 8,000-10,000 rpm this small converter will transmit enormous torque. In this case, torque capacity is coming from rpm rather than from converter diameter. At the usual rpm of Diesel engines, adequate converter outflow velocity has to come from diameter rather than from rpm.

History Lesson

Both the fluid coupling and the torque converter were invented before WWI by Hermann Foettinger, an electrical engineer working at Vulkanwerke, a shipyard in Hamburg, Germany. As always, it's fascinating to see how often innovations come from outsiders. Despite creative work by many people in the 1920s and '30s, successful large-scale application of the torque converter to vehicle drive had to wait for WWII and the men in GM's Product Study Group Number Three. Early US tanks were made especially vulnerable by having to pause to shift gears, and by the smoke puffs emitted during shifting. The engineers in PSG#3 quickly combined an industrial torque converter with the automatic-shift geartrain from an early GM Hydramatic transmission (which then used only a fluid coupling), and in six weeks they had machined castings and a dyno-ready prototype. Many variations on the basic torque converter have since been built. Some use multiple stator rings or variable-pitch stator blades to broaden the range of torque multiplication. Today, a major goal of torque converter transmission design is to hold engines near their most fuel-efficient rpm as road load changes. The simple 2- and 3-speed designs of the past have given way to the 4- and 5-speed gear units of today, driven by lockup-type converters, and controlled by ever more complex (often electronic) controls.

Anyone wishing to read more about the development of automotive transmissions will enjoy the book "Changing Gears," by Philip G. Gott. It is published by the Society of Automotive Engineers in Warrendale, PA. The SAE publications order line is 412-776-4970.

Turbo Diesel Register Issue 19



Torqued Off

You've finished assembling your cylinder head and have set it in place on a new head gasket. Any alignment dowels are properly in place. You run the fasteners down snug, then lightly seat each one with a short-handled ratchet. You reach for the torque wrench, ready to start tightening. Have you ever thought about what is really going on here?

As you tighten the nut on a stud, the nut tries to pull the stud up through the cylinder head, even after the gasket has fully compressed. As you turn a bolt, the head stays where it is, but the threads pull deeper into the block. Something's gotta give here; what is happening is that the stud or bolt shank is stretching as we torque it up—stretching like a spring. And that is exactly what bolts and studs are—powerful springs that are tensioned to hold parts together. Threads are simply a convenient way to tension those springs. The rule of thumb for head gasket clamping force is that it should be four times the force of peak pressure during combustion. If, for example, that peak pressure is 1500 psi in a four-inch cylinder, then we have 1500 pounds per square inch times the bore area of about 12.6 square inches, for a total peak combustion force of about 19,000 pounds. Four times this—roughly 75,000 pounds—is what we have to get from the head bolts or studs around the cylinder, in order to ensure a durable seal. If there are five bolts around each cylinder, that is a tension of approximately 15,000 pounds from each bolt. Therefore the main thing to remember as you tighten bolts or nuts is that you are establishing a particular, necessary tension in each one. This tension is what holds machines together.

This tension is the direct result of how much the material in the fastener (our spring) is stretched. Every material has a sort of spring constant, called Young's modulus of elasticity. It relates stretch to tension. For steel, this modulus has a value of about thirty million psi, and it doesn't vary much with alloy composition or heat treatment. Young's modulus has nothing
to do with the force required to break or permanently stretch a material—it only tells us how much stretch produces how much tension. As you tighten a bolt, it becomes longer. As you loosen it, it becomes shorter again, resuming its original length when all stress has been removed. This kind of stretch is called elastic deformation. If you deliberately tighten a bolt until you feel that dreadful loosening of the wrench in your hand, you will find, when the bolt is removed and its length remeasured, that it has permanently stretched. Look closely and you will see that some part of the bolt has "necked down" to a smaller cross-section. This is why the wrench loosened—the material was stretched beyond its elastic limit, and has begun to deform plastically. As the bolt necks down, it loses tension.

The strength of a fastener is just a measure of how hard we have to pull on it before it stretches permanently. Strength is therefore a measure of how far up the material's elastic range extends—how much tension we can get out of it before it begins to neck down or actually breaks. The rigidity of the material is its spring constant while it is operating in its elastic range. Strength is the outer limit of the material's elastic behavior. If we tighten a fastener enough to drive it past its elastic limit, it starts to yield, beginning to neck down, lose cross-sectional area, and lose tension. This is why fastener-tightening torques are serious business, not a testosterone challenge. Recommended installation torques are determined to let you use most of the fastener's strength, leaving some margin of safety before permanent stretch begins.

My favorite story concerns a hotrodder who was having trouble with high-strength studs breaking on trick Summers Brothers axles he was using. Over the phone, the company engineer verified that the complainer was using a
torque wrench, and did know the correct torque value. So far, so good, but the breakage continued. Eventually, the company sent a rep to figure out the problem. "Let me see you torque up the studs—just like you always do," requested the rep. The racer ran the nuts onto the studs, seated them, and began torqueing, using an impressive, name-brand clicker torque wrench. Smoothly, he swept the wrench around until it just clicked. Then he turned the nut another quarter-turn. "Stop!" shouted the rep. "What, in the name of heaven, are you doing? You're supposed to stop when the wrench clicks!" "Well," replied the racer, "I don't want 'em to come loose." By over-torqueing those studs, the well-meaning but ignorant racer was driving the material past its elastic range, permanently stretching the studs. Weakened in this way, they were breaking in service. As soon as the racer torqued a fresh set to the correct value, he had no more breakage. This story is amusing, but people still make this mistake too often—losing, not gaining, installation tension by adding more torque.

How do we know how much tension we are putting on a fastener? In some applications, it can be measured directly. For the con-rod bolts in some engines, the instructions call out a micrometer measurement tensioning technique. Measure the fastener's length before installation, then torque it until it measures a certain number of thousandths of an inch longer. This is simple and accurate because it is based on Young's modulus, which relates stress (tension) to strain (the amount the bolt is stretched). But you can't use this lovely method when one end of the fastener is threaded deep into your engine block. In that case you must rely on a torque wrench. What
does it measure? It measures the force required to stretch the bolt or stud, plus the friction in the threads and against the washer. How much of the total torque value is from stretch, how much from friction? There is no way to know exactly, but industry assumes a standard amount of friction, based upon standardized conditions when the fastener is torqued. This means new, fresh parts, free from rust and grit, and it usually means with lightly oiled threads - not dry. Under these standard conditions, the recommended installation torque produces adequate fastener tension to do the desired job.
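Both the clamping-force rule of thumb and the stretch-measurement method come down to simple arithmetic. A minimal sketch using the figures from this column; the half-inch shank diameter and four inches of stretched length describe a hypothetical bolt chosen for illustration, not any particular engine's hardware:

import math

# Rule of thumb from the column: gasket clamping force = 4 x peak combustion force.
BORE_IN = 4.0
PEAK_PRESSURE_PSI = 1500.0
BOLTS_PER_CYLINDER = 5

bore_area = math.pi * (BORE_IN / 2.0) ** 2               # ~12.6 in^2
peak_force = PEAK_PRESSURE_PSI * bore_area               # ~19,000 lb
tension_per_bolt = 4.0 * peak_force / BOLTS_PER_CYLINDER # ~15,000 lb each

# Young's modulus relates that tension to stretch: stretch = F * L / (A * E).
# Hypothetical bolt: 1/2-inch shank diameter, 4 inches of stretched length.
E_STEEL_PSI = 30.0e6
shank_area = math.pi * (0.5 / 2.0) ** 2                  # ~0.196 in^2
stretch = tension_per_bolt * 4.0 / (shank_area * E_STEEL_PSI)

print(f"Tension per bolt: ~{tension_per_bolt:,.0f} lb")
print(f"Stretch of the hypothetical bolt: ~{stretch * 1000:.1f} thousandths of an inch")

Roughly ten thousandths of an inch of stretch is easy to resolve with an ordinary micrometer, which is why the direct-measurement method works so well when both ends of the fastener are accessible.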

But in our real world of privately owned machinery, nuts and bolts have usually been assembled at least once before (at the factory). This means that the threads of the nuts will be slightly deformed, because they are designed to do just that on assembly. Nuts are purposely made softer than bolts in order to even out stress concentrations. After such deformation, nuts produce more friction than before. Rust, grit, or lack of lubrication also lead to deviation from standard conditions.

How can you do it right and guard against assembly failures? The best way is to return to the standards used at the factory. In heavily-loaded, high-stress applications like con-rod bolts or nuts, use new fasteners every time you build. Make sure the threads are clean and put a drop of oil or assembly lube on them before assembling. In less critical applications, at least have a look at each fastener as you build, and reject those that are obviously deformed, or won't thread smoothly by hand. Clean and oil the parts. Use the torque wrench wherever the manual calls out an installation torque. If you are just beginning in mechanics, let the torque wrench teach your hand what reasonable torques feel like for the various sizes of fasteners. Trying to extract broken fasteners is no fun.

If you plan to uprate an existing engine with more turbo pressure, there is the possibility that the resulting higher combustion pressure will pop your head gasket. Find out in advance if the existing head bolts/studs will do the job; ask someone who has done it. If bigger or higher-grade fasteners are required, they are cheap insurance against a long walk.

Turbo Diesel Register Issue 20



Rings and Break-In

Engine parts are made as precisely as technology and economy permit, but the final manufacturing operation is performed by you, the owner. That is break-in. In this process, the microscopically imperfect surfaces of piston rings and cylinders, crankshaft journals and bearing shells, tappets and cam lobes, are given their final smoothing by being run together in the assembled engine. Normally, through most of the engine cycle, moving parts are separated by complete oil films or by a combination of an oil film with protective surface layers provided by anti-wear additives in the engine oil. The minimum oil film thickness under load in a crankshaft bearing can be as little as 1.5 microns, which is .00006 inch. As manufactured, neither the bearing shells nor the crank journals are anything like that smooth. Under magnification, their surfaces are seen to consist of endless peaks and valleys, many of which are taller than the minimum oil film thickness under load. Therefore when you start up your new engine, the tallest mountains on each side of the oil film are going to hit each other. When they do, the high local pressure causes them to weld together, after which the continued motion of the parts breaks this bond, releasing wear particles into the oil film. All this wearing and plucking generates heat, and that causes the local oil viscosity to drop. That, in turn, causes the minimum oil film thickness to get even smaller, leading to more contact, welding, tearing, and heating. Continued without let-up, this can lead to seizure.

Crankshafts pretty much take care of themselves in this process, but piston rings and cylinder walls are critical to sealing and therefore to performance. The general rules for break-in have remained the same for a long time: (1) no sustained full-throttle operation, and (2) alternate cycles of fairly substantial throttle with periods of coasting or reduced power.

Rule (1) avoids generation of excess heat and wear particles that could cause damage. Rule (2) accomplishes two things. First, short periods of heavy throttle apply the pressure that is necessary to drive piston rings through both the oil and additive films to achieve the surface-to-surface contact that is necessary to break-in. [Editor's note: Cummins ReCon engines are each run on the dynamometer for a 20-minute break-in period. Five minutes of warm-up to check for leaks, followed by a progressive full-power run. New engines are only tested for leaks.] Second, getting off-throttle terminates the break-in action, preventing excess build-up of heat and wear particles. The coasting time allows the oil system to sweep the wear particles to the filter, and allows locally generated heat to flow away into the engine structure.

A special problem of our era is the failed break-in. This can be the result of "babying" during break-in—steady driving at very low speed and load. This can be made worse by high-tech anti-wear additives in modern oils. Such additives form solid metallic soap films on engine friction surfaces. These films have much lower friction than metal on metal, and can withstand several passes at high pressures like 90,000 psi before they are gouged away. The films are sacrificial – they yield at much lower force than does the underlying part – and they re-form as long as there is additive remaining in the oil. They can affect break-in by being able to carry the local load before the piston rings have worn into full contact with the cylinders. The ring, in effect, develops polished areas, and then break-in stops. For this reason, some engines are supplied with special break-in oil already in place, to be changed after a specified period. [Editor's note: Cummins engines are shipped from the factory with an initial fill of Cummins Premium Blue mineral-based motor oil.] Such oil contains less anti-wear protection, so that break-in can occur more easily. If the rings fail to break in completely, the engine will never develop its rated power
because of compression leakage, and it may use oil at an excessive rate. This is why manufacturers now advise that engines be broken in with fairly heavy throttle, alternating with coasting.

On the oil container you will find two basic pieces of information. One is the viscosity rating, such as 15W-40. The other is the American Petroleum Institute (API) category, such as CG-4. The first letter identifies the engine type—C for compression ignition, S for spark ignition. The second letter identifies a set of standards that the oil must meet. This, in effect, defines the additive package in the oil, which does things like provide anti-wear action, resist rusting, oxidation, sludging, and so on. Early API categories, such as CA, CB, etc., have relatively little in the way of additives. Recent categories have much stronger additive packages. It is the anti-wear additives, such as zinc dialkyl dithiophosphate (ZDDP – just try saying that one real fast four times), that make break-in harder than it used to be. This has led to practices that you may have heard of, such as the "dry break-in," or the use of special break-in oils. Follow the advice of the manufacturer or experienced dealer here – they are doing this every day and they know what works.

In a dry break-in, the engine is assembled with no oil on the cylinder walls or pistons, and just a dab of oil is applied to each piston skirt as it goes into the hole. Other builders recommend just a wipe of oil on each cylinder from an oily paper towel. When the engine is started, the revs are brought up to half of red-line with no load, and held there for 30 seconds. Then the engine is stopped and the oil is changed. Strange to say, this seemingly weird practice works well in some cases where nothing else does. It is evidence of how difficult the high performance of modern oils can make break-in. This is especially true of synthetic oils, which often have extra-aggressive additive packages. Many engine builders prefer that an engine be broken in on mineral oil before being switched to synthetic. [True of the Cummins engine. Do not
change to synthetics or a synthetic blend until after the engine is "settled" at approximately 10k miles.] The lighter the duty of the engine, the more difficult break-in becomes, because the average applied load is small. When you hear of people having break-in troubles, the explanation usually lies with the oil and a failure to apply heavy enough load. In former times, break-in was a gradual process taking as much as 1,000 miles, consisting mainly of painfully slow driving. This worked because the no-additive oils of the 1940s and '50s did not save rough engine surfaces from metal-to-metal contact, and rings were filed into a good fit by the cross-hatch honing of the cylinder walls. The coming of more capable oils—both mineral and synthetic—has forced engine makers to provide finer surface finishes and rounder, straighter cylinders. In effect, more of what used to be break-in is now performed in the factory, leaving less material to be removed in the first few hundred miles of use.

Turbo Diesel Register Issue 21



Engine Lube Oil

Lubrication comes in three flavors:

1) Full-film, or hydrodynamic. As a piston ring slides along the cylinder wall, or as a crankshaft journal revolves inside its bearing shells, metal-to-metal contact is completely prevented by the ability of oil to form a wedge-shaped film between the parts. Full-film lubrication depends upon the phenomenon of viscosity—oil's internal friction prevents it from being instantly squeezed out from between moving parts.

2) Boundary lubrication. When an oil film is not present, additives in the oil form protective layers on the parts. Such protective layers can preserve parts from metal-to-metal contact until full-film lubrication is restored.

3) Mixed lubrication. This is a combination of the above, which occurs during engine starting, when most oil has drained away from parts, or when parts motion is too slow to maintain full-film lubrication, as when pistons and rings move very slowly and under great pressure near top dead center.

This description makes it clear that, to lubricate under all conditions, oil must have viscosity, and it must have the ability to form protective layers on parts. A crankshaft journal is pushed slightly off-center in its bearings by the forces acting on it. The result is that the clearance space between journal and bearings is thicker on one side, thinner on the other; it forms a wedge that looks rather like an extremely thin crescent moon. On the thick side of this wedge—the unloaded side—the clearance between journal and bearing is about .003". On the thin, loaded side, the clearance is much smaller—only a few microns, maybe as little as .00006-.0001". What drives oil into the wedge, towards a local pressure that can be as high as several thousand pounds per square inch? Certainly not oil pump pressure, which never exceeds 100 psi. The driving force is in fact the motion of the crankshaft itself, combined with the internal friction—the viscosity—of the
oil. Crank rotation continuously drags oil into the wedge, with enough force to generate extremely high pressure there that supports the load. Naturally, much of this high-pressure oil escapes from the edges of the bearing, but fresh oil is being supplied to the unloaded side of the bearing clearance space by the pump, through drilled oil holes. As long as there is oil ahead of the wedge, with enough viscosity to carry it into the loaded zone, the crank journal and bearing shells will never touch. If the oil had no viscosity, if it were a completely frictionless liquid, it would instantly be squirted out of the bearing and would never support any load at all. Conversely, bearing friction is also caused by viscosity—because of oil’s internal friction, it takes power to turn the loaded bearing. A little-known and counter-intuitive fact is that plain bearings and rolling bearings have approximately equal friction losses in running engines. Rolling bearings do have lower friction at low speeds, which is why auto manufacturers are beginning to use roller tappets again in valve mechanisms. All oils lose viscosity as their temperature increases, and the slope of the viscosity-vs.-temperature curve is called the viscosity index. Two obvious requirements for any oil are (a) that its viscosity must be low enough at low temperature to permit cold-starting and (b) that at the temperature of hot engine parts, it must still retain enough viscosity to form a film thick enough to separate piston rings from cylinder walls and crank journals from bearing shells. In the old, pre-additive days, this meant searching high and low for oil base stocks with a high viscosity index (VI). Pennsylvania crude oils were good in this respect. Now, however, there are ways to alter any oil’s viscosity index with additives, to create what are called multi-viscosity or multi-grade oils. When you see a viscosity given as two numbers, such as 10W40, the oil so labeled behaves as a 10 grade at zero

degrees F (W stands for Winter), but as a 40 grade at 200 degrees F. This offers an advantage, as follows. If you filled your crankcase with a straight 10 grade, at piston-ring temperature in a warmed-up engine, this oil would have lost so much viscosity as to be unable to support the loads required of it. Piston rings, backed by combustion pressure, would squeeze this thin oil out from between themselves and cylinder walls, resulting in unacceptably rapid wear. But if you filled up with 40-grade oil, no starter devised by man could turn your cold engine in February, in Thief River Falls, MN. Therefore multi-grade oils were called into existence through chemical trickery.

They work this way. An additive consisting of long-chain polymers is devised. When cold, molecules of this polymer have little activity, and so they effectively roll up into little balls, having little influence on viscosity. But as the temperature rises, all molecules—oil and additive—have more thermal activity. The long polymers "unroll," and their long chains partially counter the oil's loss of viscosity. By the time 200 degrees is reached, the oil is no longer acting like a 10; it's acting like a 40-grade instead. The oil still loses viscosity, but because of the presence of the VI-improver, it loses less. The possible fly in the ointment is that these long polymer chains are not totally durable. They can be broken by passing through gear meshes and cam lobe-and-tappet contacts. Some polymers are stronger than others. This means that, as time passes, the VI-improver additive breaks down. At first, the Diesel community was skeptical of multi-grade oils, but today, it has been discovered that a good multi-grade oil's slower loss of viscosity as temperature rises continues to hold good even in the tough environment of the top piston ring, which may operate up near 350 degrees F. Durable multi-grade Diesel-qualified oils now exist which retain their advantages through standard oil-drain cycles.

Now a momentary digression. One aspect of design that is now exerting
pressure on lube engineering is top ring placement. When an engine fires, combustion chamber pressure rises very high, and this pressure enters the land clearance above the top ring and the piston ring crevice spaces (the small clearance above and behind each ring), carrying with it some fuel and/or partial products of combustion. As the piston descends on the power stroke, cylinder pressure falls as the combustion gas expands. High pressure remains in the ring crevice spaces, only emerging with its cargo of unburned fuel late in the power stroke. This makes a measurable contribution to unburned hydrocarbons in the exhaust, causing red lights to come on in Ann Arbor, Michigan. Anything that can be done to reduce ring land and crevice volume will therefore cut unburned HC emissions. This, in turn, tempts engineers to locate piston rings closer to the tops of pistons, and to tighten up ring crevice clearances. Both of these changes have a potential to make rings stick. First, higher temperature oxidizes and polymerizes oil into gum faster. Second, the smaller the ring clearances are made, the easier they are to clog. Lube engineers will have to live with these changes, finding ways to lubricate hotter rings and to keep them free. The oil can or bottle carries another piece of useful information, the API category. The API is the American Petroleum Institute, and they divide oils into two basic types: those compounded for spark-ignition (S) engines, and those made for compression-ignition (C) engines. Every engine oil, therefore, carries either a C or S prefix, followed by another letter that designates the particular set of standards that the oil meets. As conditions in engines become tougher (for example, as top rings are required to run hotter in turbocharged engines), oils must be compounded to handle those conditions. New tests are devised, by which oils can be qualified for these harsher conditions. Back in 1950, oils contained nothing much but oil, and you will still hear old-timers who say, “Just give me an

oil that's all oil—none of these fancy additives." Back then, there was a grain of truth in what they said. One of the first widely-used oil additives was oil-soluble detergent, added to prevent the formation of sludge and varnish on engine parts. When a sludged-up older engine was switched to the new detergent oil, the detergent action released a flood of corruption that blocked filters and even blocked oilways. This is the basis of the old-timers' objections. It is no longer valid today, because all new vehicles employ detergent oils, which prevent sludge from accumulating in the first place. It is removed with the oil and filter at the scheduled changes. Non-detergent oils are still made for applications in which temperatures are too high for the additives to survive. The typical example is lawn-care equipment, whose air-cooled cylinders, their fins usually blocked with grass clippings, run at unbelievable temperatures.

As a piston rises and falls, its velocity varies from zero at top and bottom center, to a maximum at about 78 degrees after top dead center. Although piston rings are springy, most of the force pressing them out against the cylinder wall comes from combustion gas, which enters the piston-ring crevice from above, then pushes out the rings from behind. This ensures their sealing. But if the pressure of the ring on the oil film beneath it is too great, the oil film will become too thin to completely separate the ring from the cylinder. The oil film also varies with velocity; as the piston slows near TDC, oil tends to be squeezed out from under the rings. The tallest imperfections on the ring will begin to occasionally touch those on the cylinder wall. Where they touch, the pressure is tremendous, and the result is local welding. As the piston moves on, the welds break, generating wear particles. The heat generated in this process heats the oil locally, which loses yet more viscosity, making the oil film thinner yet, leading to more contact and heat. It is a vicious cycle, often leading to scuffing. Something is needed to protect surfaces when partial contact is made.


That something is friction modifiers and anti-wear additives. Friction modifiers are molecules such as fatty acids that have an electrical affinity for surfaces and therefore form a protective layer on them. They are oily long-chain molecules, and the layer they form has considerable strength and a much lower coefficient of friction than metal-on-metal. Anti-wear additives react chemically with metal surfaces to form a layer of metallic soap. Such layers can withstand many unlubricated cycles at pressures of thousands of psi. They protect parts by being weaker than the metal under them, so that scuffing occurs, not on the parts themselves, but in the sacrificial additive layers adhering to them. When such a film is locally gouged off, it re-forms from additional additive carried in the oil. In areas of mixed lubrication (piston rings near TDC, all engine parts at cold-start, between cam lobes and tappets, etc.), these additives achieve remarkable reductions in wear and damage.

Really large concentrations of potent anti-wear additives are used in true gear oils - this is responsible for their special "stink." With the aid of such additives, many gears actually become smoother the longer they run. In high spots, where local pressure is very high, the rate of film formation and destruction is also high. Because some metal is lost with the scraped-away additive film, a polishing action results. GL-5 gear oil is a wonderful thing, and I have seen gears survive in it that had previously failed in ordinary oils. Additives work! Other types of additives prevent rust, allow oils to remain liquid at very low temperatures, and slow oil oxidation at high temperatures. Modern oil may contain as much as 20% by volume of additives, each of which has a specific and essential job to do.

A fresh controversy surrounds the latest in automotive oils (those in the S category labeled "Energy-Conserving," such as SJ). Because most auto engines are spark-ignition
and employ exhaust catalysts, the use of metallic additives in their oil has some potential for poisoning the catalyst in the same way that fuel lead does. The potent anti-wear additive ZDDP (zinc dialkyl dithiophosphate) contains zinc, and for catalyst protection, engine oil zinc content for S oils has now been limited. Oils for compression-ignition applications (those designated with the C—for compression—category) are not limited in this way, and may contain as much anti-wear additive as engineers find necessary to ensure adequate parts protection. Therefore, it is not true to say "Oil is oil" and blithely pour into your engine whatever you find on sale at the parts store or supermarket. Diesels work harder than spark-ignition engines, and they are worked harder. They need all the protection that modern oils can give. Read your vehicle's owner's manual to find the oil type specified, and use it. If there is any confusion, ask your dealer or call the manufacturer.

Turbo Diesel Register Issue 22
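One aside in this column is easy to check: the statement that piston speed peaks at about 78 degrees after top dead center. A minimal sketch of the standard slider-crank geometry; the stroke and rod length are illustrative assumptions, and the exact angle depends on the rod-to-stroke ratio:

import math

# Piston position for a slider-crank: x = r*cos(a) + sqrt(l^2 - (r*sin(a))^2),
# measured from the crank center; r = crank radius (half the stroke), l = rod length.
STROKE_IN = 4.72   # illustrative figures, roughly truck-diesel proportions
ROD_IN = 7.55
r = STROKE_IN / 2.0

def piston_speed(theta_deg, rpm=2000.0):
    """Instantaneous piston speed (inches per second) at a given crank angle."""
    w = rpm * 2.0 * math.pi / 60.0           # crank angular velocity, rad/s
    a = math.radians(theta_deg)
    # speed is the magnitude of the time derivative of piston position
    return abs(-r * math.sin(a) * w
               - (r**2 * math.sin(a) * math.cos(a) * w)
               / math.sqrt(ROD_IN**2 - (r * math.sin(a))**2))

# Find the crank angle after TDC where piston speed is greatest.
fastest = max(range(0, 181), key=piston_speed)
print(f"Piston speed peaks near {fastest} degrees after TDC")

For proportions in this range the peak falls in the mid-70s of degrees, close to the figure quoted above; a longer rod pushes it later, a shorter rod earlier.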



TDR – 80/20

Although you, the reader, may not be aware of it, all of us who write for TDR receive a certain amount of direction from our forward air controller, Robert Patton. That way, we are all more or less shooting at the same target in any given issue. In this issue, that target is one of the unwritten laws of nature, that of diminishing returns. Everyone has anecdotes relating to this, and I'll get to mine in a moment, but first, this: the central fact of the Diesel engine—its high compression ratio—is a perfect example of this rule. Internal combustion engines make power by burning a compressed mixture of fuel and air. The heat added by combustion raises the pressure of this gas mixture, and this pressure, allowed to expand against a piston or other device, performs useful work. The more we compress the air before burning fuel in it, the higher the combustion pressure that results. As a rule of thumb, this full-throttle peak pressure is about 80 times the compression ratio—a pretty high pressure, which is why Diesel engines have to be heavily built. The more we expand the burned gases after combustion, the more completely we extract their energy. These facts make high compression and expansion ratios desirable. For obvious reasons, a Diesel's compression and expansion ratios are equal to each other. The Diesel engine's desirable fuel efficiency is a direct result of its high compression ratio. Spark-ignition engines cannot use such high ratios because, on gasoline, they result in destructive knock.

Ideally, to get all the available energy in high-pressure combustion gas, we'd expand it all the way down to zero pressure, but this is impossible because the engine has to exhaust against 15 psi of atmospheric pressure. Also, it takes a certain amount of pressure just to overcome piston and piston-ring friction. For these and other reasons, there is no useful gain in expanding the gas indefinitely. Indeed, if we graph out the energy extracted from high-pressure combustion gas versus compression

ratio, we see a curve that rises steeply at first. For example, you get a big gain by going from the Model-T Ford compression ratio of 3 to 1, up to the 5 to 1 ratio of Chrysler’s “high-compression” sixes of the early 1930s. As you keep raising the compression ratio, however, the gains get smaller, and the curve rises less and less steeply. Finally, when you get up to numbers like 13 or 15, the gain from going to the next ratio higher gets pretty small. This is what the law of diminishing returns looks like! Of course, a Diesel engine won’t even start and run unless its compression ratio is high enough to heat its air charge above the temperature of fuel ignition, so that and other characteristics of the engine set a desirable minimum compression ratio. Pre-chamber Diesels, such as that in the VW Rabbit, have a lot of extra internal combustion chamber surface area through which heat is lost as the piston rises on compression. Such engines therefore need higher compression ratios, above 20 to one, just to start reliably. But there’s another effect to consider here, which redoubles the law of diminishing returns. That is overall heat loss during combustion. As we raise the compression ratio, we also raise the peak flame temperature, and that pushes heat out through the head and piston crown faster. A couple of paragraphs ago we made a curve of theoretical energy recovery versus compression ratio—and now this heat loss effect causes that curve to flatten out even faster. The result is, for either Diesel or spark-ignition (gasoline) engines, that peak efficiency comes somewhere pretty close to 17 to one compression. Bear in mind that, for the spark-ignition engine, detonation may very well prevent you from reaching that high number. There you have it. Theory suggests that maximum energy recovery ought to be associated with an infinite compression/ expansion ratio, but reality makes us stop well short of that goal. As it turns out, more than eighty percent of the


recoverable energy has been expended on the piston by the time it is only halfway down the cylinder. That’s a useful fact, because it allows us the luxury of beginning to open the exhaust valves comfortably before BDC—without significant loss of power. I mentioned anecdotes. My special interest is motorcycle racing, and as in other forms of motor sport, bike engine builders use a flow bench to develop and improve flow through intake and exhaust ports. In general, higher flow means higher power. Flow specialists spend their lives surrounded by the insistent whine of the flow blower, watching the gages expectantly to see if that last tweak pushed the numbers up. It’s as close as engineering gets to a slot machine. A number of years ago, I asked Rob Muzzy, a prominent builder, if he used the flow bench. “Are you kidding?” he replied. “You can lose your mind that way. Yeah, I use it, but only to the point of getting about 80% of the improvement I think is possible. If I spent more time, I’d be taking it from another activity that deserved it more.” Another racer friend compiled statistics from ten years of bike road racing championships. The people who won championships were not the people who won the most races—in fact, they tended to finish an average of third—but with zero DNFs and zero crashes. As some annoying person once noted, “they only award the points at the finish.” You can see the reason for this on a stopwatch at any race. Put the watch on someone who is running by himself, without a close competitor, and every lap will be exactly the same, often within a couple of hundredths of a second. This utter consistency comes from running at a pace that the racer is comfortable with. Now put the watch on the two men going for the lead and the picture changes dramatically. Because the leaders are driving each other as hard as they can, each man has to try things he’s not entirely sure of. The result is that both men make small mistakes, which cause


the lap-to-lap variation to become tenths of a second instead of hundredths. The man cruising back in third is delighted with this situation. The leaders are in danger of crashing because of the mistakes they are making, and both are pushing their engines and tires very hard. Will they crash, blow up, or just burn up their tires? The man in third gladly accepts any of the above. He has done a good day's work through his craftsmanlike conservatism—and has scored championship points. There are lots of ways to throw away the overall result by trying too hard for the details. A particular favorite of mine is the "King of the late brakers." This is the motorcycle racer who discovers that, by braking late and very hard, he can often pass other riders going into turns. This becomes his religion. When he finally gets into really fast company, his system stops working. By braking late, he enters the corner too fast, which forces him to go wide, muttering "Oops, oops, oops" under his breath. The man he passed on the way in now re-passes him on the way out. Believing utterly in his system, Late Braker does it harder at the next corner, causing him to arrive in mid-corner even hotter and more out-of-shape than before. The other man again dives under him and, while Late Braker is getting himself collected, motors away. Eventually he overdoes it and disappears into a big dust cloud. There is no way to get deprogrammed other than (a) to quit racing in disgust or (b) to see that too much of one good thing screws up all other good things. Now a final anecdote, from a completely different field—jet engines. Back when Lockheed was developing its L-1011 TriStar airliner, the engines were contracted to come from Rolls-Royce. Rolls engineers worked with might and main to make the RB-211 their best engine ever, but for reasons that escaped them, they just couldn't reach the thrust and fuel consumption they had guaranteed to Lockheed. The situation became critical, resulting in a costly corporate reorganization. A retired veteran engineer, Stanley Hooker, was

recalled to straighten out the mess. The one condition he set was that his recommendations be followed to the letter. It was agreed. RB-211 development had been broken up into sections—one for the fan, another for the low-pressure compressor, and so on. As Hooker went from department to department, he found that outstanding performance had been achieved everywhere. He also found that the outstanding output from the fan section was screwing up the flow into the lowpressure compressor, and that there were similar startling mismatches throughout the design. Accordingly, he went from department to department, giving instructions. “Just put this extra bit of blade twist here. I think we’d like a little less turning there,” and so on. Department chiefs were aghast. “He’s ruining our stage!” they objected to higher management. Hooker reminded them of their agreement and his changes were grudgingly implemented over all objections. When a whole engine with his changes was assembled and put on test, it produced 6000 pounds more thrust than it ever had before. The moral of the story? What is the point of making a system of outstanding parts, if those parts are not integrated into an outstanding whole? By making the output of each stage compatible with the needs of the next stage, Hooker raised the efficiency of the engine as a whole, even though individual stage efficiencies were thereby reduced by minor amounts. Pushing too hard in one area means neglect of the others. With only 24 hours in a day, we have to stand back, take the long view, and put the energy we have where it will do the most overall good. Turbo Diesel Register Issue 23
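As an addendum to the compression-ratio discussion at the top of this column, here is a minimal sketch (an editorial addition, not part of the original column) of what the diminishing-returns curve looks like in numbers. It leans on the textbook air-standard cycle formula as a stand-in for the real curve, plus the rule of thumb quoted above that full-throttle peak pressure runs about 80 times the compression ratio; real engines, with the heat losses described above, flatten out even sooner.

```python
# Diminishing returns of compression ratio, sketched with the ideal
# air-standard (constant-volume) cycle: efficiency = 1 - 1/r^(gamma - 1).
# Real engines fall short of these numbers (heat loss, friction, finite
# burn time), so the shape of the curve, not the values, is the point.

GAMMA = 1.4  # ratio of specific heats for air

def ideal_efficiency(r: float) -> float:
    """Air-standard cycle efficiency at compression ratio r."""
    return 1.0 - r ** (1.0 - GAMMA)

previous = None
for r in (3, 5, 8, 11, 13, 15, 17, 20, 23):
    eff = ideal_efficiency(r)
    peak_psi = 80.0 * r  # the column's rule of thumb for full-throttle peak pressure
    gain = "" if previous is None else f" (+{(eff - previous) * 100:.1f} points)"
    print(f"r = {r:>2}: ideal efficiency {eff * 100:4.1f}%, "
          f"rule-of-thumb peak ~{peak_psi:5.0f} psi{gain}")
    previous = eff
```

In this idealized picture the jump from 3:1 to 5:1 is worth about twelve percentage points, while 15:1 to 17:1 is worth less than two; add the heat-loss effect described above and the practical curve goes flat near the ratios Diesels actually use.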



Diesel Combustion When the plunger in the fuel injection pump moves, the first part of the motion pressurizes the heavy-walled line leading to the injector. Rigid as it is, the line isn’t very springy, but because it is a material object, it has some give. The next bit of plunger motion unseats the injector’s pintle, and fuel begins to accelerate as a spray, into the hot compressed air swirling in the combustion chamber. It would be nice if combustion began now, but it can’t. It can’t because liquid fuel—what the injected droplets are made of—can’t burn. It must first evaporate and its separate molecules must mix with air. Think of the injected fuel as a torrent of little liquid fastballs, pitched by the pump’s 15,000 psi, across the injector pintle. These hot pitches rocket into the hot air, being heated as they go, shedding comets’ tails of evaporated fuel. Because these fuel droplets are evaporating, and because evaporation is a cooling process (it takes energy to boil water, right?), the result is that the evaporation of injected fuel cools the surrounding air somewhat. This further delays the process of ignition. Finally the fuel droplets slow down and their temperature climbs back up from continued contact with the hot compressed air. Around each droplet, a cloud of vapor forms, very rich at the core, less rich as you move away from the droplet. Because the hot compressed air in the combustion chamber is above the fire point of the fuel, as soon as fuel vapor forms and its temperature rises enough, it ignites. Very quickly now, a flame front races along that part of the vapor that happens to have an ideal (chemicallycorrect) mixture of fuel and air. The result is a rapid pressure rise, because by now quite a bit of fuel has been injected, but there has been no combustion. This rapid pressure rise is responsible for the celebrated “Diesel Knock” that makes fuel-miserly direct-injection (DI) engines so noisy. What you are hearing is the sudden, rapid combustion of much of the fuel that has been injected into the cylinder up to the time of ignition.
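The ignition delay described above, injection and evaporation followed by sudden light-off, can be put into rough numbers. The sketch below is an editorial addition, not part of the column; it uses one widely quoted empirical fit for Diesel ignition delay (a Wolfer-type correlation, pressure in bar, temperature in kelvin) and a simple polytropic estimate of conditions near TDC. Treat every constant as illustrative rather than authoritative.

```python
import math

# Rough ignition-delay estimate for a direct-injection Diesel.
# End-of-compression temperature and pressure come from a simple polytropic
# compression; the delay comes from a Wolfer-type empirical correlation,
# tau[ms] = 3.45 * p[bar]**-1.02 * exp(2100 / T[K]).
# Every number below is an assumed, round figure, not measured data.

def compression_state(t_intake_k, p_intake_bar, compression_ratio, n_poly=1.35):
    """Temperature (K) and pressure (bar) near TDC for polytropic compression."""
    t_tdc = t_intake_k * compression_ratio ** (n_poly - 1.0)
    p_tdc = p_intake_bar * compression_ratio ** n_poly
    return t_tdc, p_tdc

def wolfer_delay_ms(p_bar, t_k):
    """Wolfer-style ignition delay in milliseconds."""
    return 3.45 * p_bar ** -1.02 * math.exp(2100.0 / t_k)

t_tdc, p_tdc = compression_state(t_intake_k=320.0, p_intake_bar=1.5,
                                 compression_ratio=17.0)
delay_ms = wolfer_delay_ms(p_tdc, t_tdc)
rpm = 2000.0
delay_deg = delay_ms / 1000.0 * (rpm / 60.0) * 360.0  # crank degrees spent waiting
print(f"Conditions near TDC: ~{t_tdc:.0f} K, ~{p_tdc:.0f} bar")
print(f"Estimated ignition delay: ~{delay_ms:.2f} ms "
      f"(~{delay_deg:.0f} crank degrees at {rpm:.0f} rpm)")
```

Something on the order of half a millisecond comes out, a handful of crank degrees at road speed; it is the fuel injected during that interval that burns almost at once and makes the knock described above.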

IDI, or indirect-injection Diesels are those that inject their fuel, not directly into the main combustion chamber, but into a small pre-chamber, connected to the main chamber through a small hole. IDI engines are quieter because (a) ignition occurs sooner inside the hot pre-chamber (all or part of it may be of ceramic, purposely allowed to remain very hot) and (b) the rate of pressure rise in the main chamber is slowed by having to flow through the connecting orifice, which acts as a "shock absorber." Unfortunately, the extra surface area of the pre-chamber and the high heat transfer rate associated with the rapid flow through the orifice conspire to decrease efficiency. Where DI Diesels of truck size may use about 0.38 pound of fuel per horsepower, per hour, IDI Diesels use more, up in the mid-0.40s. Gasoline engines, as a comparison, need more like 0.5 lb/hp-hr of fuel. Gasoline engines are less efficient because detonation limits their compression ratio to the range of 8-11:1, significantly lower than the 17:1 of an efficient Diesel. Considerable research is being devoted to finding ways to limit the amount of fuel injected before actual ignition. This will not only quiet the noise of efficient DI engines, but can also cut nitrogen oxide emissions. Ideally, a Diesel engine burns its fuel in the presence of excess air, which is one reason for its high efficiency. But excess air exists only on average, not everywhere in the fuel spray regions. As the cloud of injected, evaporating fuel lights up, the initial flame seeks out the parts of the cloud where the mixture happens to be chemically-correct, racing along what is called the "stoichiometric contour"—the region in which flame speed is fastest. This is a prime zone for the generation of nitrogen oxides, for (a) formation of such oxides accelerates with temperature and (b) stoichiometric, or chemically-correct combustion generates maximum flame temperature. Another concern of Diesel combustion engineers is soot formation, which is responsible for, in the words of the song, "exhaust… blowin' black as coal." In the


rich regions of the fuel spray, heat does break down the fuel, but combustion is incomplete because there is not sufficient oxygen promptly available. The result is free carbon, looking in vain for oxygen partners. Some of this carbon does find oxygen later in combustion, but as carbon is sticky stuff, much of it finds other carbon instead, clumping together to form soot particles. A characteristic of Diesel combustion is infrared radiation, which plays a part in heating and igniting later-burning parts of the fuel. This infrared is emitted by the glowing carbon, some of which is later emitted as particulates. Anyone who has seen the exhaust flame of piston aircraft engines (gasoline fuel) during a night takeoff has seen the long, red plume of glowing carbon particles, resulting from the very hot, but chemically uncombined carbon from rich combustion. These engines are enriched for take-off because the extra fuel lowers combustion temperature, making detonation less likely. This, in turn, allows supercharger boost to be turned up to make the extra horsepower needed for take-off. Once safely airborne, power can be reduced, and the mixture is leaned out. The red exhaust flame grows shorter and less bright, dwindling to a six-inch blue cone, surrounded by a whitish glow. This blue color results from radiation from hydrogen combustion and the white from the recombination of dissociated (broken apart by heat) molecules. It can also be seen in the flames of acetylene torches and oxy-hydrogen rocket engines (like those of the Space Shuttle). All this hot drama occurs inside the combustion chambers of your truck engine every time you run it, and what is more, your turbo eats this stuff for breakfast. At one time, the Diesel engine was the white hope of the auto industry because of its low unburned hydrocarbon emissions. This is part of the reason so many Diesel cars were built in the early 1980s. Later, it was discovered that the particulates in Diesel exhaust contain significant amounts of certain carcinogens, mostly based on benzene


rings of six carbons with added sidechains. Currently, Diesel development centers around achieving reductions in particulates (and as noted above, nitrogen oxides). One partial remedy has been improved spray formation, as a means of producing smaller droplets, each surrounded by air adequate for combustion. Injection rate can be varied—slow initially to shorten the ignition delay, then at a faster rate later. Another approach to quick, uniform light-off is to supply part of the fuel, not as a spray, but as vapor added during intake. Simpler fuels, such as alcohols, break up and burn more quickly than traditional Diesel fuel, but bring with them ignition problems. A small research outfit called Sonex has a process of accelerating combustion by seeding each fresh charge with active radicals (reaction-accelerating chemical fragments) created during the previous cycle. This, it is claimed, reduces soot considerably and can allow ignition of alcohol fuels. The cumulative result of research like this is cleaner-burning, more efficient engines. Diesel combustion has a bright future because prime movers based on it remain the most efficient heat engines we have. This makes all the research worth doing. Turbo Diesel Register Issue 24
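To put the fuel-consumption figures quoted above (roughly 0.38 lb/hp-hr for a DI truck Diesel, mid-0.40s for IDI, about 0.5 for gasoline) on a common footing, brake specific fuel consumption converts directly to overall thermal efficiency. The short sketch below is an editorial addition; the heating values in it are typical handbook numbers, not figures from the column.

```python
# Convert brake specific fuel consumption (lb per hp-hr) into overall thermal
# efficiency. One hp-hr is about 2544 BTU; the lower heating values are
# typical handbook figures (assumed, approximate), not numbers from the column.

HP_HR_BTU = 2544.0
LHV_BTU_PER_LB = {"diesel": 18400.0, "gasoline": 18500.0}

def thermal_efficiency(bsfc_lb_per_hp_hr, fuel):
    """Fraction of the fuel's heating value that reaches the crankshaft."""
    return HP_HR_BTU / (bsfc_lb_per_hp_hr * LHV_BTU_PER_LB[fuel])

for name, bsfc, fuel in [("DI truck Diesel", 0.38, "diesel"),
                         ("IDI Diesel", 0.45, "diesel"),
                         ("Gasoline engine", 0.50, "gasoline")]:
    print(f"{name:16s} {bsfc:.2f} lb/hp-hr  ->  "
          f"~{thermal_efficiency(bsfc, fuel) * 100:.0f}% efficient")
```

The ordering matches the column: roughly a third of the fuel's heat reaches the crankshaft of a good DI Diesel, noticeably less for an IDI engine, and less still for gasoline.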



TDR – Basics One of the stand-out features of this magazine is the inventiveness and practical skill of its readers. The book is full of letters from readers who have modified their trucks or have figured out engine or chassis problems on their own. This pleases me because it flies in the face of one of the major trends of our times; the trend toward helplessness in the face of technology. As my wife puts it, there are more and more men who have only two basic skills: (a) Whatever it is that they do at work and; (b) Watching sports on TV. Our great-grandparents could deliver a baby, build a barn, graft plum trees, or repair a side-delivery rake—but each succeeding generation has become more specialized. General skills, common sense, and the willingness to try unfamiliar tasks have been lost along the way. In their place has come a helpless dependence on “experts.” We have come to expect all useful devices to bear the message, “No user-serviceable parts inside—return to manufacturer for service.” A couple of years ago, I had dinner with three profs from a well-known engineering school. All of them agreed that every year, the freshman students arrive with higher math and keyboard skills—and less and less understanding of anything physical and practical. Instead of growing up with clocks, radios, lawnmower engines, and tools, these people have spent their childhoods studying. As adults, they will have to buy a new refrigerator when the condenser gets blocked with lint, have to call an electrician when a breaker trips, and have to walk when a tire becomes flat on the bottom. None of this is necessary. What is the remedy? Play with things! Get in and mess with stuff—your truck, for example. Otherwise, the steady advance of gadgety technology walls us in with unknowable black boxes—of which computers are the worst. After a brief honeymoon of normal operation, my first computer (1986) got flaky and quit working, so I called the dealer, 90

miles away. I had a story lost somewhere in that computer and I needed to get it to my magazine right away. Another 90mile trip didn’t appeal to me. “You got a screwdriver?” the voice in the phone asked me. When I told him yes, he continued, “Break the seal where it says ‘breaking this seal voids warranty.’ Take out the screws, and lift off the top. Inside, you’ll see some flat cables with connectors at the ends. Those connectors are the most unreliable parts in your computer. Unplug each one and carefully plug it back in again. Chances are, this will fix your machine.” It did, proving that simple skills and a willingness to tackle problems remain valuable even in the computer era. Sixty-odd years ago, my mother was driving her father’s huge touring-car when it quit in traffic. Because it just quit rather than stuttering, she naturally suspected an electrical problem, because engines with a fuel problem misfire and sputter as they quit. Opening the hood, she looked around and found an unconnected wire. Looking further, she found a place where it seemed to belong. She attached it. The car ran. You don’t have to be a motorhead to have useful common sense! My first wife watched me and my racetrack friends struggle with a modified Honda twin with trick carburetors on it. No matter what we tried, one cylinder did not run, and we suspected those carbs. She could see us losing our cool, and knew no one was leaving the track until the problem was out of the way. Dinner looked pretty unlikely. So she asked, “Those carburetor thingies—do they each do exactly the same job?” We looked up from our frustration to answer yes. “Well then” she continued, “couldn’t you take off the right-hand carburetor and put it on the left, and vice-versa? That way, if one of the carburetors is at fault, the problem will move to the other cylinder. And then you’ll know.”


We looked at each other, helpless in the face of pure logic at work. Common sense plus a willingness to have a go are what you need to begin. In the process of learning mechanical skills, we all round off a bolt or two, twist off some taps, and bang our knuckles —the costs are small. We emerge with a kind of confidence that you can’t get any other way. Machines are not mysteries. They make sense. As one old-timer put it, “The human mind has created these devices, so therefore another human mind can comprehend them.” It is only laziness that allows us to believe that understanding is beyond us. My uncle once went to work for an outfit that wanted to make educational films. They needed a screening room but had dithered for weeks after being quoted big money for the necessary remodeling. His answer was to ask, “where’s the nearest lumber yard?” A few days later they had built what they needed—for about 1/10 of that quote—not because my uncle was a good carpenter or electrician (he was a hacker, in fact), but because he didn’t like to be stopped by what are really non-problems. Wanting a result is the best reason to take up tools and learn how to use them. Only you know exactly what it is that you want. If you do a job yourself—even if haltingly and (at first) without much craftsmanship—you are more likely to get what you want than if you have to explain it to others who then do their version of it for you. Anyone who has dealt with home-improvement contractors knows the truth of this. Many problems in life cannot be solved with a cellphone and a deck of credit cards. There is an element of jumping into cold water here; you want to swim, but the only way is to actually take the plunge. Tackling unfamiliar mechanical work can be a bit of a plunge too. When I was first messing with engines, I avoided ignition work because I didn’t understand it. Although I could tear down and rebuild


engines, I had no idea of how to set ignition timing. Then one day a friend arrived with his newly-completed Triumph engine, expecting me to install and time its rebuilt BTH magneto. I was terrified. It was time for a showdown with my own ignorance. I got out a book and forced myself to read through the timing procedure. Of course, it was stone-simple. It told me to get a wheel spoke and file notches in it every 1/16 of an inch. Poke it in through a spark plug hole until it rests vertically on the piston crown. Starting with the piston at top center, rotate the crank backward until the piston has descended five notches. With the crank in this position, rotate the magneto housing until the (correctlyadjusted) points just break. Tighten the mag hold-down bolts. No dial gages, no ohm-meters—just a simple procedure that worked. I had overcome my ignitionphobia, and we had a running engine. Not knowing how is never a good excuse for inaction, because there is always a how-to book, a knowledgeable friend, or a manual available to help you through the hard bits. There is a pleasing sense of accomplishment at getting through a job like this, and the best part of it is knowing that you overcame yourself in the process. The job looked scary, but when you got into it, and understood its various tasks, you could do each in turn and move on to success. Sort of like life itself. Turbo Diesel Register Issue 25
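For readers who wonder what those filed notches amount to in crank degrees, piston drop relates to crank angle through ordinary slider-crank geometry. The sketch below is an editorial addition, not part of the magneto procedure; the stroke and rod length in it are assumed, Triumph-twin-sized round numbers, so the answer is only illustrative.

```python
import math

# Convert a piston drop below TDC (measured with the notched-spoke trick)
# into crank degrees before TDC, using ordinary slider-crank geometry.
# The stroke and rod length are assumed round numbers, not factory figures.

STROKE_IN = 3.23     # assumed stroke, inches
ROD_LENGTH_IN = 6.5  # assumed rod center-to-center length, inches

def piston_drop(theta_deg):
    """Piston distance below TDC (inches) at crank angle theta from TDC."""
    r, l = STROKE_IN / 2.0, ROD_LENGTH_IN
    th = math.radians(theta_deg)
    return r * (1.0 - math.cos(th)) + l - math.sqrt(l * l - (r * math.sin(th)) ** 2)

def crank_angle_for_drop(drop_in):
    """Scan for the crank angle (degrees BTDC) that gives the requested drop."""
    theta = 0.0
    while piston_drop(theta) < drop_in and theta < 90.0:
        theta += 0.1
    return theta

drop = 5.0 / 16.0  # five notches filed 1/16 inch apart
print(f"{drop:.4f} in of piston drop is roughly {crank_angle_for_drop(drop):.0f} "
      f"degrees before TDC for the assumed geometry")
```

For this assumed geometry the five notches land in the low thirties of degrees before TDC; a different stroke or rod length moves that figure, which is exactly why the book specifies the drop rather than the angle.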



The Factory Knows Best – A Stock Vehicle? Anyone who works with machinery constantly has ideas as to how that machinery can be improved. A design that has or develops problems, or an arrangement that is hard to work with (having to take off the frame to empty the ashtrays) always makes us think of possible improvements. For the person who is handy in the machine shop, there is a great temptation to make the improvements and see how they work. Often they do, which is a great source of personal satisfaction. So why not get to work and just do it? A lot of us do just that—replacing rustedin-place original fasteners with stainless, for instance, so we’ll never again have to find a way to drill out broken studs by fitting seven inches of drill motor into five inches of space. Back in about 1964, a major carmaker did a study on what it would take to make its vehicles last twenty years with only minor service—not the usual flood of failed belts, hoses, U-joints, waterpump, exhaust system, and wheel bearings that is normally unleashed at age five. They concluded that small increases in bearing sizes, higher specs in materials, and judicious use of stainless would do the job, and at modest cost. That cost increase, however, would put their products at such a market disadvantage that it would be foolish to put the twentyyear scheme into practice. From recent experience with a 45year-old aircraft engine, I know that the twenty-year-scheme could work —if anyone were willing to pay the increased initial cost. The engine I am concerned with came from a Truman-era military transport plane, and has been outdoors for at least twenty years, lying on the ground. Yet when I crack the installation torque on its stainless or plated fasteners, most of them spin free in my fingers—no twist-offs, no rounded hexes. The exhaust system—made from temperature- and time-resistant inconel —is ready for start-up any time. The

drawback is that this engine was made for aviation, to the highest standards of the engineering of its time. Price was not a major consideration. Now for the other side of the story. Back in the 1960s, I was trying to race a Japanese motorcycle. It was early days for the Japanese industry, and this machine had lots of problems. I was determined to fix all of them. I covered the machine with my own innovations, and I was very proud of my work. Then I crashed. Instead of a simple trip to the dealer for a few dollars' worth of crash parts, getting running again meant duplication of long hours in the machine shop, making all my neat stuff a second time. I started thinking about this problem of clever prototypes, versus maybe less clever, but low-priced and easily available stock parts. I like to read history, so I knew that in 1945, Focke-Wulf's top aircraft designer Kurt Tank had described the ease with which he was able to pull away from Allied fighters in his oh-so-superior long-nosed Ta-152. What did this mean with respect to the war? It meant nothing, because while Germany could produce these highly superior machines in prototype quantity only, one single factory in the US was rolling out a completed, four-engined B-24 every fifty-five minutes. Yes, Tank's engineering was superior, but could it defeat thousands of P-47 and P-51 aircraft that were only slightly inferior? Mass production has its drawbacks, but its great strength is that it can deliver into our hands, at low cost, enough tools (airplanes, trucks, ships, etc.) to get the job done. When an American wartime pilot would complain to his crew chief that his engine was running rough, they'd just hang another mass-produced, available-in-quantity engine on the front of it and the problem would be gone. In another often-heard story, an American intelligence man was debriefing a


German artillery officer. The German had been captured after his battery had knocked out more than a dozen tall, undergunned US M4 tanks. The officer was holding forth on the poor quality and training of US forces. “Oh yeah?” said the American, “Then how come you’re in this cage here and I’m the guy asking the questions?” “Because”, the German replied, “We ran out of shells for our 88 mm gun before you ran out of tanks.” Usable mass-produced goods—even if far from perfect—get the job done because they exist. Better ideas are cheap, but production is the key. I thought about the problem of running an airforce, or a trucking company, or a railroad. Availability of parts and service is crucial to all these undertakings. My modified motorcycle was a success in terms of ideas, but in hardware, it was a failure because I could not produce all the parts I needed as fast as I needed them. I was a boy of ten when my family drove up the Alaska Highway in a 1951 Kaiser. Kaiser was a mass-produced automobile (Henry J. Kaiser had automated the production of Liberty ships during the war) but it was very much an oddball. The hammering of hundreds of miles of dirt road driving resulted in transmission tailshaft leakage that threatened to leave us stranded. A Dawson Creek mechanic told us that, although he couldn’t be sure without prying it out and thereby destroying it, he thought the tailshaft seal was a Ford part. This was tempting because we knew there were no Kaiser parts anywhere north of Vancouver. Fortunately, it was a Ford part, and we were able to continue our journey, but this taught me the value of standardized parts, available everywhere. Think your Lambo or Aston-Martin is a fine car? Think again when you’re stopped in Tok Junction, with a broken halfshaft. Likewise, if your mass-produced


vehicle is covered with clever, one-off innovations, who’s going to service it in the middle of the night, in pouring rain, when something quits? You are. Another aviation story, this one about stock procedures. A certain shop was doing a nice little business regrinding six-throw aircraft engine crankshafts for undersized bearings. The service book, doubtless written in 1937, insisted that the grinding wheel be dressed freshly for every crankshaft. Anyone who uses grinding wheels hates to dress them —a new wheel 24” in diameter is $300 or more, and repeated dressing just turns the wheel to powder more quickly. Therefore this operator began to dress his wheel less often, after every five shafts. It wasn’t long before in-flight failures of his reground crankshafts began to crop up, and soon there was an inquest. The problem? By not dressing his grinding wheel as often as specified in the factory manual, the operator was continuing to grind bearing journals with abrasive grains that had lost their sharp edges. Dressing the wheel knocks out these dulled grains and exposes a layer of fresh, sharp ones. Not dressing the wheel meant more heat, higher feed pressure, and slower cutting. The increased heat in turn caused micro-scale surface cracks in the crank journals—called heat-checking —that under flight stress enlarged into outright cracks. This was a case of an operator second-guessing the reasons for factory-recommended procedures. How did the engine maker know that the wheel should be dressed after every shaft? Because they had already made this same mistake, painfully figured out the problem, and found a solution; dress the wheel for every shaft. This underlines another principle; the factory knows its own products best. One of my favorites has to do with vehicle emissions and fuel consumption.

Any modification that results in a 1/10 of a mpg improvement in standard testing is worth millions to auto makers, in their annual struggle to make their models hit the Federally required numbers. Yet magazines remain full of ads for mysterious devices that claim to reduce fuel consumption by as much as 25%. Wouldn't it be odd if these devices or substances really worked, and not one of the thousands of people in the auto industry knew about them? Another one of my favorites is the liquid, claimed to reduce engine friction by huge amounts. In the usual demo, the salesman attaches a tachometer to your engine as it sits idling. You note and record the idle rpm. Now he triumphantly pours his additive into your oil filler, and points at the tach. Miraculously, your idle rpm has risen by a hundred or more, "proving" that friction has been cut by the magic liquid. And in fact friction has been cut. The magic liquid is kerosene or deodorized lamp oil, with perhaps a few cc per quart of some standard-package oil additive. Your idle rpm has risen, not because of magic, but because the kerosene has reduced the viscosity of your oil—especially so if the engine was cold to begin with. Then why not reduce oil viscosity all the time and reap the savings? Because the oil viscosity specified by your engine manufacturer is the one that best satisfies all requirements for performance and engine component life, in prolonged and expensive testing that goes on all the time. Is kerosene a better lube oil? This next example is a shocker: the no-oil test. The representative of the additive company pours his product into a late-model vehicle, its engine running. Then he pulls the drain plug and dramatically drains all the oil into a pan, which he shows to the incredulous crowd. Then he hops into the now oilless machine and drives it around the lot. The admiring multitude buys the product.


In fact, the product could be almost anything, because modern engine oils are so loaded with anti-wear additives that this demo can be successfully performed on almost any vehicle (neither I nor any other sensible person would recommend you try this). Just the oil film remaining on parts, aided by the normal oil additive package, will allow the engine to run at low rpm for a few minutes without damage. The factory knows a lot about its products as a result of constant testing. Do you think (a) that some idea-men with a chemistry set can do better? And (b) that they would prefer to sell their idea to you for $9.95 rather than to the oil companies or vehicle manufacturers for millions? The simple statement “stock is best” is not true in detail, because new ideas are incorporated into vehicles every year, from both within and without the industry. Anything can be improved. But that’s not the point. The point is that stock, for the most part, represents something that is known to work, backed by a lot of experience. Stock also means that service and parts can be found anywhere. Those are valuable considerations. Turbo Diesel Register Issue 26


Diesel Powered Future? Where They Left Off Before WW I, Diesel engine development was of great concern to European governments, as these engines really made the submarine possible. In that war and the one that followed twentyone years later, submarines nearly succeeded in isolating England and winning the war for Germany. Later, as Diesel applications spread widely, these engines were even considered for aircraft, despite their weight. The reason is curious. Until Thomas Midgley of Delco discovered the antiknock properties of tetraethyl lead (TEL), and until Sam Heron invented the internally cooled exhaust valve, the gasoline-burning, spark-ignited engine was not expected to reach really high powers. The phenomenon of combustion knock, or detonation, made it impossible to either raise compression ratio very high, or to use a useful amount of supercharger boost. In engines whose rpm was already limited by bearing technology, that seemed to put a cap on spark-ignition progress. The trip I made, towing a heavy horse trailer behind a spark-ignition-enginepowered pickup, gave me a direct feeling for this kind of technological deadlock. Once the engine’s throttle was open with rpm in the green zone, that was all she wrote. Torque was modest because of the engine’s low compression ratio, and had there been a turbocharger, it could only have boosted torque until the engine began to knock on today’s 1936level gasoline. So I had to shift down to keep speed up even on moderate Interstate highway grades. Faced with this technological deadlock, progressive engine designers in the early 1930s had good reason to believe that the future of high powers might lie with the Diesel, whose combustion process is immune to detonation. Blow as hard as you like into a Diesel’s intake - if the injectors can match fuel to air, the engine can burn it and make power.

The German Junkers firm went ahead and designed its famous Jumo twostroke 205 opposed-piston, twincrankshaft Diesel aircraft engine. It made 700 hp at 2600 rpm from a weight of something over 1500 lb. Guiberson in the US flight-tested an air-cooled radial Diesel of 1020 cubic inches, making 310 hp from a weight of 653 lb. In England, engine pioneer Harry Ricardo prepared test engines to evaluate supercharged Diesels for aircraft applications. Meanwhile, Midgley at Delco discovered TEL, which greatly raised the knockresistance of gasoline fuels; and Sam Heron’s cooled valve greatly reduced the temperature of the hot exhaust valve—a prime cause of detonation. These discoveries, plus much collateral development, made the gasoline engine again the leading powerplant where great power from minimum weight was the requirement. With the exception of the widely used Junkers Diesel, all combat aircraft in WW II were gasolinepowered. Strangely, while the excellent Soviet T-34 medium tank was Dieselpowered in that war, all German tanks still burned gasoline. The aircraft Diesel seemed to have a niche just after the war, when payload of piston-engined transports was limited by the fuel they had to carry on transAtlantic flights. Neither turboprop nor jet engines were yet efficient enough to replace them. Napier therefore designed its dinosaur-like Nomad two-stroke Diesel, which recovered extra power from its exhaust via turbo-compounding. As it turned out, piston engines were good enough to fill the gap while turbine development pushed efficiency of that engine to ocean-spanning levels. The complex and wonderful Nomad was never produced. In the postwar era, fuel refiners were left with huge excess capacity for making aviation gasoline components. The auto industry over time took up


some of the slack by constantly raising the compression ratios of car engines, as a means of increasing torque. This required fuel with higher knock resistance. One beneficiary of this was the light truck business, in which inexpensive big-block gasoline engines could haul a lot of freight thanks to high compression and gasoline good enough to keep knock at bay. The era of regulated exhaust emissions changed all that, making the 1970s a decade of big changes. High compression ratio leads to high emissions of nitrogen oxides, which are created by high flame temperature. As part of emissions abatement, compression ratios of sparkignition engines were forced down. Regular-gas autos of 1968-70 routinely ran on 10:1 compression, and sportier models went all the way to 12:1. Very quickly, this dropped back to 8:1. Torque dropped with it. This was the end of the spark-ignition engine as a possible competitor for medium- to light-duty Diesel power. Meanwhile, the Interstate Highway System encouraged higher speeds for commercial trucking, bringing a demand for higher horsepower-per-pound from truck Diesels. The succession of oil shocks, bringing higher fuel prices, underlined further the need for lighter, more efficient and more powerful truck engines. WW II aircraft engines carried the sparkignition concept to a new high, along the way developing large turbochargers that allowed engines to maintain ground-level power beyond 30,000 feet. The gas turbine (jet engine) that rapidly replaced these piston engines after 1945 was conceptually just the compressor and turbine of this turbocharger, with burner cans taking the place of the piston engine as a gas generator. In the process, hightemperature metallurgy necessary for successful turbine operation was developed and commercialized.


This, in turn, made the highly turbosupercharged Diesel truck engine practical. The harder you blow into a Diesel, the more fuel the injection system must provide to burn with the air provided. The only limit to this boosting process is bearing durability and engine structure. Beginning with atmospheric engines at around 160 hp, truck Diesel power has pushed all the way to 600 hp with turbocharging. Today’s turbocharged Diesel engines have taken up where the aircraft Diesel engine left off in the 1930s, after the interlude of the high-power spark-ignition engine. Diesel engines, particularly those designed for marine patrol-boat service, are now generating horsepower per cubic inch similar to that made by gasoline-burning wartime piston aircraft engines, and are only moderately heavier. This is a grand achievement. With the single exception of auto racing, all high-power piston engine development now taking place employs the Diesel cycle. For years we’ve been told that turbines would soon take over these heavy-duty applications—marine and truck. Why hasn’t this happened? The smaller turbines are made, the greater the problem they have with internal leakage and low efficiency. As light aircraft owners are discovering, small turbines are extremely expensive because of the superalloys and other high technology in them. High durability ceramics that were expected to make cheaper turbines possible have been slow in development. Diesel engines therefore remain a huge bargain on a dollars-per-horsepower basis and their durability is excellent. Even the design of future light aircraft powerplants seems to be turning in the Diesel direction—mainly because of the decline in the knock resistance of available aviation fuel. The highly knock-resistant wartime grades 115/145 (purple) and 100/130 (green) are gone now, replaced by aviation lo-lead

(blue). Supercharged spark-ignition piston engines of the previous era must be derated to run on this less knock-resistant fuel, and it likewise prevents future designs from being highly supercharged or turbocharged. The reworked WW II fighter aircraft that run every year in the Reno air races burn special fuels containing triptane, custom-compounded for them.

On the other hand, every airport of any size stocks turbine fuel in the form of Jet-A. Diesel engines burn this fuel readily, motivating designers of future light aircraft engines to consider Diesel. US military forces appear determined that their operations will, in the future, use only a single fuel for aircraft, tanks, jeeps, trucks, and even portable generators. This fuel will not be gasoline!

Therefore, as once appeared to be the case in the 1930s, the future of heavy-duty power is probably with lightweight Diesel engines. I may be nostalgic for the sweet smell of high-alkylate aviation gasoline, but I can see the technological writing on the wall. In the future, all high-torque, high-durability applications will be fought over only by turbines and Diesels. It's still no contest on the world's highways. What about auto engines? For a time in the 1980s, it appeared that the EPA was turning toward the Diesel as a low-emissions powerplant for cars. Its appeal is that because it burns its fuel in the presence of excess air, the Diesel has low levels of CO and unburned hydrocarbons in its exhaust. It is also, because of its high compression ratio, highly fuel-efficient. Then the carcinogenicity of the benzene-like ring compounds in Diesel exhaust particulates was discovered and documented, and EPA enthusiasm abated. It is the Diesel exhaust particulate problem that is driving all the current development of common-rail injection, particulate filters, and improved combustion. It is hoped that a more thorough understanding of the details of heavy fuel combustion may lead to ways to prevent particulate formation. Likewise, research continues on means of aftertreatment, such as plasma-assisted burning of particulates.

How about other areas? There are still a significant number of steam-turbine-powered ships plying the world's seas, but the highest efficiency is delivered by large two-stroke turbocharged Diesels, coupled directly to propellers without expensive reduction gearboxes, and turning 60-90 rpm. These huge marine Diesels (typical bore and stroke might be 36 x 60 inches) are the most efficient prime movers now in use, with overall thermal efficiencies above 50%. Diesel power eliminates expensive and unreliable equipment necessary for the steam cycle, such as large condensers and fresh-water sources.

Electrical power stations were, for many years, steam turbine powered, with Diesel power reserved for smaller installations or for topping or back-up units. Lately, however, gas turbines have been eating into this business. When their reliability and durability were still in question, their use came as topping units only, with base-load still carried by steam. But today, with turbines much more highly developed, even base-load electrical generation is being carried out by gas turbine power. From this view, the spark-ignition engine is a strange holdout in the light-duty power field, kept alive in the auto market by its light weight, low first cost, and momentary advantage in the emissions arena. Everywhere else on wheels, Dr. Diesel's efficient engine is number one. Turbo Diesel Register Issue 27


More About Oil The question of aftermarket oil additives keeps coming up (Steed, Prolong, STP, Microlon, world without end), and it always will. When a person has laid out big money for a shiny, wonderful new Turbo Diesel, that person intends to do more than just drive around in it. That person wants to have a relationship with that truck. In the old days, the relationship was easy. You changed your own oil every thousand miles, you ground your own valves, and you rotated your own tires. In fact, there was more relationship between man and vehicle than most people wanted. That’s why today’s cars and trucks have become such turnkey operations, with extended oil drain intervals and no tune-ups. Just get in and drive. One way to have a relationship with the new vehicle is to buy and mount a cast aluminum “Lone Wolf – No Club” license plate frame and some white rubber mudflaps with jeweled reflectors. Oh, and blue dots for your taillight lenses, to give them that distinctive purple look at night. Okay, all that went out with the end of the 1950s. This is the 21st century here, a time when people are concerned over things like dietary fat and bad cholesterol. Because we are what we eat, and we want to be good, we have to eat carefully. This applies by analogy to new trucks that have cost us $32,000. Just as we are eating vitamin-C, DHEA, and no-flavor lean beef, so we are also tempted to pour expensive additives into the lubricating oil of our trucks, in hopes that performance will improve and that useful life will be extended. I read a wonderful line somewhere, which went like this; “Vitamins were discovered in 1911. Before that time, people just ate food and died like flies.” Something like this idea seems to drive people today to use additives—ordinary pump Diesel fuel and manufacturerrecommended oils can’t be enough. Aftermarket additives are, therefore, the “vitamins” we are tempted to give

our vehicles. Never mind the fact that some highly-advertised "super" oils cost more per quart than most of us pay for a case.

The ads are wonderfully persuasive. One I saw recently features regular guys strolling in a junkyard. They approach a rusty clunker, start the engine, and listen to its assortment of clatters—collapsed tappets, rod knocks, loose wristpins. "Sounds pretty bad, Bob," remarks one of the strollers. "That's right, Bill," returns another. "We'll try a bottle of Noo-Life," Bill confides to the viewer. They pour it into the oil filler and instantly the clattering goes away (or the technician at the audio mixer cuts the treble way down—it's hard to tell exactly which it is). "Sounds pretty good now, Bill," says the pourer, turning to the viewer and holding up the now-empty Noo-Life bottle for our inspection of the label graphics. "Why don't you try a bottle today?"

In our minds, we know how it's done, but in our soft hearts, we're vulnerable, tempted to try a bottle. Yes, we know that unscrupulous used car dealers have, in the unregulated past, used sawdust to quiet tired-out transmissions, and we know that thick oil or a dose of motor honey (viscosity-index improver additive) will calm the high-frequency rattling of a worn-out engine. But, having laid out those thirty-two thousand ones end-to-end for that beautiful new truck (that's more than three miles of money), it just doesn't make sense to pass up products that might work, right? After all, they wouldn't let 'em say it on TV if it didn't work as advertised, would they? Would they? How and why does oil work as a lubricant, anyway? I've touched on this topic before in these pages, but a deeper look always gives some fresh insight. As noted in a previous article, there are three regimes of lubrication: (1) Full-film, or hydrodynamic lubrication – most of the parts in your engine are lubricated in this regime most of the time. Viscosity and the rapid motion of the parts drag oil between sliding parts, forming a full oil film that supports the load. There is no contact at all between the moving parts, as revealed by electrical conduction experiments.

(2) Contact, or boundary lubrication – in the absence of an oil film, there is either actual metal-to-metal contact between parts, or the parts are in some degree protected by chemical films of oil additive used for the purpose. Such films not only protect parts from damage, they reduce contact friction to 1/10 or less of what it would be in actual metal-to-metal friction.


(3) Mixed lubrication – some of the load is supported by an oil film, some by contact. This kind of friction occurs during start-up, after oil has largely drained from engine parts. It also occurs wherever parts motion is too slow to generate a full lubricant film—at low idle speed between cam lobes and tappets, or near TDC between piston rings and cylinder walls, when the piston is moving very slowly and combustion pressure is high. In what follows, I want to describe in more detail how full-film lubrication works, and what affects it. In a later issue I’ll talk about multi-grade oils, oil additives, and their relation to snake oils. THE PRESSURE IN BEARINGS: Our intuition tells us that crank and rod journal bearings must work because the oil pump forces oil into the bearings, and that oil pressure then supports the load. Simple arithmetic tells us this is false. A four-inch piston with 1000 psi of combustion pressure over it pushes down on the connecting rod and rod journal with a force of roughly 12,000 pounds. If it was the 60 psi from the oil pump that supports this load, the rod journal would need 12,000 ÷ 60 = 200 square inches of bearing area. Since the actual projected area of the rod bearing is more like two or three square inches, we can see this idea is way off. In fact, this bit of figuring reveals what actual bearing pressures are like—namely the 12,000 pounds divided by, say, three


square inches of actual bearing area, giving us 4000 psi as the peak pressure exerted on the oil film in actual con-rod bearings. Where does all this pressure come from, if it doesn’t come from the oil pump? VISCOSITY: It comes from the rotation of the parts, acting with viscosity to drag oil from regions of low pressure, into the high-pressure region under the load. Viscosity is the internal friction of a fluid, such as oil, air, water, etc. If one solid surface slides over another, separated by a fluid, the layers of the fluid must slide past each other. The resistance to this sliding is called viscosity. It is easy to understand why this resistance occurs. As the molecules in one layer collide with those in the next, kinetic energy is exchanged. Because the collisions of these molecules are anything but orderly, this kinetic energy exchange produces random molecular motion, which is heat. Thus, the process of sliding one surface over another with a fluid between them converts orderly motion into heat. This produces a viscous drag force, tending to oppose the motion. We know that the fluids known as oils have more viscosity than, say, air or water. Why should this be? Oils consist of molecules that are long chains, while the molecules of low-viscosity fluids like water are small and resemble balls more than they do chains. As one layer of fluid slides over another, long molecules transfer kinetic energy to more potential partner molecules because they are so long, surrounded by many other molecules. This produces a higher fluid friction, or viscosity. The small, ball-like molecules of water, because each of them contacts fewer other molecules, transfer kinetic energy less widely, and so display lower fluid friction, or viscosity. THE BOUNDARY LAYER: Near a solid surface, the situation is a little different. Molecules—even those having the form of long chains—are very small. At any temperature above absolute zero, they are in constant motion. The molecules of a fluid are especially so, since in order

for the fluid state to exist, the average molecule must have enough energy of motion to overcome any forces tending to bond it permanently to another. Therefore these molecules vibrate, rotate, wiggle, and slither over one another constantly. At any solid surface, these molecules collide with it steadily. Because, on the molecular scale, even the most finely polished surface is rough, these collisions result in rebounds at all angles, favoring no particular direction. For this reason, therefore, the fluid near a solid surface has no net motion along that surface. This relatively immobile layer near a solid surface is called, reasonably, the boundary layer. This means that in the situation discussed above, in which one surface slides over another with a fluid between, the fluid cannot simply slide along the surface. The relative motion has to take place in the fluid, at some distance from the surfaces. This means that there is no escape from the effect of the fluid’s internal friction, or viscosity. No coating we could put on the solid surfaces would be smooth on a molecular scale, and so prevent the formation of a boundary layer there, permitting the fluid to simply slide along the surfaces. The relative motion always takes place in the lubricant itself, so power must be used to overcome the slight viscous drag involved in sliding the layers of lubricant past each other. No snake oil can change this! PUTTING THE OIL UNDER THE LOAD: Now, why does oil remain between the sliding surfaces, rather than being immediately squeezed out by an applied load? Imagine a situation in which a loaded slider moves over another surface, with a viscous fluid—oil— between. The load, by exerting pressure on the film of oil between, tends to squeeze the oil out. If more oil does not somehow enter the space between the surfaces, this loss of oil will soon result in contact and possible surface damage. What can put oil into the space between surfaces? FORMING AN OIL WEDGE: That something is viscosity. As the moving


slider glides along, it assumes a slightly tilted position because the oil at its rear edge has been under pressure the longest, so the most has been squeezed out from that region. The oil film between, therefore, takes the form of a wedge, thicker at the leading edge, thinner at the trailing edge. As the slider advances, oil ahead of it does not immediately flow away because its viscosity prevents it from doing so. Once oil enters the wedge, the only way it can escape is to be squeezed out to the sides, or for the slider to pass completely over it. It's hard to squeeze the oil out because forcing viscous oil out through such a narrow space requires very great pressure. Some does escape, naturally, but it escapes slowly. As the slider advances, a steady state is soon reached, in which the rate at which oil enters the wedge at the front equals the rate of loss through being squeezed out at the sides and at the trailing edge. Oil enters the wedge at essentially zero pressure, but the advance of the slider, coupled with viscosity, carries it into regions of higher pressure—high enough to carry the load on the slider. The slider rises up on the wedge of oil thus produced. The more viscous the oil, the thicker the wedge. Our "slider" could be the skirt of a piston, sliding on a lubricated cylinder wall, or it could be the lobe of a cam, rotating against a tappet. It could even be the rotating journal of a crankshaft, turning inside a sleeve bearing. In all cases, the load is carried by the same naturally-forming oil wedge. Oil enters the wedge at essentially zero pressure, and once the oil is between the surfaces, viscosity makes it easier for it to carry the applied load than for it to be squeezed out. In the process, the frictional drag force in the bearing is typically about one or two thousandths of the applied load. This is why engine friction is as low as it is. The above argument shows that viscosity is necessary if loads are to be carried by sliding parts—it is what keeps the lubricant from escaping out from under the load so fast that the


lubricant wedge collapses. Yet at the same time, the friction loss inherent in lubrication is produced by this same viscosity. It is therefore obvious that a compromise is necessary here. We must have enough viscosity to carry the loads on sliding parts, but much more than that simply increases the friction loss in our machine. THE VISCOSITY COMPROMISE: When engineers specify oils for Diesel truck engines, they are obliged to make this compromise. They cannot allow the moving parts to touch each other, because that causes accelerated wear and parts damage. Therefore they must specify enough viscosity to keep parts separated as much of the time as possible. On the other hand, they also know that the more viscous the oil, the greater the force it takes to make lubricated parts slide over each other. Too little viscosity means wear and damage. Too much viscosity means power loss and increased fuel consumption. For example, a change from a 30 oil to a 50 oil increases friction loss approximately 20%. Because well-designed engines typically lose about 15% of their power to friction, this means 20% of 15%, or a power loss of 3%.
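To restate the bearing-pressure and viscosity arithmetic above as a quick check, here is a short editorial sketch using the same round numbers as the column:

```python
import math

# The bearing-pressure arithmetic from the column, restated: a 4-inch piston
# under 1000 psi, a 60 psi oil pump, and roughly 3 square inches of projected
# rod-bearing area.

BORE_IN = 4.0
COMBUSTION_PSI = 1000.0
PUMP_PSI = 60.0
BEARING_AREA_SQIN = 3.0

piston_force_lb = math.pi / 4.0 * BORE_IN ** 2 * COMBUSTION_PSI
area_needed_sqin = piston_force_lb / PUMP_PSI       # if pump pressure carried the load
film_pressure_psi = piston_force_lb / BEARING_AREA_SQIN

print(f"Force on the rod journal:            ~{piston_force_lb:,.0f} lb")
print(f"Bearing area needed at 60 psi:       ~{area_needed_sqin:.0f} sq in")
print(f"Peak oil-film pressure in ~3 sq in:  ~{film_pressure_psi:,.0f} psi")

# The viscosity compromise in the same spirit: a ~20% rise in friction loss
# on an engine that spends ~15% of its power on friction costs about 3%.
print(f"Output given up to the thicker oil:  ~{0.20 * 0.15 * 100:.0f}%")
```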

Okay, but what about engine life? Can’t I make my engine last longer by using thicker oil? Won’t that keep the parts separated by thicker oil films? Enough is enough. If factory-recommended viscosity keeps the moving parts separated, making good oil films thicker with added viscosity gets us nothing but added fuel consumption. Also, some of us live in Thief River Falls, MN, where engines have to cold-start at minus forty. With that thick stuff in the crankcase, the starter won’t turn the engine. Even if it does, the heavy oil moves so slowly at that temperature that it will take long minutes for flow to reach all the way to the rocker arms and other parts most distant from the oil pump. The factory, based upon its thousands of hours of testing, and on the years of warranty and service experience, arrives at an oil specification for its engines. Can we get better information by watching the motor honey ads on late-night TV?

Shall we play with this compromise ourselves, in hopes of either reducing friction and getting more power, or of extending engine life? It’s obviously true that we can cut the friction of well-lubricated parts significantly by reducing oil viscosity. This is a ploy constantly used in auto racing. Shall we run out and get cases of watery 0W-5 oil and reap the benefits of lower friction? We don’t do this because we know the factory chose a heavier oil to cover the full range of operating conditions that their product will meet in use. Yes, we might be able to get away with a lighter oil if we didn’t work our engines very hard—this is an old trick from the Mobil economy runs of years ago. But we bought these trucks to do heavy work, so when we’re towing that big stock trailer up the Rockies on a hot day, we’re going to need the viscosity of the factory-specified oil. Turbo Diesel Register Issue 28



Apples-to-Apples Baseline and Overkill One of my favorite demonstrations is the “trick spark plug play,” and it goes like this. The average vehicle has half-worn spark plugs in it. This means that heat and spark erosion have rounded off the sharp edges of the center wires and ground electrodes, somewhat raising the voltage required to produce a spark. Therefore, such half-worn plugs are a bit less certain in their performance than new plugs. There is some irregularity at idle, even a few misfires. The top end is not quite as sharp. Now Mr. Wizard removes these spark plugs and screws in his patent alternative. It may have extra ground electrodes, like aircraft plugs do, or it may have special jagged edges on the electrodes. In any case, these plugs look really different. Mr. Wizard runs the engine and lo! the idle irregularity is gone, replaced by silky smoothness. If a dyno comparison is made, the line made with the novelty plugs lies slightly above that of the stockers. Point proven, Mr. Wizard accepts onlooker applause and prepares to take orders for his product. What’s wrong with this picture? What’s wrong with it is that we are comparing the performance of half-worn plugs with that of brand-new ones. We don’t need engineering degrees to know that new plugs perform better than old ones. A correct procedure would be to test first with a brand-new set of stock plugs to establish a true baseline, then test again with Mr. Wizard’s igniters. Chances are there would then be little difference between the two. Plays like this one rely on our enthusiasm for what is new, and our desire to discover something wonderful. That is no way to do engineering. A lot of bad science will get past us if we don’t think about these things. An eager vehicle owner drives into the modifications shop to have expensive procedures performed on his ride. A “baseline” is run on the dyno, then the mods are installed. Naturally, the new equipment requires some adjustment, so everything gets set to exact new values. When the dyno printer begins
to print the results after all this work, the new curves are sensationally better than the “baseline.” What’s wrong with this picture? The baseline is the vehicle as it arrives in the shop, with the stock daily driver’s usual suite of out-of-adjustment problems. But when the new parts are installed, what amounts to a complete tune-up is performed, so that everything is exactly up to scratch. The fact is that if the mods shop operator wanted to, he could leave the customer’s engine stock and still show a performance gain on the dyno, just from the tune-up work alone. Therefore the rule is that if you want to know what you are getting, you have to design your testing to show only that. A true baseline test should reveal the best that your stock setup can do, because the after-modifications test will certainly try to show the best that it can do. Compare apples with apples. In dyno testing, it is normal to “correct” horsepower figures to what is called a standard atmosphere. This is done to remove the effects of changes in the weather. When the barometer goes up or the temperature goes down, air density rises, and so will horsepower—and vice versa. Likewise, if you compare dyno work performed in Shreveport (sea level) with work done in Denver (5,000 foot altitude), you must compensate for the altitude’s effect on air density. A big gain or loss in power will always be evident on dyno printouts, but if you are working with small gains and losses, they can easily be masked or even reversed by a high barometer during Tuesday’s dyno session and a low pressure storm center gliding over during Wednesday’s session. That’s why raw dyno horsepower is corrected to standard atmosphere. Because many people don’t understand the purpose of dyno corrections, they suspect some kind of jiggery-pokery. They may demand “raw figures,” believing these to be true and correct. They are, but because the weather affects them, they can’t be compared with other figures, taken under other atmospheric conditions on other days. The dyno correction takes out the effect of current weather, and tells us what horsepower would have been on a so-called “standard day,” with atmospheric pressure of 29.92 inches and temperature of 65 degrees F. This horsepower correction is, like other human achievements, imperfect, but what it strives to do is to allow us to compare apples with apples.
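For readers who want to see the shape of such a correction, here is a sketch of one traditional simple form: the barometric ratio times the square root of the absolute-temperature ratio, referred to the standard day just described. Commercial dynos apply a published SAE correction with further terms (humidity, mechanical efficiency), so treat this only as an illustration of the idea; the observed figures are invented.

    import math

    # Toy correction to the "standard day" described above: 29.92 in-Hg and
    # 65 deg F. Real dyno software uses a published SAE correction with extra
    # terms; the observed figures below are invented for illustration.

    STD_PRESSURE_INHG = 29.92
    STD_TEMP_F = 65.0

    def corrected_hp(observed_hp, pressure_inhg, temp_f):
        # Dense air (high barometer, low temperature) inflates raw horsepower,
        # so the factor scales it back down -- and vice versa on a thin-air day.
        pressure_ratio = STD_PRESSURE_INHG / pressure_inhg
        temp_ratio = (temp_f + 459.67) / (STD_TEMP_F + 459.67)  # absolute temps
        return observed_hp * pressure_ratio * math.sqrt(temp_ratio)

    # The same engine on two different days, correcting to about the same number:
    print(round(corrected_hp(246.5, 30.30, 50.0), 1))  # cool, high-barometer Tuesday
    print(round(corrected_hp(233.0, 29.60, 85.0), 1))  # hot, stormy Wednesday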


My all-time favorite is the “parts-list builder.” This optimist sends away for all the speed parts catalogs and then designs his ultimate machine on the kitchen table. These pistons with that head, the other pipes, somebody else’s turbo, and a trick injection pump. All these parts are ultimates in their field, right? Put ’em all together and you gotta have a dyno-mite combo, right? Wrong. An engine is a system, not a parts list. Better by far to go to a person who builds modified engines all the time, and who has seen what works and what doesn’t, what parts function well together and which do not. It’s your choice—you can either get the experience by trial and error, or you can buy it from someone who already knows. Because I’ve been through the kitchen-table process myself, I understand the tremendous enthusiasm a person feels when he decides to take control of the variables and build something really radical and wonderful. It’s hard to then be confronted by the result—a vehicle whose drivability is poor and whose performance is spotty because the various components chosen on the kitchen table don’t work well together under the hood. The basic fact is that parts don’t make engines perform. Performance requires both parts and information. Again and again, I have seen enthusiasts break the bank on trick parts, then refuse to spend a bit more money on the dyno time and professional advice necessary to get those parts working together correctly. Is a cake just flour, sugar, milk, eggs, baking powder, and flavoring? No, that’s just a sticky mess. To get a cake from these ingredients you need correct information on how to combine them and bake them.


In motorcycle racing (which is my field) I have often seen people disappointed by engines they have built. I ask the owner, “How did you phase the cams?” and the answer comes back, “I lined up the marks like it says in the service manual.” In other words, same timing as the stock cams. Is that the best setting with this pipe, these carbs, this compression ratio? The only way to find out is to “roll the cams” on the dyno—try variations on cam phase until you find where the good top power is, the best drivability, or the best acceleration. Once you have a feeling for cam phase, you can choose the kind of performance you want. Sometimes lack of knowledge breaks things. A homebuilder “throws in” a new cam and bends all his valves. Why? Valve-to-piston clearance should be measured any time a non-stock cam or timing is used. Lining up the marks isn’t enough. Another hot bike owner bolts on big carbs, but doesn’t know the difference between an idle jet and an F-14. What chance does he have of getting what he paid for? As often as not, the engine doesn’t idle on its gleaming new mixers, it hesitates when the throttle is snapped, and when it finally comes on, the power is like the fabled light switch—either all the way on or nothing. Careful tuning work with a person who understands carburetor systems could transform this engine into a runner that will idle, accelerate, and top-end nicely, but the owner “doesn’t want any help.” When Chuck Yeager flew faster than the speed of sound in October 1947, the flight was promoted as all-American heroism. The public view was that Yeager had switched on all four rocket engines and then masterfully tamed the X-1 by raw courage. The facts were different. In scores of carefully instrumented test flights, Yeager had mapped out the handling characteristics of the plane as it approached the speed of sound. Speed was increased in increments of 0.05 Mach, or about 35 mph, and control responses were evaluated at each step, in complete detail. Yeager was chosen for this work, not because he
was a swashbuckling adventure-lover, but because his flying was accurate, reproducible, and reliable. In the process, Yeager discovered a need for greatly increased elevator control authority, and testing was halted until this was provided. On the other hand, Geoffrey de Havilland was killed when his jet aircraft entered the transonic region, backed by a lesser degree of research. It developed oscillations from which it could not be recovered. It tumbled and broke up in flight. We can call Yeager’s work “exploring the trends of performance.” By establishing the trends of control response at higher and higher speeds, he and the engineers were able to uncover a possible loss-of-control situation and design around it. After all this careful work, Yeager’s supersonic flight was routine and uneventful. When an engine’s performance is being raised in novel ways, it is valuable to similarly explore the trends in its performance. Many a hastily built engine has rewarded its owner with mechanical problems that could have been detected by a step-by-step approach. When you pay an established modifications house for a high-pressure turbo setup, only part of what you are paying for is a turbocharger, controls, and associated plumbing. The rest of what you get is carefully researched freedom from the unexpected. In some cases, reliability of a modification is more a matter of common sense than of dyno development. Wires will chafe through and short out if they are not supported by tie-wraps and protected by rubber grommets where they pass through sharp-edged holes in sheet metal. Steel pressure lines vibrate, fatigue, and break off if they are not supported in such a way as to prevent this. Think about every part of a new installation and try to imagine the failure modes before they happen. One F-14 prototype suffered a total control hydraulics failure that had the experts stumped. How could three completely separate hydraulic systems—the service system plus two back-ups—fail at exactly the same time? It didn’t take too much differential calculus to discover how. Lines for all three systems were routed
and supported identically. At one point, all three lines made a long, unsupported span that could vibrate strongly. With the same vibratory history, all three lines reached the point of fatigue failure at essentially the same time. When the first line failed, the load transfer to the second and third systems failed them too. Common sense would have prevented this. (The Editor, and other columnists, could write many a story on the “unreliability factor” of accessories we’ve installed. My experience fixing these self-imposed accessory problems makes me marvel at the engineering of the vehicle as built by Dodge.) Another effect is caused by our good old American love of gadgets. A man goes into a camera store to get something to take snapshots of his kids, but comes out an hour later with $1,500 worth of professional photo equipment that he never quite learns how to use. The parallel is the operator whose route and load take him over hills that are too steep for fourth gear and too fast for third. What he’d really like is a little more torque—something he can get very easily from a variety of sources. But if the gadget bug bites him, his truck will come out of the shop gleaming with intercoolers and a maze of delightful plumbing and gadgets. Keeping this leading-edge system running right may be more than the owner bargained for. Therefore, another rule of performance modification suggests itself. If the existing machine isn’t strong enough, how much stronger does it need to be to do the job? How often do you encounter those types of operating conditions? Overkill may be fun, but the more “over” it is made, the more problems it is likely to have. Running something radical has its own appeal. Big numbers and smoking tires are thrilling. But the harder you push a given piece of hardware, the closer it must live to failure, and when it fails, the taller the column of smoke that results. Enough is enough. If you really have to have 1,000 horsepower, there are bigger engines. Turbo Diesel Register Issue 29


Reasons The usual reason given for the superior fuel economy of the Diesel engine is its high compression ratio. In general, the compression ratio is also the expansion ratio. By raising the compression ratio, you raise the peak combustion pressure. Because this is also the expansion ratio, it means that the gas is allowed to expand very fully. The two effects work together to result in what is called a high air cycle efficiency. Air cycle efficiency is an abstraction, an ideal picture of reality that is useful for comparing one engine with another, but is far from the whole truth. Two more effects intrude here to modify the air cycle—the increase of the specific heats of gases and a chemical temperature effect called dissociation. In comparison with spark-ignition engines, both effects work to the advantage of the Diesel’s fuel consumption.
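For anyone who wants to see what “air cycle efficiency” looks like in numbers, here is a sketch using the textbook air-standard formulas: a constant-volume cycle for the spark-ignition engine and the classical Diesel cycle with a cutoff ratio. Real engines fall well short of these ideal figures for the reasons the rest of this piece describes, and the compression and cutoff ratios below are simply typical round values, not data for any particular engine.

    # Ideal air-standard cycle efficiencies. Textbook formulas only; real
    # engines fall well short of these numbers.

    GAMMA = 1.4  # ratio of specific heats for air, taken as constant here

    def otto_efficiency(r):
        """Constant-volume (spark-ignition) air cycle with compression ratio r."""
        return 1.0 - r ** (1.0 - GAMMA)

    def diesel_efficiency(r, cutoff=2.0):
        """Classical Diesel air cycle; 'cutoff' is the cylinder volume at the end
        of heat addition divided by the volume at TDC (2.0 is just an example)."""
        return 1.0 - (cutoff ** GAMMA - 1.0) / (r ** (GAMMA - 1.0) * GAMMA * (cutoff - 1.0))

    print(f"Otto cycle,   r =  9:1 -> {otto_efficiency(9.0):.1%}")
    print(f"Diesel cycle, r = 17:1 -> {diesel_efficiency(17.0):.1%}")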

Increase of Specific Heats with Temperature The specific heat of a gas is the measure of how much heat must be supplied to heat a given amount of it by one degree C. It is normally assumed that this specific heat is a constant. The simple model of a gas is of hard, tiny spheres in constant motion, colliding with each other and with the walls of their container. As the temperature of the gas is raised, the average velocity of these spheres is increased. The pressure of the gas is the sum of the impacts of these tiny spheres against the container walls. If this model were true, the specific heats of gases would remain constant as their temperatures rose—each increase in energy supplied would produce a proportional increase in the molecular activity in the gas. In fact, the molecules of the gases in air are not hard little spheres, but consist of two or more atoms, joined to each other by electrical bonds that are elastic. The molecules of the combustion products carbon dioxide and water vapor each consist
of three atoms, further complicating the picture. At moderate temperatures, these gases do behave pretty much like the aforementioned hard little spheres, and their specific heats therefore change little. But at higher temperatures—such as those found in combustion gas in engine cylinders—additional modes of energy storage appear. Molecules can rotate like little dumbbells. The two atoms of oxygen (O2) and nitrogen (N2) molecules can vibrate rapidly towards and away from each other like two masses on a spring. Gas temperature is manifested in the average velocities of the molecules, but these new modes of motion make little contribution to this. Therefore as the gas becomes hotter, more and more heat energy is lost into these rotational and vibratory modes, and is not available to generate pressure. Therefore, more heat is required to obtain a given increase in gas pressure—more than predicted by the simple air cycle model. The result is that the hotter these gases are made, the less efficient they become at converting the heat supplied into pressure. For this reason, engines are more efficient the lower the temperature of their combustion can be made. Diesel engines have no air throttle, so they take in a full charge of air on every intake stroke. At partial load, only a small amount of fuel is injected into this large mass of air, so the resulting temperature rise of the air during combustion is moderate. Even at full load, Diesel engines operate with about 15-20% excess air to prevent smoke formation. Because of this excess air dilution, the conversion of heat into pressure is more efficient, so less fuel is used. In a gasoline engine, throttling has to control both fuel and air, as spark ignition will work only within a narrow range of mixture strengths. Therefore at partial throttle, a gasoline engine admits a small amount of premixed charge. When this charge burns, it is not diluted with a great mass of extra air as in a Diesel, so it burns at a high temperature. Conversion
of heat into pressure is less efficient because of the increase in gas specific heats, so more fuel must be used. This is not a subtle effect. It is, in fact, so large that it is presently driving much of the development of new gasoline engines. These are of the so-called lean-burn type. To allow conventional ignition systems to ignite lean mixtures of around 25-to-one, the charge is supplied to the engine in stratified form—with a rich, ignitable zone around the spark plug, and very little fuel elsewhere. As in the Diesel, the presence of extra air limits the temperature rise of the burning charge, making conversion of heat into pressure more efficient.
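To see the dilution effect in round numbers, here is a minimal energy-balance sketch. It treats specific heat as constant, which is exactly the simplification just criticized, so take the temperature rises as illustrative only; the charge masses and the 50:1 Diesel mixture are invented for the example.

    # Crude energy balance: temperature rise = heat released / (charge mass x cp).
    # Constant specific heat is assumed, so the numbers are only illustrative.

    LHV = 43_000.0   # heating value of the fuel, joules per gram (rounded)
    CP = 1.1         # specific heat of the charge, joules per gram-kelvin (assumed)

    def temp_rise(fuel_g, air_g):
        return (fuel_g * LHV) / ((fuel_g + air_g) * CP)

    # Diesel at part load: a full gulp of air, very little fuel (about 50:1 here).
    print(f"Diesel, part load:             {temp_rise(fuel_g=0.02, air_g=1.0):4.0f} K rise")

    # Throttled gasoline engine: a smaller charge, but mixed near 14.7:1, burns hot.
    print(f"Throttled gasoline, part load: {temp_rise(fuel_g=0.034, air_g=0.5):4.0f} K rise")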

Dissociation In order for fuels to burn, whether they be fireplace logs, natural gas, or Diesel, their molecules must first be knocked apart by heat. Then the fragments, which are highly reactive, can combine with oxygen and release energy in actual burning. In a similar fashion, if the temperature is high enough, even the products of combustion themselves—carbon dioxide and water vapor—can be knocked apart by the violence of thermal molecular motion. This process is called dissociation. Even the normally highly stable nitrogen molecule—N2—can be dissociated. When combustion products dissociate, they absorb energy from the combustion gas. This lowers its temperature and pressure. As the piston descends on the power stroke, the combustion gas cools, and the dissociated fragments CO, O2, and H2 can now recombine. As they do so, they release the energy they earlier absorbed, but by now the piston has already moved a significant distance, so the resulting bit of extra pressure has less distance in which to do work. As a result, there is a small loss of power. Again, the lower the temperature of combustion, the less dissociation loss there will be—another advantage for the Diesel engine with its excess air.


Nitrogen Oxide Formation When nitrogen dissociates, it may later recombine with itself to form N2 again, or it may combine with oxygen to form that most difficult of exhaust emissions, the nitrogen oxides, which are potent smog-formers. In gasoline engines, fresh charge is often diluted with inert exhaust gas in the process called Exhaust Gas Recirculation (EGR). The presence of this extra gas, which cannot take part in combustion, acts to reduce flame temperature, and thereby cuts the production of nitrogen oxides.

Particulates and PAHs These excellent advantages of the Diesel combustion process so charmed the EPA in the 1980s that for a time it appeared the Diesel would be the choice as the future auto engine. Then they discovered the chemistry carried along on Diesel particulates, and the love affair was over—back to the spark ignition engine. What was found on particulates was called PAHs—polycyclic aromatic hydrocarbons. These are multi-ring structures based on the benzene ring of six carbons. Some also carry nitrogen or oxygen side groups. Some of these compounds are highly carcinogenic, probably because of their ability to mimic biological substances. They are created when Diesel fuel burns incompletely, and they adhere to particulates. Particulates means smoke—we all know the words to the song that goes, “My exhaust is blowin’ black as coal.” Modern Diesels in good condition and adjustment don’t smoke much except at startup and slightly on full load. It doesn’t take much fuel burning incompletely to make visible smoke—even conversion of as little as half a percent of the fuel into smoke results in unacceptably dark exhaust. Any Diesel engine will smoke: (a) If its injectors are deteriorated so their spray has become coarse, (b) If more fuel is injected than can be completely burned.

Spark ignition engines can use all their air, but if a Diesel is made to use more than about 85% of its air charge, it will smoke. Sometimes, in the interest of increasing torque without regard for smoke, an operator may have his injector rack travel extended to supply more fuel than this, and the result is heavy smoke. In the song mentioned above, the owner-operator remarks, “My rig may be old, but that don’t mean she’s slow.” Particulates are the subject of a lot of research. Some developments center on not creating particulates in the first place. Among these are the Sonex combustion process and certain spark-assisted combustion schemes. Neither concept requires the desulfurized fuel now being discussed, whose purpose is compatibility with future Diesel catalytic converters for eliminating nitrogen oxides. Any way you look at it, desulfurizing will cost money, but catalytic converters may win in the end. On the post-treatment side, the basic scheme is to catch particulates on a high-temperature ceramic filter, from which they can be burned off periodically. In a recent public demonstration of such a system, a clean handkerchief was placed over the exhaust pipe with the engine running, then shown to be free of carbon and odor.
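That 85 percent figure translates directly into a smoke-limited fuel quantity. A back-of-the-envelope sketch, assuming a chemically correct air/fuel ratio of about 14.5:1 for Diesel fuel and an invented two grams of trapped air per power stroke:

    # Back-of-the-envelope smoke limit: only about 85% of the trapped air can be
    # used before the exhaust goes dark. The 14.5:1 chemically correct ratio and
    # the two grams of trapped air are rounded/invented for illustration.

    STOICH_AFR = 14.5        # grams of air per gram of Diesel fuel, roughly
    AIR_UTILIZATION = 0.85   # usable fraction of the air charge before smoke

    def smoke_limited_fuel(air_g):
        return air_g * AIR_UTILIZATION / STOICH_AFR

    air_per_stroke = 2.0
    limit = smoke_limited_fuel(air_per_stroke)
    print(f"Smoke-limited fuel: {limit:.3f} g per power stroke")

    # Extend the rack 15% past that and the extra fuel has no air left to burn in.
    print(f"Extra, unburnable fuel at +15% rack: {limit * 0.15:.3f} g")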

The Nitrogen Oxides Problem Why should Diesel engines produce any nitrogen oxides, when at less than full throttle their combustion takes place in the presence of so much excess cool air? The answer to this is called “sheath burning,” and takes place early in combustion. As fuel begins to be injected into the hot compressed air in the cylinder, the fuel is heated by the air but its evaporation also cools the air. The result is that the fuel does not instantly heat up to ignition temperature. This period of time—after the beginning of injection, but before ignition has occurred—is called the delay period.


Meanwhile, more and more fuel is being injected, providing yet more cooling through evaporation. Finally the hot air wins and the evaporated, mixed-with-air part of the fuel ignites somewhere. How does the burning proceed? Because a chemically correct mixture burns fastest, the early flame races along that contour surrounding the fuel droplet cloud where the mixture happens to be chemically correct. This fast flame is hot, because it is burning without excess air (there can’t very well be excess air if we’ve already defined the local mixture as chemically correct). Hot flames generate nitrogen oxides. After this initial period of burning the premixed part of the charge, combustion settles down to the normal Diesel diffusion flame burning, in which fuel vapor diffuses into the air around it, burning as soon as conditions permit. One way around this NOx problem is to run all the exhaust through a reducing catalyst. Another way is to somehow limit the amount of fuel that is in the cylinder at the moment combustion actually begins—by shortening the delay period. One of the strengths of the Diesel engine is that it can be made to operate on almost any kind of combustible liquid (peanut oil, canola, filtered crankcase drainings, etc.) but this doesn’t mean it does so happily. The fuel’s cetane number is the measure of how easily it ignites, and this varies among fuels. Fuels of differing cetane number will have ignition delay periods of different lengths, requiring (for best economy) different injection timings. In concept, cetane number is the reverse of the octane number used to measure the self-ignition resistance of spark-ignition fuels. One way to reduce the amount of fuel burned at high temperature as described above is to initially inject fuel at a low rate, a system called pilot injection. The small mass of fuel initially injected quickly heats to ignition. Once ignition is achieved, fuel injection rate can be increased to deliver the full amount quickly. New electronic
injectors operating from common-rail fuel supply can do this because they are just solenoid-controlled valves that do what they are told—including pilot injection. Another reason for interest in common-rail systems is that injection pressure remains constant at all rpm. This ensures good fuel spray break-up and rapid ignition, even at part-throttle. Spark-assist schemes work by forcing early ignition of the injected fuel, avoiding much of the ignition delay caused by the usual processes of evaporative cooling, fuel build-up, then finally ignition. The Sonex system operates in quite a different way. Small cavities are provided in the piston, each reached through a small orifice. During actual combustion, hot gas containing fuel molecule fragments and hydroxyl radicals is driven into these cavities. The small size of the cavity orifices causes the leak-down time of this gas to be comparable with the engine’s cycle time. This means that during the next intake stroke, gas from the cavities is still emerging, and mixes with the air charge. The presence of these radicals leads to earlier ignition with reduced soot and NOx, it is claimed.

Engine Speed and Ignition Delay Ignition delay period affects all Diesel engines. In large marine units, ten crank degrees of delay, at 90 rpm, allows plenty of time for the injected fuel to penetrate the air charge, heat up, evaporate, mix with air, and finally ignite. But in your truck’s engine, turning 2200 rpm, this length of time—say .02 second—would be almost a whole crank revolution. Therefore, as engines are made smaller and faster turning, the process of mixing injected fuel with the air charge must somehow be appropriately speeded up. In large truck engines with full bore diameter combustion chambers, high pressure injectors with six radial injector orifices are used to push fuel rapidly outward through the very dense
compressed charge. Injection pressures of 15,000-22,000 psi are necessary to give the fuel the velocity needed to break it up into fine droplets that will evaporate with the necessary speed. Injection velocity can be higher than 1,000 feet per second. In smaller heavy-duty engines, the piston may have a central hockey-puck-shaped cavity of smaller-than-bore diameter, and employ intake swirl. During the intake stroke, the intake flow is given a tangential direction by various means, causing the air charge to rotate around the cylinder axis. When the piston nears TDC, most of this charge is forced into the piston cavity where its rate of rotation speeds up. Fuel injected into this rotating mass of air is mixed more quickly yet. To reach even higher crank speeds, pre-chambers may be used, connected to the space above the piston by a small orifice. As the piston rises on compression, air is driven into the prechamber where it forms a very rapidly rotating, small single or double vortex. Fuel injected into this environment (IDI, or Indirect Injection) is mixed most quickly of all—appropriate for the higher rpm levels of such small engines. The drawback of the prechamber is its increased surface area, which increases heat loss and therefore fuel consumption. Where large Diesels may use less than .35 pound of fuel per horsepower, per hour, prechamber engines consume .42 pound or more. Compare these numbers with .5 pound for well-designed spark-ignition engines. We humans never let anything alone, or allow anything to remain simple. The Diesel engine is an excellent prime mover, but to coexist with current human needs, its combustion process and exhaust emissions are being brought steadily closer to perfection. Wake me when it’s over. Turbo Diesel Register Issue 30



Tires and the Marketing of America Tire Construction All the recent talk about mysterious tire defects suggests it’s time for some background on how tires have developed to their present technological level, how tires are made, and how they respond to their conditions of use. Progress in tires has always dealt with the twin problems of strength and temperature management. A tire is basically a flexible rubber-impregnated fabric structure, given rigidity by the tensioning of its carcass of cord fabric by inflation pressure. Applied over this carcass is the part that rolls on the road—the rubber tread. The flexibility of the tire allows it to lay down a flat footprint on the road, large enough to generate useful traction. The earliest tire carcasses were made of cotton fabric very much like heavy canvas, with interwoven fibers. Rubber didn’t stick to this fabric very well, and the weakness of cotton required many plies of fabric to make an adequately strong tire. Rubber is elastic, but not perfectly so. When you stretch or otherwise deform a piece of rubber with 100 units of energy, then release it, it returns to its original shape, giving back not 100 units of energy, but some lesser amount – say 70 units. The rest—that other 30% of the deformation energy—appears as heat in the rubber. Flexing rubber generates heat. Because this is so, as a tire rolls and the tread and carcass rubber flexes to lay a flat contact patch on the road, heat is generated. The more rubber there is in the tire and tread, and the faster it rolls, the more heat it generates. The lower the inflation pressure, the bigger the flat footprint laid down on the road, and the more sharply the rubber must flex as it enters and leaves that flat footprint. The lower the inflation pressure, the more heat is generated as the tire rolls. Because applied load also increases footprint size and rubber flexure, the
more load the tire carries, the more heat it generates. Back in 1920, pneumatic truck tires were impractical because the heat they generated in the necessary 15 or 20 plies soon destroyed the tire’s strength. This, and the absence of good highways, were the reasons why there was no long-distance trucking before about 1927. In-city trucks used solid rubber tires in that period, and these were limited by flex-driven heating to low speeds like 20 mph. Racing cars at Indianapolis actually had their pneumatic tires catch on fire from high-speed heating. Interwoven tire fabric had to be abandoned very early on because, as the tire flexed, the interwoven fibers of the fabric sawed at each other until they broke. This caused the adoption of so-called cord fabric, which has all its fibers going in only one direction—there are no interwoven fibers crossing them. To get strength in all directions, these cord plies were applied at an angle to the tire centerline—one ply angled to the right at 45 degrees, the next to the left, and so on. Each ply was embedded in a thin skim layer of rubber, so that when the plies became part of a tire, they were separated from each other by this rubber, and so were unable to saw against each other. The rubber in these plies was “green”, that is, uncured, and in a slightly sticky condition. This stickiness, called tack, is what holds the parts of the tire together during the building process. Early tires were built on a tire-shaped metal form, on which they were cured by heat in wrapped stacks, inside steam-heated autoclaves. Later, tires were built as flat bands on a building drum, then given shape by being driven into a heated female tire mold by the inflation of a ring-shaped bag. After the required number of fabric plies were built up, the tread was applied as a long, extruded belt of rubber, carefully applied so as to trap no air between carcass and tread, rolled into place with rollers as the building drum rotated. The right and left edges of the cord plies were rolled over two hoops
of high-strength steel wire called the beads, in alternating directions. In the finished tire, these beadwires provide the tensile strength to prevent inflation pressure from forcing the tire’s edges up and over the rim flanges. The green rubber contains curing agents, accelerators, and cure modifiers, so that when the green rubber enters the hot mold at 315 degrees F, it cures to produce rubber of the desired properties, in a reasonable length of time. Curing is a process by which the soft, putty-like green rubber is transformed into a tough, elastic solid. The long rubber molecules are cross-linked to each other during curing by sulfur bonds—a process driven by heat. Once the tire is cured—a few minutes—the mold opens in clamshell fashion and the finished tire is pulled out. It is then placed on a dummy wheel and inflated to tension its cord fibers in the positions they will occupy in use. To make rubber stick better to the cotton fabric, it was first run through baths of rubber thinned with solvents, to drive the rubber deep into the fibers. This brought a great improvement in the strength of the tread-to-carcass bond, and in tire integrity as a whole. Since tires generate heat as they roll (and more as they roll faster), at some high speed a tire may generate enough temperature to threaten its structural integrity. Such failures are familiar to anyone who has an interest in motor racing. The two major types of failure are blistering and chunking. In blistering, oily or waxy elements of the tread rubber, added to enhance the softness and grip of the tire, begin to boil and generate gas within the rubber. As a result, the affected part of the tread turns to foam and swells up, causing thumping and vibration. In chunking, heat deteriorates the bond between tread and carcass, allowing pieces of tread to separate and fly off. I have seen pieces of thrown tread penetrate heavy fiberglass seats on racing motorcycles, and we know from
the recent Concorde aircraft disaster that thrown tread (in that case moving at between 200 and 300 feet per second) can penetrate fuel tanks and destroy hydraulic and electrical connections. Anyone who has driven a car has seen plenty of separated truck tire treads by the roadside. Checking tire pressures on an 18-wheeler takes time, which is why it’s usually done by bonking each tire with a tire iron. If it sounds like the others, it’s assumed to be okay. Sometimes the checks aren’t done, and tires come apart. Tires for the fastest of all applications— racing at Bonneville—have the thinnest possible tread. This reduces heating from rubber flexure, and it relieves the rubber-to-carcass adhesive bond of most of the centrifugal load created by the mass of the tread. In track racing, whenever a tire shows excessive operating temperature (as read by a thermocouple needle, carefully pushed down to the tread/carcass interface), two remedies may be tried. First, inflation pressure is increased to reduce flexure. Second, some of the tread thickness may be pared, or “skived” off the tire, to remove some of the source of the heating. Something needs to be said about how rubber creates traction. By being elastic, it is able to take a print of all the asperities on the road surface, creating a kind of “key” between tire and road. Other, more complicated, phenomena also contribute. Grip increases with the total surface area of rubber in actual contact with the road, which is why tires that require the highest possible grip in dry conditions have no tread pattern at all. They are slicks. The whole purpose of tire tread patterns is to provide drainage pathways for water in wet-road operation. Race tires for moist conditions have just a very few wavy lines cut into them. So-called full rain tires have extensive drainage, and resemble ordinary auto tire tread patterns. The more cuts and channels in a tread, the less stiff it becomes, the more it flexes in use, and the hotter it runs. When rain
tires are used in a race, and the rain stops, the tires promptly overheat and must be exchanged for intermediate or slick tires. The reason Formula One racing car tires now have five grooves is a decision by the F1 governing body to reduce tire grip for race marketing reasons. The edges of tire tread patterns do not generate traction by cutting into the road – the road is much, much harder than the tire. At the end of the 1920s, tire technology had advanced enough that truck tires could be built with some chance of survival on the roads of the time. In a well-publicized PR stunt, Goodyear filled a convoy of trucks with tires and drove them across the whole US, incidentally using up all the tires in the process. The point was made; tires were ready for long-distance truck service. During and after WW II, cotton as a carcass material was abandoned for the much stronger nylon, pioneered in aircraft tires. There were problems in making rubber stick to the new material, but these were overcome. Because of the strength of nylon, fewer plies were needed to achieve a given strength, so tire casings became thinner. Less rubber flexing meant less heat generated, so tread wore more slowly. That, in turn, allowed use of thinner tread for equal mileage, leading to less heating, and so on, in a cycle of improvement that continues to this day. Other types of tire carcass fiber have replaced nylon – rayon, polyester, steel, and aramid. The constant improvement in the strength of tire fibers has allowed a steady decrease in the number of plies necessary to achieve mechanical strength. This, in turn, has reduced heat generation, making tires in general much safer and longer-lasting. The crowning achievement of tire technology is the radial-ply tire, which requires only one carcass ply, and therefore operates with the least heat generation. Radial tires for heavy trucks were viewed with suspicion by operators when they were introduced in the early 1970s, but the outstanding durability
and long life of these tires soon made believers of them. Bias-ply tires are built as I described above—by laying on plies with their fibers at an angle to the tire centerline, the first angled one way, the next one the other way, and so on. In a radial-ply tire, the single carcass ply is applied with its fibers at right angles to the tire centerline, so that in the finished tire, these fibers run up the sidewalls in a radial direction, then straight across the tread region at right angles to centerline. The radial tire was invented by Michelin in about 1948, and has since been improved by many kinds of modifications such as various types of under-tread stiffening belts and sidewall stiffness modifiers. The radial tire was made possible by the development of cord fabrics strong enough to make the concept workable. In a sense, the development of the radial tire was nearly suicide for the tire industry. Where bias-ply tires had lasted 15,000-20,000 miles, radials immediately more than doubled this. This made Akron, Ohio, formerly the tire capital of the world, into a ghost town of empty brick manufacturing buildings.

Marketing Versus Physics We live in the commercial world that some call “The Megastore.” Marketing, image, and brand recognition are everything, and quality is, at best, a secondary issue. We no longer buy cars and trucks for value. We buy adventure, a rugged image, an American icon. Vehicles marketed as part of the rugged, manly off-road experience therefore must have tread patterns that suggest all those pioneer virtues. These are invariably described in the marketing blurb as “aggressive tread design.” What this means is that they are rough, knobby-looking affairs, with deep canyons cut between ranks of tall, sculptured, Gibraltar-like tread blocks. You can be sure that plenty of focus-group time is consumed in determining just what kind of tread pattern will light the
public’s fire this year. Never mind the fact that the only off-road mud likely ever to spatter these SUVs comes from a spray can bearing the vehicle manufacturer’s accessory part number. Now here’s the problem. By jacking the tire up on all these hundreds of little rubber feet, by applying this thick, sculptured layer of tractor-styled tread rubber, the tire designer is building a stove into his tire. Remember, the more rubber there is in the tire, the more heat it will generate. The tire engineer knows all these things better than I do, but as noted above, marketing is pretty important in the Megastore. The vehicle manual tells us to check tire pressure monthly, and to increase tire pressure when carrying heavy loads. It also provides speed warnings or even limits. But in one review I know of, forty percent of enthusiast vehicles checked at a touring rally were found to have one or more tires underinflated by 5 psi or more. The combination of tires burdened with excess heat generated by flexing, thick “aggressive” tread patterns, plus possible extra heat resulting from underinflation, plus heat from operation in the American west and southwest, appears to result in instances of tread separation. Tractor tires were never intended for high-speed operation, but marketing found a special use for them. In the press, these tread separations are spoken of as if they were caused by some mysterious agency, a “sinister force” yet to be discovered. Nothing whatever is said of the possible physical circumstances of underinflation, operation in hot climates or at high speeds and loads, or the fact that the thicker the tread is made, the hotter the tire must operate. To the press, it’s all a mystery. Could the vehicles themselves somehow increase the probability of tire failure? This question has to be asked because, in the game of corporate responsibility, everyone sues everyone else. Remember the big rollover scandals
that panicked SUV owners so recently? On the basis of what she’d read in Consumer Reports, my sister went out and bought the Range Rover, because it passed whatever rollover test CR used. My bet was that Rover wisely fitted tires with harder, less grippy tread rubber, or deliberately underinflated the tires, thereby reducing their cornering stiffness enough to make the vehicles skid before they would roll over. Problem solved. Many people are confused about the effect of tire pressure on tire grip. When stuck in sand or mud, it is useful to reduce tire pressure, thereby increasing the area of the tire footprint and making the tire less likely to dig itself in. This makes it easy to assume that lower pressure always equals more traction. On pavement, the reverse is true. In this case, reduced inflation makes the tire casing less stiff, allowing the footprint to distort and lift up from the pavement. This causes reduced tire grip. Those of you old enough to remember the Corvair handling controversy may also recall what was done to “fix” it. The swing axle rear suspension could, under certain circumstances, jack up and destroy rear tire grip, causing the car to oversteer violently and spin out of control. The answer? Chevy reduced the grip at the front by the simple expedient of placarding front tire inflation at an amazingly low 12 psi. It’s a law of physics, not a mystery, that if you build a vehicle with a given track (lateral distance between wheels), but with its center of mass raised high enough off the ground, it will tip over before it begins to slide. The focus groups tell the manufacturers how high the vehicles have to be to look “Baja-rugged” and adequately manly, and that’s how tall they make them. There are no two ways about it—if you make vehicles taller, they tip over more easily. Perhaps, as some are saying, one or another of the SUV makers did write reduced inflation pressures into their owner’s manuals, in the interest of avoiding the already prickly rollover
problem. Then the question was, will the tires give adequate reliability at that pressure? The tire maker’s statistics probably looked pretty good. Nothing’s perfect—there are bound to be a few defects because even fully-automated manufacturing cannot produce zero defects. Because tires have to be heat-cured from their surfaces inward, the degree of cure decreases with depth, and surely some zones in some tires will be to a degree undercured, others slightly overcured. When plies, breakers, and tread are applied during the build process, some air or even moisture may possibly be trapped between, forming nuclei around which trouble becomes a bit more likely. This means there will be some statistical scatter in the tolerance of a population of tires for load, speed, temperature, and accidental underinflation. It is the job of quality control to squeeze that scatter to an acceptable width. The most vulnerable tires at the edge of that scatter will not all belong to people who travel loaded, at 90 mph, through Death Valley in summertime, underinflated for conditions—but some will. And when those great thick treads get cooked off of the tires and thrash around inside the wheel wells at a hundred feet per second, some may damage steering linkage, and the sudden thumping and banging are going to badly spook their drivers. Some will coast, shaken, to a safe stop. Others will apply the universal remedy and jam on the brakes, compounding their problems by locking the wheels and so losing control. Some will actually be injured or killed, and we’re all sorry about it. It may be that there are more defects in a population of the subject tires than in some other tire population, but the public debate on this business is not likely to give us that information. Therefore, we won’t really learn anything useful. I suspect that all tire makers try to achieve similar, industry-wide standards of tire quality scatter, but now it’s the job of the courts and the teams of lawyers to find out if this is indeed so in this case.


When the Concorde supersonic transport had its Washington/Dulles tire incident in 1979, fragments of a separating tire tread penetrated the aircraft’s fuel tanks in more than ten places during take-off, but fortunately there was no fire that time. Once a perceptive passenger alerted the flight crew to the existence of a 3 X 4 foot hole in the top of the wing, the machine was turned around and landed safely. The important thing about this incident was what was changed because of it, some of which is as follows:

(1) Air France switched to another maker of tires

(2) Inflation pressure was raised from 187 to 220 psi, in the interest of reduced flex and heating (each tire carries 50,000 pounds of load at take-off)

(3) Much more frequent checks of tire pressure before takeoff were mandated

(4) Strain gauges were added to the main wheel trucks to detect and provide cockpit warning of asymmetric strain resulting from a deflating tire

(5) In any case in which wheel brake temperature had risen above a set level, the entire assembly was to be stripped and inspected

(6) Pilots were ordered to limit taxiing prior to takeoff (even rolling at low speed at full take-off weight generates a lot of heat, and constant use of the brakes to control taxiing speed generates even more)

Most of what we can learn from this is obvious—treads separate because heat destroys their bond to the tire casing. The lower the tire pressure, and the more weight being carried, the more heat is generated. Frequent tire pressure checks are necessary to prevent accidental underinflation. Other possible sources of tire heating must be controlled. On a commercial aircraft, all these safety matters are handled by those professionals who carry that responsibility. Even with the exercise of great care, accidents are still possible. In the case of privately-owned automobiles, matters like tire inflation, vehicle load and speed, highway and ambient temperature, and the possibility of one or more dragging brakes are the responsibility of the operator. Whatever the outcome of the Firestone affair, any operator can greatly decrease his or her chances of ever suffering a tire tread separation by doing the following:

(1) Choosing tires appropriate to the speeds and loads contemplated

(2) Being aware of conditions—i.e. not driving at excessive speeds in very hot weather or when carrying heavy loads

(3) Setting and frequently maintaining tire pressures at the values recommended for current loads and speeds.

(4) Sensing the abnormal. Experienced racers running at high speed on the Daytona banking slow down instantly when they feel the sudden build-up of vibration that signals blistering or chunking.

Turbo Diesel Register Issue 31


Diesel Developments A lot is happening in Diesel engine development and I expect the pace of change to accelerate. We’re not done yet. The obvious driving force is emissions regulation by governments. This past December, EPA released much tighter standards that Diesels must soon meet. There is also serious pressure from trucking companies. Trucking has become so competitive during these past years of economic boom that minor differences in specification can determine the equipment choices made by major fleets when replacement time comes. The automotive trend toward heavy SUVs makes it hard for automakers to meet mandated fleet fuel economy levels. Expect to see new light Diesel engines replace gasoline-fired powerplants in these applications. Yet another reason (not yet accepted in the US, where global warming is perceived as a sinister plot by the Democratic Party) for increased interest in Diesel power is that carbon dioxide emissions per horsepower-hour are lower with Diesel power than with gasoline. The result is that Diesel development has never been so rapid as it is today. Turbocharging was a revolution for highway Diesels because it cut engine weight-per-horsepower. This, in effect, made friction losses smaller in relation to horsepower. When an engine idles, 100 percent of its power is being consumed in friction. As the load is increased, friction’s share falls progressively until, at full load and design rpm, friction may only be 15 percent of output horsepower. What this means is that the lower the load on an engine, the greater the percentage friction loss. As it turns out, the friction coefficient of a plain sleeve bearing like those used on cranks and rods drops as load is increased, becoming a minimum at a load just short of that required to cause bearing failure. Therefore turbocharging, by increasing the load on bearings beyond non-supercharged full load, has the effect of further reducing the friction loss as a percentage of the power delivered. The effect becomes
stronger as engine rpm is reduced. Those of you who flew recip-powered propeller aircraft in WW II, Korea, or Vietnam will recall that maximum fuel endurance comes at minimum engine rpm and high boost to pull a steep prop pitch. The cause is the same. There are problems with turbocharging. A major one is that as more and more air is delivered to a cylinder for each power stroke, so more and more fuel must be injected in proportion. That takes time. While the fuel is being injected, the crankshaft is turning and the piston is moving. Any fuel injected after the piston has moved a significant distance along its power stroke is burned at less than maximum efficiency. For example, if the engine’s compression ratio is 17:1 and the piston has already moved through 1/16 of its stroke, the effective compression ratio applied to fuel burned that late is only 9:1. Forcing more air into the engine and burning more fuel certainly makes more power, but because of this late-burn effect, it doesn’t make as much more power as it should because less of the fuel is burned at or very near TDC and full compression ratio. To get more fuel into the cylinder before the piston has moved significantly takes more pressure, which is part of the reason why such high injection pressures (like 20,000 psi) are being used.
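The late-burn arithmetic above generalizes easily. Here is a short sketch using the same 17:1 engine; piston position is treated as a simple fraction of the stroke, and rod geometry and real burn rates are ignored.

    # Expansion ratio remaining for fuel that burns after the piston has already
    # moved some fraction of its power stroke. Volumes are in clearance-volume
    # units: clearance = 1, swept = r - 1. Rod geometry and burn rate ignored.

    def effective_ratio(geometric_cr, stroke_fraction):
        cylinder_volume = 1.0 + stroke_fraction * (geometric_cr - 1.0)
        return geometric_cr / cylinder_volume

    for fraction in (0.0, 1 / 16, 1 / 8, 1 / 4):
        print(f"fuel burned at {fraction:5.3f} of stroke -> "
              f"effective ratio {effective_ratio(17.0, fraction):4.1f}:1")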

Common Rail Injection Classic Diesel fuel injection uses a jerk pump containing tiny pump plungers driven by a cam geared to the engine. Fuel quantity is controlled by varying the effective stroke of these plungers—a port in the wall of the plunger cylinder is opened once the desired fuel has been injected, thereby “spilling” the remainder of the fuel back to the low-pressure line. This works well at the engine rpm for which it is designed. In this case, the injection pressure drives the fuel spray deep into the dense, compressed cylinder air charge, thereby achieving
good fuel dispersion and an efficient burn. At lower engine rpm, pump rpm and pressure are also less, so the slower-moving pump plunger drives fuel from the injection nozzle with reduced velocity. The result can be less fuel spray penetration and/or incomplete fuel droplet breakup, leading to exhaust smoke and reduced efficiency. In big truck engines, jerk pump operation was optimized for the most frequent load condition, which was full power. Engines for smaller trucks or for cars must operate efficiently over a wider load range because the open road is not their only gig. One way of obtaining constant injection pressure at all engine rpm is to deliver the fuel from a so-called common rail, which is a fuel manifold whose pressure is kept at the desired level by a pump. Fuel from this common rail is delivered to the individual cylinders by injection valves that may be either mechanically or electrically operated.
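A toy comparison makes the contrast plain. The sketch below assumes that a jerk pump’s nozzle pressure falls roughly with the square of engine speed (plunger velocity scales with rpm, and orifice pressure drop with velocity squared), while a common rail holds whatever pressure its pump is regulated to. The pressures and rated speed are invented round numbers, not data for any real pump.

    # Toy model only: jerk-pump nozzle pressure assumed to fall with rpm squared;
    # common-rail pressure held constant by its regulated pump. All figures are
    # invented round numbers, not catalog data.

    RATED_RPM = 2200
    RATED_JERK_PSI = 18_000     # assumed full-speed jerk-pump injection pressure
    COMMON_RAIL_PSI = 20_000    # assumed regulated rail pressure

    def jerk_pump_psi(rpm):
        return RATED_JERK_PSI * (rpm / RATED_RPM) ** 2

    for rpm in (800, 1400, 2200):
        print(f"{rpm:4d} rpm: jerk pump ~{jerk_pump_psi(rpm):6.0f} psi, "
              f"common rail {COMMON_RAIL_PSI} psi")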

Soup-Ups And Smoke When a stock engine’s power is boosted by equipping it with either a bigger turbo or with two-stage turbocharging, the extra air delivered must be matched by extra fuel. In jerk pump engines, this meant replacing the stock injection pump with one of greater capacity. Fuel injection is normally a carefully developed system that ensures good fuel penetration into the air charge, with rapid evaporation and light-up. Just installing a bigger pump gets more fuel into the engine, so power increases. But simply pushing more fuel into the cylinders does not guarantee that it will penetrate, break up properly, or burn quickly. Highway-certified engines normally “burn” only about 80% of their air charge. Supplying more fuel than this results in less complete combustion. The result is the dense exhaust smoke you see at truck drag racing events.

NOx and Particulates Because Diesel engines burn their fuel in the presence of about 20 percent excess
air, there is little CO or unburned HC in their exhaust—these are products of incomplete combustion. What they do produce is nitrogen oxides (a product of high-temperature combustion) and particulates—both troublesome to eliminate. Particulates are soot particles—clustered carbon atoms —with high molecular weight unburned hydrocarbon molecules stuck to their surfaces. Unfortunately, these adhering hydrocarbons turn out to include some species that are powerful carcinogens. In layout, these molecules resemble tiny six-sided bathroom tiles in groups. The most carcinogenic of these are the ones whose arrangement of six-sided carbon rings has “bays”—regions open on one side but surrounded everywhere else by other carbon rings. As is often the way with such things, measures that suppress nitrogen oxide production may increase soot, and vice versa. Nitrogen compounds form at high combustion temperature, but cooling combustion by recirculating cooled exhaust gas (cooled EGR) tends to increase particulate formation. A lot of high-temp combustion results when fuel takes time to ignite after being injected into the cylinder. This time is called the delay period, and it is just the interval required for the hot, compressed air charge to evaporate and heat up the injected fuel droplets until they ignite. If delay is long, a lot of fuel gets injected before ignition occurs, and much of it then burns hot, creating NOx compounds. To avoid this, so-called pilot injection can be used. A small amount of fuel is injected before the main fuel charge, so that when it ignites, there is not enough fuel present to create the high temperature that makes NOx. But there is, by the same principle, also not enough heat to avoid creating particulates. In some cases, these particulates may be partly burned up by post injection—squirting in a small amount of extra fuel after the main combustion phase. Getting everything to the right balance is not an easy problem to solve. Currently, one technique for solving this dual problem uses one aspect to fix the

other. It turns out that nitrogen dioxide is better at burning up particulates than oxygen by itself. This suggests using the nitrogen oxide emissions as an oxidizer to burn up the particulate emissions. While the engine runs, nitrogen dioxide is adsorbed onto a special surface in the exhaust catalyzer so it can’t stream out in the tailpipe gas. This is then used to burn up the particulates, which have been trapped in the tiny, complex pores in a ceramic exhaust filter. When the two combine, they react to form ordinary (and harmless) diatomic nitrogen, plus carbon dioxide. Making this happen in correct proportions and without plugging up the particulate filter requires some trickery. This may take the form of reversing the flow direction of exhaust gas through the ceramic particulate filter periodically, or of adding extra reactive nitrogen from an external tank of urea. The effect of pilot injection in reduction of nitrogen emissions has been known for some time. It has now become much easier to implement with the coming of high-pressure (20,000-psi) common-rail injection through solenoid-controlled nozzles. A computer can send lots of pulses to a solenoid valve in a short time, but it’s hard to imagine duplicating the effect with the traditional jerk pump and its tiny cam-driven plungers. Pilot injection also cuts noise, because the less fuel there is in the cylinder at light-up, the less noisy the thump that is delivered to engine structure by the resulting pressure rise. As I left my favorite local diner recently, there stood a late-model highway tractor, idling quietly in the parking lot. Quietly? A Diesel? I concluded this must be one of the new engines in which this improved pilot injection combustion process is used.
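Expressed as data, the pilot/main/post sequence is nothing more than a short list of solenoid commands, which is why it arrived with electronic injection. The crank angles and pulse widths in the sketch below are placeholders of my own, not a production calibration.

```python
# Toy injection schedule: each event is (crank degrees relative to TDC, with
# negative meaning before TDC, and solenoid-open time in microseconds).
# The numbers are placeholders; the point is that adding or moving a pulse
# is a software change, not new pump hardware.

def injection_schedule(load_fraction):
    pilot = (-18.0, 150)                        # small early shot to shorten the delay period
    main = (-6.0, 400 + 600 * load_fraction)    # main charge grows with load
    events = [pilot, main]
    if load_fraction > 0.6:
        events.append((12.0, 120))              # small post injection to help burn off soot
    return events

for load in (0.3, 0.9):
    print(f"load {load:.0%}: {injection_schedule(load)}")
```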

Diesel Research A group of researchers at the University of Wisconsin has recently done some high-level playing with a sophisticated computer model of Diesel intake, air motion, fuel injection, and combustion. Rather than just try whatever ideas they might have had, they decided to

43

basically let the computer try everything. They gave their program certain rules by which to evaluate the results of each simulation run, and by trying one set of combinations of variables after another, steering toward improved results, they were able to learn some novel things. For example, instead of injecting the whole fuel charge at once, much better results were obtained by injecting it in a series of closely spaced micro-injections. Why should this work? All-at-once fuel injection creates big fuel-rich zones (typically six or eight of them—one for each orifice in the central injector tip) surrounded by leaner zones. Combustion takes place fastest where the mixture is chemically correct, and more slowly where richer or leaner than that. Combustion is “completed” by a process of random mixing of rich and lean zones such that most fuel molecules eventually combine with oxygen to liberate heat. This takes time. It would be a much better process if the fuel were better mixed locally, resulting in overall lean-burn combustion whose lower temperature produces little nitrogen oxide. Injecting the fuel in sequential micro-doses creates more numerous and smaller rich zones, surrounded on all sides by much leaner zones. Mixing and complete combustion can therefore take place faster. This is a closer approach to true lean-burn. Soot forms fastest in a cool flame, but by distributing the fuel better, sequential injection may very well prevent local very rich cool spots from forming—rich sources of soot. When the computer results were replicated in real engines, satisfying gains in efficiency and drops in pollutants were achieved. Anything that speeds up combustion has some chance of increasing fuel efficiency. As noted above, if combustion takes so long that it’s still in progress as the piston moves down on its power stroke, the later parts of the fuel burn exert their effect at lower pressure, and through a reduced expansion stroke. That’s a loss that results in high exhaust temperature and increased fuel consumption. The new breed of


common-rail solenoid-controlled injector can do a better job of getting the fuel into the cylinder in a short time. Efficiency —especially at part-load—is thereby increased. This all sounds like a lot of work, and it is. But when lots of good researchers concentrate on a problem, sooner or later the best and simplest methods of solving it are discovered. That will mean more efficient, quieter, and much less polluting Diesel engines in all sizes.
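For readers who like to see the shape of such a computer search, here is a much-simplified sketch. It is not the Wisconsin group's code or method; their evaluator was a full combustion simulation, while the toy scoring function below is invented purely so the loop has something to chew on.

```python
import random

# A bare-bones "try combinations, keep what improves the score" loop in the
# spirit of the search described above. The variables, their ranges, and the
# toy scoring function are all stand-ins of my own.

def toy_model(p):
    # Pretend penalty: lower is better. Invented surface, not real combustion.
    soot = (8 - p["pulses"]) ** 2 + abs(p["swirl"] - 1.5)
    nox = abs(p["soi_btdc"] - 10) + p["swirl"]
    return soot + nox

BOUNDS = {"pulses": (1, 8), "swirl": (0.5, 3.0), "soi_btdc": (0.0, 25.0)}

def search(model, bounds, iterations=500):
    best = {k: random.uniform(lo, hi) for k, (lo, hi) in bounds.items()}
    best_score = model(best)
    for _ in range(iterations):
        trial = {k: min(max(random.gauss(v, 0.1 * (bounds[k][1] - bounds[k][0])),
                            bounds[k][0]), bounds[k][1])
                 for k, v in best.items()}
        score = model(trial)
        if score < best_score:          # steer toward improved results
            best, best_score = trial, score
    return best, best_score

print(search(toy_model, BOUNDS))
```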

A Note About Turbocharging The usual way to turbocharge engines is to hook up a turbocharger’s air outlet to the engine’s intake plumbing. Where’s the problem? The problem is in the word plumbing—that’s just what it looks like. Pipe, elbows, joints. This is the way supercharged gasoline engines were plumbed years ago. Intake plumbing didn’t have to be smooth and free of flow restrictions, they reasoned, because you had all that pressure to cram the air in whether it wanted to go or not. If you needed more air, you cranked up the boost. Then Cosworth Engineering began to turbocharge race engines for Indianapolis. They discovered that concepts that worked well in unsupercharged racing engines—smooth, straight intake ports with minimum flow restriction, having tuned length to take advantage of organ-pipe effects—worked just as well in a turbocharged engine. Therefore they designed their turbo engines as if they were unsupercharged engines, running in an artificially dense atmosphere. They were rewarded with “free” horsepower because now that lots of turbo boost wasn’t being wasted in forcing air around sharp corners where it didn’t want to go, more of it was getting to the cylinders where it could make power. Therefore I expect to see Diesel intake “plumbing” evolve to look a lot more like the slick, high-flow intake systems we now see on unsupercharged high performance gasoline engines. Air from

the turbo will go into a long plenum, from which individual intake pipes will go to each cylinder.
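A crude way to see the "organ-pipe" timing at work is to compare how long a pressure wave needs to run down a runner and back against how long the intake valve stays open. The runner length, cam duration, and sound speed below are assumed round numbers, not anyone's design values.

```python
# Rough intake-runner timing check. A suction wave leaves the valve, reflects
# at the open plenum end as a pressure wave, and is useful if it returns while
# the valve is still open. All inputs are illustrative.

C_SOUND = 340.0   # m/s, roughly the speed of sound in warm intake air

def wave_round_trip_ms(runner_length_m):
    return 2.0 * runner_length_m / C_SOUND * 1000.0

def valve_open_time_ms(rpm, intake_duration_deg=230.0):
    return intake_duration_deg / (6.0 * rpm) * 1000.0   # crank turns 6*rpm deg/s

for rpm in (1600, 2600, 3600):
    print(f"{rpm} rpm: valve open {valve_open_time_ms(rpm):.1f} ms, "
          f"0.5 m runner round trip {wave_round_trip_ms(0.5):.1f} ms")
```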

Other Fuels? Diesel fuels consist of longer-chain or multi-ring hydrocarbon structures that have to be broken apart during combustion so they can recombine with oxygen to release heat. Unfortunately, many of the hydrocarbon varieties in this fuel strongly resist being broken up unless the temperature is pretty high. By the time thermal collisions have broken up the most shock-resistant carbon chains, most of the oxygen already has partners. That leaves a lot of free carbon chains floating around. Carbon sticks to everything—that’s why it’s so often used for purifying water or whiskey, and why it makes good, high-friction brake disks. It also sticks to itself, so free carbon in the combustion chamber can clump together faster than it can find oxygen partners with which to burn. The result is soot particles. Simpler fuel molecules break up and burn faster, and could lead to reduced soot emission. This is why there are calls to run Diesel engines on “reformulated fuels.” The day may come when cities or regions in which air pollution remains a difficult problem will require the use of such fuels within their borders. One thing is certain. If catalytic exhaust treatment systems become the dominant Diesel clean-up technology, low-sulfur fuels must be provided for their use. Sulfur deactivates catalysts. Refinery processes for desulfurization are now in development.

Variable Valve Timing Anyone who’s built a few circle-track V8s knows that cam timing is job-specific. If you want bottom acceleration, you run short duration and long lobe centers. If you want top end, the recipe is different. No one setting is anything but a compromise.

44

This gets worse with supercharging or turbocharging. Often, to keep pistons and valves cool, long timings are used in turbo engines. Air blowing through the cylinder on valve overlap performs a useful cooling of pistons and exhaust valves. But if this same engine is operated at lower loads, the long valve timing reduces the air charge because late intake closure allows the rising piston to pump out what was just sucked in on the previous intake stroke. The upshot is that engineers have long wished for variable cam timing. Many patented devices for achieving this exist, but no system combined all desirable attributes—variable lift and timing, reliability, simplicity, and acceptable cost. Now the increased value of a solution may have brought one into being. Navistar (International) has introduced a hydraulic system of valve operation that has no camshaft. Instead, each valve is opened and closed by hydraulic pressure, delivered through an electromagnetic control valve directed by a computer. Valve spool motion is very small—only a few thousandths of an inch—but it can control a much larger flow of high-pressure hydraulic fluid. This technology has been developed as an offshoot of military systems by the Sturman Co. In essence, engine valves can be operated through any lift and timing desired, simply by changing the programming of the control signals. This would allow an engine to have short valve timing at lower rpm, with the timing extending as rpm built up during acceleration. This ensures a more nearly constant air charge regardless of rpm. Longer overlap could be provided during high boost operation of turbo engines, as a means of internal cooling. Likewise, engine braking would become a software item—a special set of computer commands that turns the engine into an air compressor by opening the exhaust valves at the top of the compression stroke. Variable valve timing allows the engine to be continuously re-optimized for changing conditions—no more compromise.
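To make the "timing as software" idea concrete, here is a toy valve-event map of the kind such a system might consult. Every angle and lift figure is a placeholder of my own; nothing here describes the actual Navistar or Sturman calibration.

```python
# Toy "cam profile in software" for a camless exhaust valve. All numbers are
# invented for illustration; changing an event is a table edit, not a new cam.

def exhaust_valve_events(rpm, mode="normal"):
    if mode == "engine_brake":
        # Open the exhaust valve near the top of the compression stroke: the
        # engine becomes an air compressor and the compression work is thrown away.
        return {"open_deg_btdc_compression": 5.0, "close_deg_after": 20.0, "lift_mm": 2.0}
    if mode == "high_boost":
        # Extra overlap so boost air can sweep through and cool piston and valves.
        return {"open_deg_bbdc_power": 55.0, "close_deg_atdc_overlap": 30.0, "lift_mm": 12.0}
    # Normal running: duration stretches gently with rpm to hold the air charge steady.
    return {"open_deg_bbdc_power": 40.0 + 0.004 * rpm,
            "close_deg_atdc_overlap": 10.0 + 0.003 * rpm,
            "lift_mm": 12.0}

print(exhaust_valve_events(1800))
print(exhaust_valve_events(1800, "engine_brake"))
```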


Don’t expect to see this valve drive system on Formula One racing cars soon—its operating speed is presently appropriate only for slower RPM truck crankshaft speeds. Just when we think of the Diesel engine as a highly developed and efficient power system, along come significant new refinements that push it to new heights. There’s more to come. Turbo Diesel Register Issue 32

45


In the Toolbox Every person’s toolbox contains a lot more than tools. With the possible exception of those who climbed impulsively into the Snap-On truck and cried out “One of each, please!” everyone’s tools have stories, and even the use of tools can arouse particular memories. For example, all my life I have heard from mechanics that even ownership of an adjustable wrench (much less its use) labels me as a hacker. There is a reason for both sides of this question. Yes, if I put a pipe on my twelve-inch Crescent wrench and try to tighten critical fasteners with it, I run the risk of rounding-off their hexes when my improper choice of tool opens up under abusive pressure and slips. This does not happen, however, because I am a human being with both experience and judgment. I know that the range of large box- or combination wrenches that the Crescent’s range represents would add fifty pounds to my toolbox and subtract a thousand dollars from my bank account. I need those big sizes only a few times a year, and when reasonable pressure on the adjustable wrench doesn’t move the part, I buy what I need. Meanwhile, those three adjustable life-savers stay in my tool collection, and I am strongly resistant to hoots and catcalls from purists. There are three #2 Phillips screwdrivers. The little stubby one goes where the others won’t, but you can’t spin it. The big long one dates to when I was jetting Kawasaki triples and had to reach carb clamp screws clear across the engine. Picking up that tool reminds me of desperate days in the hot sun, working over the heat of a piston seizure. Yes, I admit to admiring the beauty of Snap-On combination wrenches. Their smooth shape is as right as that of a cat or a horse—a pleasure to have in hand or to look at. Purists, avert your eyes! Mixed in with these beauties are other makes. I remember a hot-headed friend once saying to a Sears salesman, “There are only two kinds of tools in this world—Snap-On and snap-off––and I only have time for the former.” While

extremes are fun, I enjoy the variety of hand wrenches I have. Can a screwdriver be beautiful? When I was a little boy, I loved the emerald, ruby, topaz, and crystal clear plastic handles on cheap screwdrivers. When I grew up, I learned that such handles could, under great pressure, slip on the shafts. Well, never mind, for thirty years I had a #3 Phillips with a clear yellow handle that did every job. Every time I reached for it, hoping that this particular Phillips head would not be the one to slip and become so mangled it would have to be drilled out, there was a tiny spark of that long-ago little boy pleasure in that tool. Finally I broke it doing something I knew at the time was improper. My punishment is its replacement—a practical NAPA special in a dreary military gray. It’s as unattractive in its way as the matte-black-handled Snap-Ons next to it. In the top center drawer, next to a grizzled pair of Vise-Grips, are my Robinson wire twisters. Fasteners on racing equipment and aircraft are secured against unscrewing by stainless, brass, or copper safety-wire, twisted into place according to prescribed rules. At one time I was doing a lot of this, and in company with others. I ordered left-hand twisters in the hope that in this way I’d be able to know who had done what. Recently, in disassembling a big Pratt and Whitney aircraft engine, I found that left-hand twisters get around; while most of the safety wire on this 28-cylinder engine was twisted to the right, I did find some left-twist as well. Pleasure in the right tool? For me, this is most true of snap-ring pliers. Years ago I tried to make do with a universal snap-ring “system.” Now I know that the word “universal,” as applied to the uses of a tool, often means “does not work.” After spending five minutes carefully inserting and tightening the correct set of jaws, one jaw would sproing across the room from the force applied in unseating the snap-ring I wanted to extract. After much searching, I might find the jaw and try again. After three tries, I had a useless, incomplete tool. Forget it—now I have

46

every kind of snap-ring plier, inside, outside, straight, 45, and right-angle. In the drawer I will find the answer. Even more satisfying in their way are the transmission pliers I bought, which are like normal broad-bill pliers in reverse, with the serrations on the outsides of the thin jaws. These are definitely strong enough to reliably extract and control the thickest, stiffest eyeless transmission snap-rings. Standing there at the bench the first time, with the first extracted ring still on the tool, I smiled. Now I was the master of the situation. No more sproing, no more hands-and-knees searching for rings that had shot off of makeshift tools, bounced off a wall, and disappeared. The Snap-On or Matco truck used to come to shops where I’ve worked. This was a weekly Easter for us all. There was pleasure in laying out quite a lot of money to have those gleaming objects that worked so well. Occasionally today I see a tool truck in the pits at a racing event and I have to climb up, behold perfection, and walk away $100 lighter with just a few items I’ve wanted for a long time. The beauty most often perceived is that which is closest to us—the stuff we use every day. Opposite my hydraulic press is a badly mushroomed piece of thick copper bar stock. This is my soft hammer. Why no official copper or brass hammer with proper handle? The impromptu fist-hammer had become a shop friend by the time I could countenance spending all the money they want for a “real” soft hammer. Besides, holding this tool as our long-ago Neanderthal ancestors held their fist-axes has its own historical charm. The piece of copper has its story, too. This is OFHC copper, for Oxygen-Free, High Conductivity. In general, every time you add an alloying element to a metal, you reduce its melting point and its electrical and heat conductivity. This bit was originally bought for making some kind of detector in a previous job. Now it straightens pressed-together crankshafts. Where’s the torque wrench? They’re here somewhere, and they are not of


the kind that have to be recalibrated in a lab every six months. They are springs— bending bars with pointers to show the amount of bend in foot-pounds or inch-ounces. The accuracy of these simple tools depends on two things—my ability to read the scale, and the Young’s modulus of steel. Neither of these requires periodic recalibration (okay, if the pointer doesn’t zero, I bend it until it does). If the wrench falls on the floor, I don’t have to call an ambulance for it. Summers Brothers speed parts were hot in the 1970s and there was a salesman whose problem customer kept breaking the special super-strength fasteners used with these classy axles. How could this happen? At the customer’s shop, the salesman asked him to show him how he torqued his bolts. The man obliged, pulling out one of the very fanciest of “clicker” torque wrenches and carefully pulling the fastener up to the recommended value, “click!” Then with equal care he gave the fastener another quarter-turn...

For all these reasons, there is a pleasure for me in the use of each of my tools. I don’t review their stories each time I use them, but on some level I’m aware of them all the same. They are high above the status of mere objects. I remember looking, as a boy or a young man, into the toolboxes of my elders and being amazed at the unimpressive nondescriptness of what was there. This is just how my tools will look if and when grandchildren arrive who care to look at them. “How can he work with all that old junk? Why doesn’t he get rid of half of this stuff and get some REAL tools?” Turbo Diesel Register Issue 33

Even my toolbox may attract their scorn —a 1963 Craftsman with unfashionably few drawers (made in the last century, for crying out loud). Maybe I’ll get myself a new one some time, but for now, the labor of cutting the rubber matting to fit the drawers, and of screwing down the socket-retaining clips is already complete. It works. A new toolbox would need to have all this work repeated. Also, I can carry this box out to the truck —barely. Some toolboxes I’ve helped to lift feel like they’ve been poured half-full of steel. That’s not practical for me. For a long time I carried a large and beautiful metric combination wrench that fit only one thing—the rear axle nuts on Kawasaki racing triples. It had no other application. I didn’t want to use the Crescent because axle nuts are one of those critical items. Doing the axle nut became a ceremonial affair. The wrench stayed in the box until the bikes for which it had been bought returned, twenty years later, as classic collector’s items.

47


Racing Diesels? An early example of Diesel engines in racing was the attempt of WW I German submarines, operating on the surface, to catch up to merchant shipping. The German “pocket” battleships of the late 1930s, built to the terms of the post-WW I settlement, fully exploited the compactness of Diesel engines and their fuel, as compared with the larger volume occupied by steam boilers and engines with the larger amount of oil fuel they required. This, in terms of design, was a form of racing – trying to extract speed from powerplant advantages. At Indianapolis in 1952, the Cummins Diesel racing car was fast enough to set a new lap record and qualify on the pole for the 500-mile race. A decision had been made in 1950 to allow Diesel-powered cars of 402 cubic inches displacement, supercharged or not, to compete with unsupercharged gasoline-powered engines at 270 cubic inches. After beginning with a roots-blown and overbored version of their six-cylinder truck engine, Cummins decided to try that newfangled device, the turbocharger. This new car, although heavy at 2500 pounds, showed that Diesel power was not just for slogging up hills, pulling heavy loads. With the engine turning 4000 rpm and the turbo assisting the intake process to the tune of 15-20 psi, the Cummins made about 400 hp. Because of its engine’s efficiency, the car was able to carry enough fuel to run the race without refueling stops. Unfortunately the turbo inlet was positioned such that in the 1952 500 race, it became plugged with track debris and the Cummins roadster was out at less than half distance. Diesel engines are not throttled—their cylinders always take in a full charge of air. This is valuable for efficiency because lean combustion takes advantage of better conversion of combustion heat into cylinder pressure at lower temperatures. This specific-heat-of-gases effect is also the basis for all the current lean-burn development in gasoline engines.

At full power, it is normal for Diesels to deliver only enough fuel to use about 80% of the cylinder air charge. This limit is observed because it has proven to burn almost all the fuel, and with very little smoke. The conversion of as little as 5% of the fuel supplied into carbon results in heavy black smoke. Power, however, continues to rise as peak fuel delivery is raised, offensive smoke is produced, and concerned citizens point and sic the authorities on you. Combustion pressure is the result of (a) the temperature of the burned gases and (b) the number of molecules resulting from combustion. The more zooming molecules there are to collide with the piston crowns, the greater the resulting power. As it happens, enriching the fuel-air mixture past the point at which every hydrogen and carbon atom from the fuel finds its oxygen partner does increase the number of molecules in the combustion gas, and this increases power. This effect has been used for years by Diesel mechanics to coax a bit of extra power from hard-worked units—but it does cause smoke. I saw this effect at work this past month at the Bonneville Salt Flats, when a truck called “Phoenix,” powered by a two-stroke Detroit V-16 engine of 1472 cubic inches, set an unlimited Diesel truck record at just over 250 mph. It left a trail of black smoke five miles long. A crewmember told me that they had “run into a wall of air” at 190 mph and had to improve their truck’s shape to go faster. The front of this monster vehicle (weight is 18,000 pounds) is mostly defined by a 1943 International K-7 cab, with permitted rounding and smoothing performed in a flawless manner. Behind this cab, the shape tapers gradually like an aircraft fuselage, ending in the four upturned turbo exhaust pipes and drag-chute housings. The V-16 engine is a Detroit 16V92, an industrial engine normally used in tugboats, to power offshore oil platforms,

48

and to drive US Navy river craft like those once used in Vietnam. It is more or less six feet long. The crank is made in two sections, bolted together at its center, so the engine is in effect a pair of V-8s coupled end-to-end. Each V-8 section has its own large roots blower in the Vee of the cylinders, and each blower is equipped with bypass vanes. These, when used with turbocharging, allow the roots blower to start the engine, but then open to allow flow from the turbo(s) to bypass the roots once turbo pressure exceeds roots pressure. Detroit two-stroke Diesels are uniflow engines, with four exhaust valves in each cylinder head, and rings of fresh-air ports at the bottom of each cylinder. Fresh air under pressure waits to enter the cylinders from an air gallery surrounding the bottoms of the cylinders. When the descending piston uncovers the charge air ports, this air rushes into the cylinder, pushing exhaust gas upward toward the open exhaust valves. Four large turbochargers (each with a compressor wheel just under six inches in diameter) are used on this truck, and a couple of their turbines disintegrated early in Bonneville Speed Week. Turbo problems are common on the salt flats. The apparent cause was debris passing through the engine. Two replacement turbos were flown in. Because of the increased charge air from the turbo system, fuel delivery had to be increased as well, but I could not get a crewmember to reveal what kind of injection changes were made to accomplish this. These engines use cam-driven unit injectors. What kind of tires can possibly survive the weight and speed of this giant? I wondered, but the answer is simple—tires from heavy jet aircraft. The drive wheels of this truck carry bias-ply Boeing 747 main-bogie tires, each rated to carry 40,000 pounds and to survive operation at 225 mph. On this truck they would be loafing with respect to load, and therefore likely able to tolerate some overspeed without damage. The fronts are Boeing


707 nosewheel tires, mounted on wheels from a Fokker F-28. The crew “don’t like to turn the engine over 3000 rpm,” but figure horsepower at about 4000, which would be 2.7 horsepower per cubic inch. At 250 mph, the pressure of air resulting from full conversion of kinetic energy into pressure is 167 pounds per square foot. If the front of the truck were just a flat plate and there was no tapering tail behind to reduce wake turbulence, drag would be approximately this 167 pound figure, multiplied by the frontal area. Since frontal area is about 55 square feet, this comes to about 9000 pounds of drag. At 250 mph this would require about 6000 hp. A highway semi does better than this—its drag is only about half that of a flat plate of equivalent area. This reduction comes about because, instead of being stopped cold against the front of the truck, much of the oncoming flow is diverted around the shape, retaining much of its original speed. Even if the actual drag of the Vast Diesel Racer is half that of a flat plate, we are still left with a need for 3000 hp, plus several hundred horsepower more to overcome rolling resistance and transmission losses. Maybe that 4000 hp is a realistic figure! I watched at the start as the big green truck made one of its runs down the salt. Some minutes before time to go, the engine was started by its twin electric starters and on-board complement of heavy batteries. It started immediately and was warmed up by throttle cycling, sounding like a distant railway locomotive idling. It continued this idling as its push truck (itself powered by a twin-turbo 16V92 engine) accelerated it off the line. A push truck is necessary for very fast Bonneville vehicles because their tall gearing makes starting from rest all but impossible. Incidentally, Turbo Diesel pickups are the standard tow, push, and support vehicle of choice at Bonneville. Altitude is 4000 feet and Speed Week weather is usually hot, resulting in a midday air density only 80% of that at sea level. This makes everyone appreciate turbocharging!
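The flat-plate arithmetic is easy to check. The little sketch below redoes it with standard sea-level air density and the 55 square feet quoted above, so it lands near, rather than exactly on, the 167 pounds per square foot and 6000 hp figures in the text.

```python
# Reproducing the flat-plate drag estimate in the text. Air density is the
# standard sea-level value, so the result comes out close to, not exactly on,
# the quoted numbers.

RHO_SLUG_FT3 = 0.002377     # sea-level air density, slug/ft^3
FRONTAL_AREA_FT2 = 55.0     # from the text
MPH_TO_FTPS = 5280.0 / 3600.0

def dynamic_pressure_psf(mph):
    v = mph * MPH_TO_FTPS
    return 0.5 * RHO_SLUG_FT3 * v * v          # pounds per square foot

def flat_plate_drag_lb(mph, area_ft2=FRONTAL_AREA_FT2, drag_factor=1.0):
    # drag_factor = 1.0 treats the truck as a flat plate; roughly 0.5 for a
    # decent truck shape, per the text.
    return dynamic_pressure_psf(mph) * area_ft2 * drag_factor

def drag_horsepower(mph, drag_lb):
    return drag_lb * mph * MPH_TO_FTPS / 550.0

d = flat_plate_drag_lb(250)
print(f"q = {dynamic_pressure_psf(250):.0f} psf, drag = {d:.0f} lb, "
      f"power = {drag_horsepower(250, d):.0f} hp")
```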

A few hundred feet out from the start, driver Carl Heap applied power. Instantly the push truck was made invisible by the thick stream of black smoke from the four six-inch chrome stacks. Like a wind-tunnel flow demonstration, the smoke flowed up over the push truck and down its back to the salt. The push truck pulled off the lane and the giant racer quickly disappeared down-course. What did not disappear was its long heavy smoke cloud, which continued to drift northward, rising from its own heat, for twenty minutes. Rich-mixture power! With two fast passes, made within the required time, their average was over 250 mph, raising their own record by almost 20 mph. It was fascinating to watch two of the crewmen loading drag chutes for this machine (just try to imagine stopping from 250 mph by using only the service brakes). First one man coats all internal surfaces of the parachute container with talcum powder. Then with another man feeding him the nylon harness, he folds it in a prescribed way, back and forth, and places it in the cavity, followed by the chute itself, suitably folded. Another 16V92-powered truck ran in a stock-bodied class on a 224-mph record (its own). This one was based on the cab shape of a 1997 Freightliner, with no tapered tail behind it. The engine was mounted behind the cab in the open —an impressive assembly of machinery, driving through an Allison automatic transmission. Wretched excess is fun! In the world of hot-rodding, too much is just enough. What next? I wonder what a determined crew of big spenders with 3/4” drive tools could get from an EMD locomotive engine? Turbo Diesel Register Issue 34

49


Choices The Diesel engine is the most efficient prime mover currently available. Spark ignition engines lose out because the detonation-proneness of their fuel limits compression ratio and so limits air cycle efficiency. Gas turbines coupled to high-speed alternators might be attractive but are limited in small sizes by leakage past the tips of their fast-moving blades. Fuel cells sound great but where will we get all the platinum that they will require? Where will we get and how will we store the hydrogen that really makes them shine? This being so, the question is, how can we use the Diesel engine to best serve human purposes? In the US, fuel is still cheap, so that’s not the issue. The issue is that terrible 1980s discovery that Diesel particulate emissions carry on their surfaces some pretty carcinogenic compounds called PAHs, or Polycyclic Aromatic Hydrocarbons. It is for this reason that so much emphasis is being placed upon reduction of particulates by such means as exhaust filtration. Accumulated particles on the filter are burned off periodically, using partly the nitrogen oxides normally present in the exhaust, and partly nitrogen supplied externally. Europe has different priorities because oil has to come a long way, from parts of the world European countries would prefer not to be dependent upon. European governments also take greenhouse warming of world climate more seriously than is usual in the US. Cutting the overall fuel burn is the key to reduced carbon dioxide emission, and the excellent economy of Diesel engines is the key to this. Choosing Diesel power is made easier by tax breaks that reduce the price of Diesel as compared with gasoline. Emissions are certainly an issue in Europe, but cutting oil imports and carbon dioxide emissions are equally sought. European strategy is to encourage the rapid development of highly efficient, reasonably clean Diesel engines for passenger cars. To this end, Euro-regulators are willing to relax auto Diesel emissions standards somewhat to make development of such engines more attractive.

Here in the US, smaller Diesels will have to meet the same standards as large truck engines, and the high cost of this in smaller engines is expected to keep US drivers in clean but less fuel-efficient gasoline-burners for the foreseeable future. On another subject entirely, back in the period between World Wars I and II, the US Navy was earnestly seeking improved Diesel submarine engines. Diesels were preferred because of their fuel economy and because heavy fuels are less prone to form explosive vapor than is gasoline. The submarine was a difficult problem because the necessary surface speed required a lot of power, while the engine spaces were small. When compact lightweight Diesel engines were built, engine frames, cylinder heads, and piping cracked, and crankshafts broke. After the notorious military procurement scandals of WW I, a new strictness made the sale of engines to the US government much less attractive. No company wanted to undertake the necessary engine development when only small numbers of engines would be bought. The special problem was the crankshaft. Big power meant lots of cylinders and fairly high rpm, but the resulting long and complicated shafts were subject to the build-up of torsional vibration in particular rpm bands. This constant twisting back and forth soon fatigued and broke the shafts. Making the parts heftier made the engine unacceptably heavy. Simply forbidding operation at certain speeds was about as effective as the Air Corps’ perpetual campaign to stop people from walking into spinning airplane propellers. Words don’t stop habits. The solution was a clever piece of work. Someone in the Navy Department realized that the needs of submarine engines—high power and high speed with light weight—were close to what would be required if Diesel railway locomotives were built. It’s not clear

50

to me whether the Navy helped the railroads develop their engines, or whether the railroads helped to amortize the cost of developing submarine engines. Either way, a deal was struck, and two types of durable engines—the GM-Winton and the Fairbanks-Morse—resulted. These were the basis of the US Navy’s highly successful, but largely unsung, sub operations in the Pacific in WW II. The crankshaft torsional vibration problem was solved in an equally neat way. Instead of coupling the Diesel engines directly to the sub’s propellers for surface operation, or to motor-generators for battery charging, a railway-like Diesel-electric drive was chosen. In this system, the Diesels drive only the generators, and are never coupled to the props, which are driven by electric motors. This allowed the Diesels to run at a governed, safe rpm while propeller rpm was varied electrically. In locomotives, this system eliminated the problem of designing a clutch strong enough to start a train. Automotive Diesel engines are also subject to crankshaft torsional oscillations, but their cranks are relatively short. This shortness can place the crank’s oscillation frequency far enough above its firing frequency to avoid trouble. Automotive cranks can also be made robust enough to live, or they are given torsional dampers to absorb the oscillation energy fast enough to prevent its building up to dangerous levels. Today it appears that automotive Diesel engine design has converged upon an open combustion chamber four-stroke design with four valves and a centrally mounted injector. However, the history of the Diesel engine reveals a rich variety of alternatives. The Fairbanks-Morse engine referred to above is a prime example. A two-stroke, it had two opposed pistons in each cylinder, connected to two crankshafts. In each cylinder one piston controlled exhaust ports through the cylinder wall, while the other controlled the inlets. Exhaust ports were made to open before the inlets by phasing the exhaust crank to


run about 15 degrees ahead of the inlet crank. Many such opposed-piston Diesel engines have powered trucks, buses, railcars, and even aircraft. In some, a single crankshaft operates both sets of pistons by means of massive rocking levers. A more usual arrangement was to couple two crankshafts by bevel gears and a shaft. The classic engine of this type was the Junkers Jumo 205 Diesel, which powered a number of German aircraft just before and during WW II. The aim of the opposed-piston Diesel was to achieve uniflow two-stroke scavenging. As exhaust left the cylinder via cylinder wall ports exposed when the exhaust piston was near its BDC, fresh air would enter through inlet ports uncovered by the inlet piston at the opposite end of the cylinder. This simplified end-to-end flow minimized the mixing of exhaust gas and fresh air. Another advantage of the opposed-piston engine is that it has no cylinder heads through which to lose combustion heat to coolant. Commercial software packages now exist that model in detail the formation of sprays by Diesel injectors. The penetration of the fast-moving droplets into the dense, compressed air near TDC can be studied, and details of how evaporating vapor, trailing behind them, mixes with the air can be considered. Other computer models deal with the actual ignition and combustion of this vapor. Ideally fuel vapor would mix intimately with air before burning, but much of the combustion is at first very incomplete because oxygen takes time to mix with fuel vapor. Heat from nearby combustion knocks hydrogen atoms off of fuel molecules to leave connected rings of carbon atoms. If these carbon structures clump together before they find oxygen with which to complete their combustion, they may become exhaust particulates. Other forms of incompletely burned fuel form the carcinogens that adhere to particulate surfaces. This clumping and sticking is a natural attribute of carbon, which is why it can be used in gas masks and other purifying

apparatus—carbon attracts and holds impurities. An early application of this natural stickiness of carbon is in the “smoothing” of expensive whiskeys by storing them in barrels whose insides have been charred to carbon. Combustion researchers would love to find a “silver bullet” that would prevent formation of carbon particulates, but for the moment the available path to reduced particulate emissions is exhaust filtration. Cross your fingers. Back in the 1920s, when it was clear that detonation—combustion knock —was a barrier to further development of spark ignition engines, systematic research at GM-Delco labs revealed just such a silver bullet. In this case it was tetraethyl lead, a highly poisonous organo-metallic compound which, added to fuel in gram-per-gallon quantities, could reverse pre-combustion chemical reactions that led to detonation. Thus far no such miraculous fix has been found for Diesel particulate formation. One way or another, the particulate problem will be solved, because the Diesel engine is too useful a machine to do without. Turbo Diesel Register Issue 35

51


Brakes Brakes used to be so poor, with so little ability to absorb energy, that even car drivers were advised to “use a lower gear” when descending steep hills. The reason for this was that the brakes, unassisted, did not have the capacity to continuously dissipate the required energy. If you relied on the brakes by themselves, the temperature of drums and linings would rise high enough to deform the drums, causing them to expand and cone away from the shoes. The lining, optimized for lower-temperature use, would lose part of its friction coefficient and brake torque would fall. The pedal would feel spongy because of the bending that went on as shoes were forced against now coned drums, and braking effect would diminish. Hence, it was necessary to supplement the energy-conversion and heat dissipation ability of the service brakes by shifting to a lower gear and using engine compression and internal friction as well. Disk brakes were supposed to fix all that. Back when Dunlop disk brakes were for the first time fitted to the Jaguar factory racecars at the Le Mans 24-hour race in France, it was like a miracle. The Jaguars would stay on full throttle long after other makes were on the brakes, then brake violently at what seemed the last possible instant and dart around the corners. Mercedes had resorted to equipping their endurance cars with an “air brake,” a flap normally lying flat against the rear bodywork, which was erected like a sail by hydraulic cylinders as the driver applied the brakes. Jaguar’s Dunlop disk brakes were compact and simple, a contrast to the last of the drum brakes, which were huge finned aluminum affairs that filled the entire space within the wheels and had several shoes. The advantages of disk brakes are real. As a disk expands, it still lies flat and so does not curve away from the friction material being pressed against it by the caliper. There is no bell-mouthing as with a drum, therefore no spongy pedal.

However, disk brakes have their own special problems, and these have come to light as engineers have become more skilled in giving vehicles just the amount of brake capacity they need, and no more. One of the worst problems has been created unintentionally by government fuel-economy regulations. A light car or truck generally gets better mileage than a heavier one, and all parts of the vehicle are fair game for weight reduction—including the brake disks. A panic stop from the vehicle’s maximum speed takes only a few seconds, during which there is almost no time for brake disks to transfer any heat to the air around them. Thus, essentially the entire kinetic energy of the vehicle and its load is put into the disks as heat. The heavier or faster the vehicle, the greater the kinetic energy and the higher the final, end-of-braking disk temperature will be. The same is true of disk mass—if the total weight of brake disks on the vehicle is reduced, brake temperature must increase. This is what has happened on production cars and light trucks. The weight taken out of disks in the interest of improving fuel economy becomes a liability when you see brake lights right ahead and have to use all the brake you have to avoid being part of a chain-reaction rear-ender. As you are braking, you can feel the braking force fading as disk temperature drives the pads to a temperature at which their friction coefficient falls rapidly. You hope you’ll get stopped in time. (Hope is a wonderful emotion, but not much use against physics.) After a few of these experiences, people want something stronger. In vehicles of the 1980s it wasn’t too bad—you could still get back the missing brake torque by installing aftermarket pads such as metal-ceramic or sintered metal. But whatever the consumer can do, the factories can do better. Knowing that smallish disks and better pads equal better brakes, they reduced brake disk mass again, restoring marginal braking power by use of the latest, most aggressive pads.
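A short calculation shows how directly disk mass sets the temperature. The vehicle weight, speed, and disk masses below are examples of my own; 500 J/(kg·K) is a typical handbook specific heat for cast iron.

```python
# Panic-stop arithmetic: nearly all of the kinetic energy ends up in the disks.
# Vehicle mass, speed, and disk masses are illustrative example numbers.

C_IRON = 500.0   # J/(kg*K), approximate specific heat of cast iron

def disk_temp_rise_c(vehicle_kg, speed_mph, total_disk_kg):
    v = speed_mph * 0.44704                  # convert mph to m/s
    kinetic_j = 0.5 * vehicle_kg * v * v     # energy to be absorbed as heat
    return kinetic_j / (total_disk_kg * C_IRON)

# A loaded 3000 kg truck stopping from 70 mph, with 32 kg of disks versus
# 24 kg of "lightweighted" disks:
print(f"{disk_temp_rise_c(3000, 70, 32):.0f} C rise with 32 kg of disks")
print(f"{disk_temp_rise_c(3000, 70, 24):.0f} C rise with 24 kg of disks")
```

Take a quarter of the metal out of the disks and the same stop drives them that much hotter, which is exactly the trend described above.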

52

This is why there is a brisk aftermarket business in improved brakes. Most production brakes use disks of marginal size. This reduces brake torque by simple leverage—the bigger the disk your caliper grabs, the larger the lever arm on which caliper grip acts—and vice versa. Owners of sporty autos like to increase wheel size, then fit low profile, race-style tires that give the same rolling diameter as original. This makes a lot of extra room inside the bigger wheel. Into this room, the aftermarket puts bigger diameter disks, gripped by calipers that offer something more than marginal function. I am talking here about systems that cost $1,500-4,000. OEM calipers tend to be single piston designs that accommodate wear by sliding on a pair of rails. The aftermarket calipers have pistons on both sides of the disk, so the caliper body can be solidly mounted. Multiple piston calipers have elongated pads that “wrap” a larger sector of the disk. Their increased area translates to lower pad temperature and so less fade in hard braking. In some cases the calipers are aluminum, which compensates for some of the increased disk weight. When vehicles equipped in this way have to stop right now, the pedal stays up and braking power does not diminish—all the way to a dead stop. When a vehicle has a marginal amount of disk mass, causing its brakes to operate at high temperature, several things can happen—all related to heat. One is “hard-spotting” of disks, resulting in a rhythmic thump at the pedal (this could also occur with drums). When a disk is repeatedly driven to very high temperature, the structure of the iron crystals in it can change to another form that occupies a slightly greater volume. You will then see dark, slightly raised regions on your disks, possibly with streaking. You can have the disks turned to expose a new, flat surface, but the process doesn’t stop. It comes back to make more hard spots and more thumping. The cure for this problem is either to switch to disks of a superior material, or to increase disk mass to bring peak operating temperature down.
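The leverage point above is easy to put in numbers: brake torque is pad friction times clamp force times the radius at which the pads grip, counted once for each face of the disk. The clamp force, pad friction, and radii in the sketch are example values only, not any particular vehicle's.

```python
# The leverage argument in numbers: a bigger disk gives the same caliper a
# longer lever arm. All inputs are illustrative.

def brake_torque_nm(clamp_force_n, pad_mu, effective_radius_m):
    # Two pad faces rub the disk, one on each side.
    return 2.0 * pad_mu * clamp_force_n * effective_radius_m

stock = brake_torque_nm(15000, 0.38, 0.125)    # roughly a 300 mm disk
bigger = brake_torque_nm(15000, 0.38, 0.155)   # roughly a 360 mm disk, same caliper force
print(f"stock {stock:.0f} N*m, big-disk kit {bigger:.0f} N*m "
      f"(+{100*(bigger/stock-1):.0f}%)")
```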


Loss-of-pedal Another problem is disk coning. If you apply the edge of a 12” metal ruler to the surface of a new disk, it will touch everywhere—the disk is flat. But if the disk has been used hard, as in making repeated mountain descents, the disk may be slightly coned, as the straightedge will show. The pads can wear at an angle to accommodate some disk coning, but beyond a certain amount—and especially if you fit new pads—it will cause loss of pedal height. As you apply the brakes, the pads will touch the disk, and then have to be pressed enough further to make them flatten against the slightly angled disk surface. This causes the caliper piston to tilt in its bore as well—something it can, within limits, do. This loss-of-pedal can be a problem if really hard braking is needed, but little pedal height remains. Why do disks cone? Heat is generated in some proportion to the speed of the disk past the friction pad. As the OD of the disk has a larger circumference than does a circle drawn around the disk at the inner edge of the pad tracks, the OD part of the disk moves past the pads faster than does the ID part. Therefore there is more heat generated the further out you go on the disk. In the extreme case—in which the disks become really hot—this greater temperature at the disk OD causes that region of the disk to try to expand more than the inner region. At lower temperatures, the disk is strong enough to resist this, but when very hot, even iron loses strength. The expanded outer region of the disk pulls so hard on the cooler, inner region of the disk that it causes it to yield. In other words, the expansion of the hotter OD region stretches the cooler ID region. When the disk cools and everything tries to return to its original size, the stretched inner region is a tiny bit too big. This causes the disk to assume a slight cone shape. Because the disk material is now at this slight angle between the two pads, it takes more pedal stroke to get the pads flat against the disk when you brake.

The trend in brake design is to make the pad track on the disk radially narrower, thereby reducing the difference in disk velocity (and heating rate) between outer and inner edges of the pad track. This loss of pad width is made up by making the pad longer circumferentially. You can see this best in multi-piston calipers made with either four or six pistons. Another advantage of this narrow-pad-track disk design is that it places more of the pads’ action at a larger radius from the center of the disk, thereby increasing the leverage and therefore the brake torque. Another cause of loss of pedal is friction pad deformation. Most friction pads are molded onto a metal backing, but the two materials have different expansion coefficients. The hot, expanding pad material therefore arches up the cooler metal backing. Once your foot is off the brake, this arching of the pad pushes the caliper pistons back into their bores. When you next go for the brakes, you get a big adrenaline rush as your foot nearly goes to the floor. A quick second stab at the pedal brings the brakes back, but does little for your confidence in your stopping power. Another cause of low pedal is loose wheel bearings or flexibility in the structure supporting the wheel bearings. As you drive through a corner at some speed, the side-load of cornering causes the wheels to cock to the side a bit, carrying their disks with them. This cocking pushes the caliper pistons back into their bores slightly in a process called “pad knock-off”. The next time you go for the brakes, the pedal has to move an extra distance to pump the pads back to the disk, so again, you have momentary low pedal.

Brake Fluid Much is made of the problems of brake fluid boiling. Everyone’s vehicle manual provides a schedule for brake fluid replacement but hardly anyone

53

ever pays any attention to this. Little by little, the high boiling point claimed by the fluid manufacturer becomes lower and lower because the fluid absorbs moisture from the air. In very hard use, heat can penetrate the brake friction pad and heat the caliper piston enough that, when you take your foot off the pedal, the overheated fluid behind the piston boils with enough pressure to push the fluid back up the line, through the master cylinder relief port, and into the master cylinder reservoir. Now you have a really serious problem when you go for the brakes next time. Enough volume of gas may have been generated in the hot caliper that a full stroke of the pedal—right to the floor—fails to bring the pads back to the disk. Hey, this expensive bottle of silicone brake fluid has a much higher boiling point than that nasty old DOT3 or DOT4 fluid. That’s gotta be better. I’ll switch… Try it—it may work for you. My experience with silicone DOT5 wasn’t good, however, because DOT5 is a lousy lubricant—in some cases bad enough to cause the master cylinder piston to fail to return all the way. Therefore I just follow the recommendation in the vehicle Owner’s Manual, rather than trying to second-guess the manufacturer.

Brake Pads As disk brake performance has risen with such things as sintered metal pads or carbon pads, used in multi-piston calipers, stronger measures have been taken to keep heat from reaching the brake fluid behind the caliper piston(s). Sometimes insulating material is placed between the friction pad and the caliper piston. The (open) end of the caliper piston, facing the pad, is sometimes slotted to resemble a castle nut, thereby reducing the contact area between pad and piston, and also allowing air to flow through the piston cavity. A variety of cooling baffles is sometimes used to direct air through the caliper.


Ventilated disks —those with radial slots through which cooling air can move —neatly double the surface area from which brake heat can be transferred to the air. In racing or other heavy-duty applications, air ducts may be used to bring cooling air to calipers and disks. Openings in the wheels can assist in moving air across the hot parts. Friction material is another subject. Originally, friction pads were organic, made of a reinforcing fiber (today this is Kevlar, but in older pads it was asbestos) impregnated with organic resin. Such pads give good friction without heavy pedal pressure, but have limited temperature tolerance. Fade begins when the resin content volatilizes, forming low-friction layers of glaze or even gas between the pad and disk. Sometimes brass wire can be seen, molded into organic pads. Its purpose is to conduct heat away from the hot friction surface. Disk wear is low with organic pads, but in wet weather their action can be erratic. Metal-ceramic and sintered metal pads retain their friction coefficients to higher temperatures and they work well even in wet conditions. Because of their hardness they wear disks much more rapidly.

The brake material of the future—used widely on high performance aircraft, racing cars, and motorcycles—is carbon-carbon. This material is a matrix of amorphous carbon, reinforced by super-strength carbon fibers, all baked together into a solid. Both the disks and pads are made of the same material, and its friction properties can be tailored by changing the orientation of the fibers during manufacture. Aircraft brakes are made just like a clutch stack, with half of the disks rotating with the wheel, and the other half held stationary. The stack is compressed by a ring of hydraulic cylinders. The advantage of carbon brakes is that they are very light and can continue to operate normally at temperatures that would melt iron brake rotors. This is very important in aircraft, whose landing gear must be light yet whose brakes must absorb huge energies from heavy weight and high speed. Carbon is slowly moving into the commercial field as a component of friction pads. True carbon-carbon remains too expensive for such use, the problem being that this material is made in high-temperature ovens by a process that can take as long as six months. Carbon itself is a good friction material because it is sticky (that’s why it’s used in making whiskey and in water filters: impurities stick to it). Bear in mind that carbon exists in several forms. Diamonds are forever, graphite is a dry lubricant, and plain old carbon makes a good friction material. Brakes being as important as they are, it’s a shame they are so often poorly maintained. But it’s understandable—brakes lack the glamour and excitement of engines and turbochargers—and they’re covered with road dirt, hidden behind the wheels, invisible. The problem is that when brakes fail to work as we require, the excitement is immediate and unbearable. Turbo Diesel Register Issue 36

54



55


Burning It All Up The turbocharger whistles thinly through its teeth as you open up the throttle a little to climb the long hill that lies just ahead, hardly seeming to notice the big stock trailer hooked up behind. You take another sip of coffee. At times like this, life is good and it’s hard to think of Diesel power as anything but a mature essential of modern life. Meanwhile, back in Ann Arbor, Michigan, new hoops are being dreamed up for Diesel combustion technology to leap through. The EPA is at work on stiffer emissions standards. When I visited MIT’s Sloan Automotive Lab in the 1960s and early ‘70s, it had almost become a museum, so little research was conducted there. In the foyer was a Liberty V-12 aircraft engine, a legacy of World War One. Somewhere, you could find Professor Taylor’s famed “rapid compression machine,” with which so much valuable flame chemistry research was once performed. In the test cells stood research engines, both Diesel and spark-ignition, mostly cold and unused. When I asked the director about this, I was told that engine technology was no longer a university research subject. It had all become proprietary—corporate property. All that changed when engine emissions became big business. Today the Sloan Lab, and many others like it, is back in full operation. Engineering students know there are well-paid jobs in industry for people who have spent their four years studying combustion. Recently, X-ray imaging has been used to study Diesel fuel sprays, and new insights have been gained. One of the most interesting is that fuel can cavitate as it emerges from the spray nozzle—that is, the fuel is somehow pulled apart to form interior cavities, filled only with fuel vapor. Cavitation is a familiar and destructive phenomenon for those who design marine propellers. On the low-pressure side of such propellers, pressure can drop low enough that cavitation bubbles are produced, which stream across

the blade surface. When such bubbles collapse, fluid rushes from all directions toward a single point. When they collide there, extremely high pressures and temperatures are produced. As a result, cavitation bubbles that collapse against the prop blade surfaces can actually remove metal, causing erosion that looks like sand-blasting, or like the effects of heavy detonation on aluminum pistons. Why would fuel oil cavitate during Diesel injection? Isn’t the pressure very high during injection—like 1300 atmospheres, or 20,000-psi? Yes, and that may be just the point. Although we learn in school that liquids like water and Diesel fuel are incompressible, at very high pressures this is not true at all. In fact the venerable Dowty, Ltd., makers of aircraft landing gear in the UK, substitutes the compressibility of oil for heavy steel springs in their suspensions. Quite likely, therefore, fuel oil cavitates during Diesel spray formation because the compressed fuel is expanding so fast as it leaves the nozzle. It has been remarked that this fuel cavitation may be useful in causing the fuel spray to break up. Previously, it has been assumed that fuel sprays develop instabilities as they rush through the dense, hot air near top dead center on a Diesel engine’s compression stroke. These instabilities cause the fuel stream to break up into twisted sausage-like droplets. These, still rich in kinetic energy, having initially emerged from the fuel nozzle above the local speed of sound, break apart by the familiar mechanism of being flattened by the pressure of their encounter with the air, and then break into circlets of sub-droplets. Each fast-moving droplet leaves a fuel-rich vapor behind it. In the intentionally turbulent air, these millions of fuel-rich tails are whipped and folded and combined with more air. At some point in the process, air and fuel somewhere become heated enough to ignite, and the charge so far injected begins to burn. It does so fastest in those zones where the fuel-air mixture

56

happens to be near to chemically correct, because such a mixture burns fastest. Richer or leaner zones are hot, but delayed in catching fire by their slower chemistry. In these zones, heat breaks fuel and air molecules apart. Hydrogen atoms are stripped off of carbon chains and, because of carbon-to-carbon attraction, these fragments clump together. When the flame does finally sweep through these slower-burning regions, the carbon clumps may already have grown large enough that oxygen from surrounding air cannot reach most of them. Instead of burning, these carbon lumps become soot. As injection continues, the processes of droplet breakup, evaporation, mixture formation, and ignition also continue—and in the zones almost too rich to burn, carbon clumps continue to form. The amount of fuel energy wasted in soot formation is insignificantly small. What is large is human concern over the effects of the soot. Diesel engine makers don’t like soot because it is something obvious that critics can point at. The EPA doesn’t like soot because carbon is attractive—things stick to it. The stickiness of carbon is a basic technology in the compounding of tire treads—by mixing carbon into rubber, the attraction of carbon particles helps tie long rubber molecules to each other in useful ways. Similarly, carbon is used in smoothing whiskey—by storing it in wood barrels whose insides have been charred. Bad-tasting “impurities” are adsorbed onto the carbon. In Diesel combustion, carbon attracts other bad company. These are the polycyclic aromatic hydrocarbons (PAHs) that we read about—multiple carbon rings with a variety of geometries and attached side groups. Some of these mimic molecules necessary for metabolism, which is part of why they cause such concern. Until it was discovered that soot particles carry these carcinogenic passengers, the Diesel engine was looked upon as at worst ill-smelling and occasionally smoky, but essentially blameless. Since that discovery, elimination of soot has become a major aspect of Diesel engine development.


Two basic approaches exist to this problem. One is to improve the combustion process so that soot is no longer produced. The other is to accept that Diesel combustion is inherently sooty, and concentrate on trapping the soot in a filter of some kind, and then eliminating it. The first approach has produced classic high-pressure injection into a swirling air charge. It is also driving such developments as multi-burst injection, in which fuel is no longer injected steadily until the whole quantity enters the cylinder. Instead, a small pilot injection is made, and there is a pause to allow it to ignite. Then the main fuel charge is sprayed as a series of bursts, with time intervals between to allow better fuel-air mixing. One technology discussed in this connection is piezoelectric drivers. Solids are held together by electrical forces, so it makes sense that their dimensions are also determined by these forces. Certain crystals have the curious property of changing their dimensions when a voltage is applied across particular planes. Such piezoelectric devices have long been used as transducers in phono cartridges, or as send-receive elements in SONAR. Because they can act quickly, they suggest a new generation of multi-burst Diesel injector. Another possibility is suggested by the discovery of cavitation in Diesel fuel sprays. Or, if a gas were dissolved in the fuel at high pressure, its expansion as the fuel emerged from the nozzle might break up fuel sprays even more effectively than the newly-understood cavitation effect. In Dr. Diesel’s original engine, and for some years thereafter, fuel was blown into the cylinder and finely broken up by an associated very high pressure air blast. In the automotive field, this has its parallel in the Orbital Engine Company’s gasoline injector. This injects a measured fuel quantity into a pre-chamber, then by air blast forces that through an orifice at supersonic speed into the main chamber. An extremely small 10-micron fuel particle size is achieved in this process.

Another concept is suggested by the fuel airbleed that has been standard for many years on automotive carburetors. The first purpose of feeding bleed air into the fuel flowing in a carb's main circuit is to correct the mixture. This does not concern us here. The second is to create within the fuel a large amount of free surface (the interior of the resulting air bubbles) which assists the break-up of the fuel stream as it emerges into the low pressure intake air flowing to the engine. All liquids have surface tension, which results from weak molecule-to-molecule forces within the liquid. Because of this, energy is required to create fresh liquid surfaces, and so droplets resist being broken into smaller sizes. If those surfaces already exist (fuel airbleed) or if there is some source of potential energy within the fuel that can create such surface area (sudden expansion/cavitation of compressed fuel, or expansion of dissolved gas into bubbles), these would provide useful tools with which to reduce fuel spray particle size.
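
To put a number on that surface-tension argument, here is a small Python sketch. The fuel quantity and the surface-tension figure are illustrative assumptions of mine, not measured values; the point is only that the energy needed scales with the new surface created.

```python
# A rough sketch of the surface-energy argument above.
# The fuel quantity and surface tension are illustrative assumptions.
import math

SURFACE_TENSION = 0.025   # N/m, a round figure for warm diesel fuel (assumed)
FUEL_VOLUME_MM3 = 60.0    # fuel delivered in one injection event, mm^3 (assumed)

def surface_energy_mj(droplet_diameter_um):
    """Surface energy (millijoules) if the whole charge becomes droplets of one size."""
    d = droplet_diameter_um * 1e-6                  # meters
    volume = FUEL_VOLUME_MM3 * 1e-9                 # cubic meters
    n_droplets = volume / (math.pi * d**3 / 6.0)    # number of droplets of this diameter
    total_area = n_droplets * math.pi * d**2        # total liquid surface, m^2 (= 6V/d)
    return total_area * SURFACE_TENSION * 1000.0

for dia in (100.0, 20.0, 5.0):
    print(f"{dia:5.0f} micron droplets -> {surface_energy_mj(dia):5.2f} mJ of new surface energy")
# Halving droplet size doubles the surface to be created, which is why surface that already
# exists (air bleed, cavitation, dissolved gas coming out of solution) makes fine sprays cheaper.
```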

Some years ago, Honda showed a two-stroke motorcycle engine that ran on CAI, or Controlled Auto Ignition. In this process, a premixed charge, somewhat diluted with hot exhaust gas, is compressed until it spontaneously ignites in many places. Although you would expect such combustion to be rough and rapid, it can be controlled to produce a rate of pressure rise much like that of current engines. Of particular interest is the ability of this system to operate on very lean mixtures. This gets the attention of Diesel engineers because it suggests a way to avoid a major current problem: a Diesel combustion chamber contains a continuous range of mixture strength from 100% fuel (injection spray droplets) to 100% air in regions not yet reached by fuel. Somewhere in this range lies ideal, complete combustion, but elsewhere lies soot formation. If the mixture strength could be controlled in advance, and the charge then ignited in many places by CAI, the volume percentage of charge in which soot formation could take place could be greatly reduced.

It would also be very agreeable if a chemical "fix" for soot formation were possible. In the case of gasoline combustion, the organo-metallic additive tetraethyl lead (no longer legal for use in pump fuels) acts as a rate catalyst, countering the conversion of heat-produced molecular fragments into a form that leads to knocking combustion, or detonation. One could imagine a chemical process that would discourage the clumping of free carbon into soot. So far, no such magic bullet has been found.

Operating an engine in this mode calls for charge volume control and variable exhaust gas recirculation to hold the charge at the temperature level needed for Controlled Auto Ignition. To avoid pumping loss, this would require another technology, now appearing—that of variable valve timing. Such engines would also have to be capable of multi-mode combustion, for the CAI phenomenon cannot cover the necessary load range.

It is now being suggested that the methodologies of spark ignition (gasoline engines) and compression ignition (Diesel) will converge as time passes. To meet fuel consumption standards, spark ignition engines must seek to operate either at higher compression ratios or with leaner mixtures. To cut particulate emissions, Diesels must re-engineer their combustion process.

Much is said in the press of the coming fuel cell vehicle power revolution, but even with a mature technology in hand, a long time will be required in which to construct the production facilities and fuels infrastructure required. In the meantime, piston internal combustion engines will soldier on in ever more refined form, burning less fuel and more cleanly, to provide the power we require. There is no shortage of ideas.

Turbo Diesel Register Issue 37


Through the Cycle As the piston of a Diesel engine rises on compression, there is nothing above it but air, plus whatever fraction of inert exhaust has either been left in the cylinder from the previous cycle, or has been intentionally admitted to the cylinder as exhaust gas recirculation (EGR) along with fresh air as an emissions-reduction measure. Either way, the gas above the rising piston is being rapidly heated by compression, bringing its temperature up to many hundreds of degrees. Fuel is sprayed into this hot air from an injector nozzle. At first, evaporation of the fuel cools the air immediately around it, but turbulent air motion brings the resulting fuel vapor into renewed contact with heated air. This little interlude of evaporation, cooling, and re-heating is lumped together under the term "ignition delay." Chemical reactions begin, increasing in intensity until they merge into the complex mess we call combustion. It looks simple on the white page in the chemistry book—the carbons and hydrogens of the hydrocarbon fuel combine neatly with oxygen atoms from the air—as if they were all polite dance partners at a cotillion—but in fact thousands of different reaction steps are involved. The process is violent and chaotic, with molecular fragments speeding in all directions, colliding, combining, flying apart, and re-combining again. One thing is sure: heat is released, increasing the velocities of all the molecules that emerge from combustion. Previously, pressure in the cylinder was high from piston compression, but now, because the average velocities of all the molecules beating against the walls of this container have risen, they beat much harder. Meanwhile the piston has reached top dead center (TDC) and the rod has begun to swing past the vertical position. Half the stroke, multiplied by the sine of the angle between the crank arm and the connecting rod (zero at TDC), now gives the effective lever arm of the crankshaft. At TDC there is zero leverage, but as the rod swings over, leverage grows, reaching a maximum when rod and crank arm are at right angles—somewhere near 76 degrees

after TDC. Meanwhile, combustion pressure continues to rise as the injector sprays more fuel and that fuel is broken up by thermal collisions. Its pieces find oxygen mates, releasing heat, adding and adding to the cylinder pressure. But now the piston begins to fall, so the volume above it begins to increase. The same amount of hot gas, occupying an expanding container like this, means the pressure must fall. As peak combustion pressure begins to fade, effective crank leverage increases. The net result is that torque on the crank continues to rise until something like 30 degrees after top center. Then torque begins to fade to lower values. It may seem strange, but 80% of a cylinder’s power stroke is generated in the first 70-80 degrees of rotation after TDC. As the piston descends on the power stroke, under ever-dwindling pressure, the gas above it falls in temperature as well as pressure. Because a diesel engine needs a high compression ratio to heat air hot enough to ignite the injected fuel, it also has a high expansion ratio. The more the combustion gas is expanded, the lower its temperature falls. When exhaust valves open, somewhere near bottom center, gas temperature in the cylinder is much lower than it would have been in a comparable gasoline spark-ignition engine. This temperature difference arises from the difference in compression (and therefore expansion) ratio. Compression has to be on the low side in a gasoline engine (between 8 and 11 to one) because otherwise the touchy, heat-sensitive fuel will detonate at some time late in combustion, generating sonic pressure waves that make trouble. In a diesel engine, despite its higher compression ratio (between 15 and 23 to one), there can be no detonation because fuel burns almost immediately upon being injected into the cylinder —it has no time in which to be altered by heat into a sensitive explosive, as in gasoline engines. Detonation takes time to develop, but diesel combustion consumes the fuel before that time can elapse.
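
Two of the claims above are easy to check numerically. The short Python sketch below uses assumed, round numbers (not any particular engine) to show where crank leverage peaks and how the diesel's larger expansion ratio leaves its exhaust gas cooler at exhaust-valve opening.

```python
# Numerical sketch of two claims in this column, using assumed round numbers
# (typical of a truck diesel, not any particular engine).
import math

STROKE = 0.12        # meters (assumed)
ROD_LENGTH = 0.215   # meters, about 3.6 crank radii (assumed)
CRANK_R = STROKE / 2.0

def lever_arm(theta_deg):
    """Effective lever arm seen by the gas force at a crank angle after TDC."""
    theta = math.radians(theta_deg)
    phi = math.asin(CRANK_R * math.sin(theta) / ROD_LENGTH)   # rod angle from cylinder axis
    # Gas force acts along the rod, so torque = F * r * sin(theta + phi) / cos(phi).
    return CRANK_R * math.sin(theta + phi) / math.cos(phi)

best_angle = max(range(181), key=lever_arm)
print(f"Leverage is zero at TDC and peaks near {best_angle} degrees after TDC")

# Why diesel exhaust runs cooler: ideal-gas expansion from an assumed peak temperature.
T_PEAK_K = 2400.0    # kelvin at the start of expansion (assumed)
GAMMA = 1.30         # effective ratio of specific heats for hot combustion gas (assumed)
for expansion_ratio in (9.0, 17.0):       # gasoline-like versus diesel-like
    t_end = T_PEAK_K / expansion_ratio ** (GAMMA - 1.0)
    print(f"Expansion ratio {expansion_ratio:4.1f}:1 -> roughly {t_end:.0f} K at exhaust-valve opening")
```

Under these assumptions the leverage peak lands in the mid-70-degree range, and the higher expansion ratio drops exhaust temperature by a couple of hundred kelvin, which is the cab-heat and radiator-size story told below.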


When the exhaust valve opens, spent gas, still at several hundred degrees, accelerates through the port and rushes out. This situation is an ideal one for rapid heat transfer. First, the gas is hot. Second, its velocity is high. High velocity causes violent turbulence, which in turn keeps fresh hot gas swirling against the metal interior of the exhaust port – there is no relative calm in which an insulating layer of stagnant gas can build up on the port walls to insulate them. The fastmoving hot gas scours the port walls, constantly driving heat into them. The higher the temperature and velocity, the faster the heat flows from the gas into the port walls. In a gasoline engine, about half of the total heat rejected to the cooling system is picked up here in the exhaust port, because velocity and temperature are so very high. It is for this reason that exhaust ports are made as short as practicable—to limit the heat picked up by the engine, which must then be collected by the cooling system in the form of heated water, and then piped away to a liquid-to-air heat exchanger (radiator) to be disposed of. But in a diesel, because of its high compression/expansion ratio, exhaust gas is much cooler, and therefore much less heat is rejected to coolant from the exhaust port region. This is why gasoline-powered trucks have no trouble mustering up heat from their busy cooling systems, with which to heat the cab in winter. In the case of diesel engines, it can be more complicated because a lesser release of heat means slower engine warm-up and less heat available for the side-jobs like keeping us humans warm. Diesel engines also need coolant radiators much smaller for their horsepower than do gasoline engines. The other side of this coin is that if less heat is being rejected to the cooling system, that heat must be going somewhere else. It is—much of it is going into driving the pistons, doing useful work in twisting the transmission input shaft. This is why we bought


diesels in the first place. With some of the money we saved on fuel, we can buy woolen shirts that will keep us warm while the lesser flow of waste heat from the engine takes its time warming everything up. Efficiency! Turbo Diesel Register Issue 38



Diesel Politics Rudolph Diesel invented his engine quite deliberately, acting from his understanding of thermodynamics. The results weren’t exactly as he had planned, but the efficiency of his engine was high, based as it is upon the dual principles of high compression/expansion ratio and lack of intake throttling. It had another important merit: because its fuel was vaporized by the mechanical action of the fuel injector, assisted by the high temperature of compressed air in the engine cylinder, the fuel did not have to be naturally volatile. The very first internal combustion engines had solved the mixture preparation problem by running on city illuminating gas, and later types by adopting that volatile, unwanted, and dangerous by-product of lamp oil manufacture: gasoline. Unlike Diesel fuel, these fuels could easily form combustible mixtures outside the engines they served. We know that wise owners of gasoline-powered inboard boats take elaborate precautions to force-ventilate the engine space before starting, lest they be blown to Kingdom Come by a gasoline vapor explosion. The Diesel’s relative immunity from fuel explosions, combined with efficiency, got it steady employment as power first for ships, and then for submarines. Early engines ran best on full throttle, as it took time to learn how to control injected fuel sprays over a range of loads. The next natural application was in rail locomotives, where fuel efficiency, reliability, and modular construction were seen as powerful advantages over the traditions of steam. With the development of easier starting and wider load range, Diesel engines next appeared in trucks. It was probably the marine and heavy truck applications that gave the public the lasting idea that Diesel power meant massive weight and dreary (if efficient) performance. I well recall early Diesel transport trucks racing down rolling Pennsylvania hills, then grinding arduously up the next slope in lower gears, followed by a long line of fuming auto drivers.

Help arrived in the form of the turbocharger. This device had long existed—Dr. Sanford Moss at GE had experimented with turbochargers just after WW I—but what was required was the urgency of WW II to force these previously exotic devices into large-scale production, and to compel development of the metal alloys required for turbine blades, stators, and disks. Turbos enabled aircraft to maintain constant horsepower from take-off all the way to maximum altitudes over 30,000 feet. Further help came from the extremely rapid postwar development of jet engines, whose large-scale production drove down the price of the materials technology. The turbocharger turned the Diesel engine into the ideal truck engine. The power to keep up with traffic on the new interstate highways—uphill or down—no longer required a gigantic engine, as the power of a turbocharged Diesel is proportional not to its piston displacement, but to its airflow. And that airflow, delivered by the turbocharger, was really limited only by the mechanical strength of the engine and by the injection system's ability to deliver the required fuel. The denser compressed charge of turbo-Diesels required yet higher injection pressures to achieve the necessary spray penetration, but the results were well worth the development effort. Europe has always been fuel-starved because taxes are high and oil fields are either distant or well protected by political barriers. The automobile got its start in Europe, but it was in the US, where fuel has always been cheap, that the auto first reached mass production to become a necessity of life. Postwar European cars were small, powered by tiny engines. The Diesel engine, by now developed into a flexible, convenient power source, offered a way to build even more economical autos. European nations offered special tax incentives to encourage the production and sale of Diesel vehicles, with the overall goal of limiting the outflow of currencies to oil-producing states.
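
A rough calculation shows why airflow, not displacement, sets the power ceiling of a turbocharged Diesel. The air-fuel ratio, efficiency, and flow figures below are round assumptions of mine, for illustration only.

```python
# Rough sketch: power tracks airflow. All numbers are round illustrative assumptions.

FUEL_LHV_MJ_PER_KG = 42.8   # approximate heating value of diesel fuel
BRAKE_EFFICIENCY = 0.40     # fraction of fuel energy reaching the crankshaft (assumed)
SMOKE_LIMIT_AFR = 22.0      # leanest practical full-load air/fuel mass ratio (assumed)

def crank_power_kw(air_flow_kg_per_min):
    """Crankshaft power supported by a given trapped airflow at the smoke limit."""
    fuel_kg_per_s = (air_flow_kg_per_min / 60.0) / SMOKE_LIMIT_AFR
    return fuel_kg_per_s * FUEL_LHV_MJ_PER_KG * 1000.0 * BRAKE_EFFICIENCY

for flow in (15.0, 30.0, 45.0):   # kg of air per minute: modest boost to heavy boost
    kw = crank_power_kw(flow)
    print(f"{flow:4.0f} kg air/min -> about {kw:4.0f} kW ({kw * 1.341:4.0f} hp)")
# Double the airflow (more boost) and inject fuel to match, and power doubles,
# whatever the displacement that swallowed the air.
```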


The visible result in the US was that Diesel European prestige autos enjoyed a brief vogue in the 1980s, stimulating domestic automakers to "convert" gasoline-fueled designs to Diesel operation. The resulting failures gave Diesel autos a black eye. In this same time period the EPA decided that Diesel particulates were a greater threat to human health than had been appreciated. This triggered a quest for anti-particulate technologies that continues to the present. Very highly turbocharged marine Diesels were meanwhile developed for military and sporting motorboats. The specific power of these engines is impressive, equaling the levels set by some WW II gasoline aircraft engines. Such high power density requires the best in bearings, piston sealing and cooling, and fuel controls. The military forces of the US now began to develop what they called "the single-fueled battlefield," in which all vehicles and other power-conversion apparatus to be procured for any military use in future must run on jet engine fuel. From the turbine engines in tanks and helicopters to the Diesel engines in Army trucks and other vehicles, the single fuel greatly simplifies logistics. Compare this with a past in which three or more aviation gasolines, a vehicular gasoline, and turbine fuel all had to be separately provided. Could this be a model for a civilian future? During the 1990s the trend-setting California Air Resources Board called insistently for the development of zero-emissions vehicles, which came to mean electric cars. Political power proved no more able to prevail over reality in this case than when King Canute commanded the flooding tide to go back out to sea. Efficient electric cars at attractive prices failed to materialize. As a question of further interest, consider how much more severe California's power blackouts could have been had hundreds of thousands of electric cars been forced to take their power from the same grid.


All the while, technologies of gasoline spark-ignition and Diesel engines moved ahead, and even showed some tendency to converge. A stratified-charge, lean-burn gasoline engine begins to look a lot like a Diesel. Diesel engines gained economy and flexibility from electronically-controlled common-rail fuel systems. Either technology can now reach efficiency levels comparable with or superior to the much-ballyhooed fuel reformer fuel cell cycles (fuel cells which get their hydrogen by breaking down a liquid fuel such as alcohol in an on-board reformer). Even higher efficiency can be reached by fuel cells operating on pure hydrogen fuel, but there is no cheap source for this fuel, no convenient means of storing it, and there exists no distribution system for providing it to motorists. This being the case, it looks like we'll go ahead pretty much as before, using piston internal combustion engines powered by fuels that actually exist, are easy to store, and are widely available. While it could be that revolution will in future beat evolution, for the moment the steady evolution of the piston engine is doing the transportation job and promises to do it better every year. Diesel power is not going away any time soon. Turbo Diesel Register Issue 39



Future of Diesel in the US Right now, 40% of new cars delivered in Western Europe are Diesel-powered, while in France, that number rises to 60%. In the US, hardly any Diesel-powered cars are sold, and Diesel and spark-ignition power share the light truck market. Is this as far as Diesel power will penetrate into the nonindustrial transportation sector here in the US? Many years ago European governments gave maximum priority to fuel conservation because almost all petroleum there must be imported. No government relishes an unfavorable balance of trade. Because Diesel engines typically burn only 75% as much fuel per horsepower-hour as do gasoline engines, those governments created fuel tax incentives to widen use of Diesel power. Here in the US, fuel has historically been much cheaper than in Europe. When smog in US cities was officially attributed in large part to vehicle exhaust, the natural priority was to cut smog-forming emissions and do nothing about fuel consumption. Early emissions legislation concentrated on reducing emissions from private cars, most of which were gasoline powered. Diesel engines, because they burn their fuel in the presence of excess air, emit less unburned HC and CO than do gasoline engines. They were therefore, at first, less regulated. In the 1980s, it was discovered that particulates (soot) in Diesel exhaust were carriers for complex polycyclic hydrocarbons, at least some of which are highly carcinogenic. Emissions of oxides of nitrogen (NOx) were of great concern because of their role in smog chemistry. They are particularly hard to eliminate, and they are present in Diesel engine exhaust. For a time, US auto makers seeking lower-emitting engines had considered Diesel power as a possible solution. New emissions regulations for Diesel engines increased the cost of such developments, making it cheaper to continue refining the spark-ignition gasoline engine instead. Costs rule!
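
As a back-of-envelope illustration of that 75% figure, here is a small sketch. The mileage and annual distance are assumptions of mine, the per-gallon values are the ones quoted later in this column, and the calculation ignores the energy-content difference between the two fuels.

```python
# Back-of-envelope use of the 75% figure above. Mileage and annual distance are
# assumptions; the per-gallon values are the ones quoted later in this column.

GASOLINE_MPG = 17.0           # assumed gasoline-powered light truck
DIESEL_FUEL_FRACTION = 0.75   # diesel burns about 75% as much fuel (from the text)
MILES_PER_YEAR = 15000.0
VALUE_PER_GALLON = {"US": 1.65, "Europe": 5.00}

diesel_mpg = GASOLINE_MPG / DIESEL_FUEL_FRACTION
for region, price in VALUE_PER_GALLON.items():
    gallons_saved = MILES_PER_YEAR / GASOLINE_MPG - MILES_PER_YEAR / diesel_mpg
    print(f"{region:7s}: about {gallons_saved:3.0f} gallons and ${gallons_saved * price:,.0f} saved per year")
# The engineering difference is the same on both continents; only the value
# placed on each saved gallon changes.
```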

By contrast, European nations place higher value on reducing total fleet fuel burn than on reaching the lowest possible emissions. European regulations accept somewhat higher Diesel emissions (the NOx component which contributes to smog) in return for their considerable fuel savings. Here in the US, use of Diesel power in automobiles is discouraged by tighter (more expensive to meet) emissions standards. (If you need all the details of the tighter NOx legislation in the US, check out Issue 38, page 28, “Diesel Power in the USA.”) The basic difference between the US and European outlook arises from the different value placed upon the fuel saved by the Diesel engine. This value is both monetary (a gallon saved in the US is $1.65, but a gallon saved in Europe is $5.00) and political. In Europe, all petroleum is imported, but in the US, only about 25% of the oil is imported from Arab countries. In addition, US motorists like their smooth, fast-accelerating, relatively odorless gasoline-powered autos. Sales resistance to Diesel power comes from (1) its association with heavy trucking and therefore unexciting engine performance and (2) the extra noise and exhaust odor of Diesels. What can change these perceptions? In response to tightening emissions standards worldwide, Diesel technology has advanced very rapidly in the past five years. Developments now entering service or about to do so have the power to change the public’s perception of Diesel power. (1) Diesel noise—the clatter of Diesel combustion is caused by the rapid ignition of a large amount of injected fuel, resulting in a steep pressure rise that is like a hammer blow. The high pressure, common rail injection systems now coming into use on advanced engines overcome this by pilot injection. Instead of injecting the whole fuel charge in one continuous spray, a small pulse of fuel is injected first, after which injection pauses. Because the pilot injection


contains only a tiny amount of fuel, its ignition is accompanied by only a modest pressure rise—and little noise. Only then is the main charge of fuel injected. Because it ignites immediately, its pressure rise depends on the rate of injection—which can be controlled. (2) Association of Diesel power with low performance. Older drivers remember being trapped behind Diesel trucks laboriously climbing long hills. A typical heavy-duty truck engine of that period developed 165 horsepower. Today, almost three times that power is typical in highway tractors. The development of turbocharging has made possible Diesel engines that deliver as much power per cubic inch as any sporty gasoline engine made. Anyone who has driven the current generation of sporty turboDiesel European cars is familiar with this truth. Formula One engineer John Barnard has suggested it is time that F1 adopted a turbo-Diesel engine formula to speed the development of such engines for autos. Modern Diesel engines can deliver all the performance any driver could want. (3) Emissions problems—Currently Diesel engines can deliver either low particulates and high NOx, or low NOx and high particulates (We’ve covered these basics! See page __.), but developments such as piezo-electricactuated injector valves, cooled EGR, and exhaust post-treatment are whittling away at the edges of this compromise. Meanwhile, a more profound change in the form of Homogeneous-Charge Compression Ignition (HCCI) is in the research stage. If the difficult load-control problems inherent in this technology can be solved, it may in the future deliver very low levels of NOx emissions, allowing exhaust HC and CO (inherently less with Diesels) to be dealt with by a simple oxidation catalyst process. Meanwhile, the Gasoline Direct Injection (GDI) engine has been touted as the engine of the future—able to close the fuel consumption gap between gasoline and Diesel, but without current Diesel problems. This engine type operates at


low- and mid-loads in a stratified-charge combustion mode, achieving its economy and low NOx emissions through a reduced combustion temperature. GDI is the favored technology in the Japanese auto industry. In Japan, urbanization and crowding limit vehicle speeds. Therefore Japanese emissions-test driving cycles favor low loads. In such test conditions, GDI engines look very good—approaching Diesel levels of fuel economy. Who needs Diesels, say the proponents of GDI, when you can achieve the same results with improved gasoline engines? The surprise has been that GDI has failed to deliver in the US. Here, where higher loads are common, GDI's advantage shrinks because the engine spends less time in its efficient, low-throttle stratified-charge combustion mode. This leaves the Diesel engine very much in the running as a future powerplant for US autos and light trucks. Its position can only grow stronger as current rapid development brings it closer to compliance with US EPA auto emissions standards.

Other problems of wider conversion of auto production to Diesel power include the need for expansion of component manufacture (turbochargers, EGR coolers, exhaust catalyst systems, etc.) and for higher production of low-sulfur Diesel fuel (required if catalysts are to be used on Diesel exhaust). Fuel consumption aside for the moment, the fact is that many Americans still like big cars. A big car can carry the whole family, while towing a boat or horse trailer. Auto emissions standards had nearly killed off the big family sedan, but then the SUV arrived, made possible by an emissions loophole intended to protect certain commercial vehicles from regulation. Despite attack from environmentalists and others, the SUV thrives and earns record per-unit profits for its producers. This vehicle is an ideal application for turbo-Diesel power. Wider conversion to Diesel cannot happen while engines now in production do not meet the 2004 and tighter still 2007 EPA emission standards. Which will change first—the standards or the engines? Compliance with each new level of emissions reduction costs motorists the hundreds of millions of dollars required to develop and manufacture the necessary technology. This cost is hidden in the purchase price of new vehicles (some estimate that compliance and "contingency engineering"—planning for possible future regulations—add 40% to the price of new vehicles) and in the various inspections and repairs required to maintain compliance. Will this remain acceptable to most Americans if in addition the price of petroleum continues to rise rapidly? Our lives are constructed around cheap transportation. At some level of fuel price, advanced Diesel engines will become the best and cheapest all-around solution. Turbo Diesel Register Issue 40


Flame Diffusion and Your Next Diesel Diesel and spark ignition combustion are quite different. In the gasoline engine, a pre-mixed fuel-air charge is ignited at one or more points by spark. As this mixture is nearly uniform throughout the combustion chamber, turbulent flame propagates through it, igniting the mixture ahead of it as it travels. The diesel process begins with pure air that has been heated by compression to several hundred degrees—well above the ignition temperature of the fuel. Into this dense, hot air is sprayed liquid fuel as a very high speed spray—moving at close to 1000 feet per second. The tiny fuel streams collide with the air, become unstable, and break up into pieces whose fluid surface tension tends to draw them into spherical form as droplets. Large droplets, still moving fast, break into smaller ones, rapidly increasing the total surface area of the injected fuel. Evaporation into fuel vapor takes about one millisecond. Ignition can't take place instantly because evaporation is a cooling process—energy is taken from the surrounding hot air to give fuel molecules the velocity they need to break free from the liquid state. By momentarily cooling the surrounding air, evaporation delays ignition for several crankshaft degrees. But rapid local mixing is being driven by the turbulence of the air charge—in many Diesel engines there are swirl-inducing fences cast into the intake ports for the purpose of forcing charge air to enter the cylinder tangentially, creating rapid swirl. This turbulent mixing brings fuel vapor into contact with hotter air. Igniting combustion is like starting a small business—you don't succeed unless conditions are right. Many such businesses fail because their income takes too long to build up, and they therefore cannot afford the interest on their starting capital. So it is inside a diesel combustion chamber. Where the local mixture of fuel vapor and air is too rich, the extra fuel takes energy from any small flame kernel that springs into being, slowing its spread or putting it out of business entirely. Where the

local mixture is too lean, it is an excess of air that robs energy from incipient flame kernels. But where fuel and oxygen happen to exist in chemically correct proportion, maximum heat is generated in flame kernels, and there are minimum losses to their surroundings. It is therefore here that ignition takes place most decisively, and here that flame spreads fastest. Think of the fuel spray region as an "onion" of layers, each of a particular fuel-air ratio—beginning with 100% air, zero % fuel at the outside, and becoming richer as we penetrate inward to increasingly fuel-rich inner layers. One of these layers is, as Goldilocks said, "Just right." It is here in this just-right layer, where fuel and air happen to be mixed in exactly right proportion, that ignition has the best chance of establishing itself and progressing rapidly. Now it gets more interesting. If this were a spark-ignition engine, the fuel-air mixture would be the same everywhere, and the flame front could race through it easily. But it's not the same everywhere. As we travel outward from this just-right, chemically-correct mixture layer, the mixture becomes leaner. As we travel inward, toward the fuel-dense core of the spray, the mixture becomes richer. In both directions, the conditions for combustion become less favorable. Therefore the flame spreads fastest along the just-right layer of the onion, but cannot move very fast inward or outward. The flame therefore spreads laterally through the onion shell of correct mixture, enveloping the fuel spray with a combustion layer. Because of the ideal combustion conditions in this layer, its burning temperature is very high, so this is where nitrogen oxides are most likely to be produced. Once this layer has ignited and spread, it is fed by fuel diffusing outward from the fuel-rich zone at the center of the spray, and by air diffusing inward from the leaner zones farther out. Where fuel and air meet in nearly correct


proportions, they burn well. Elsewhere, burning occurs but, because it is either leaner or richer than ideal, combustion is slow and/or incomplete, and must wait for further turbulent mixing to find the additional fuel or air required. This situation is the "diffusion flame" that has been and is being so intensively studied. Now think of what happens as fuel diffuses toward this flame zone. The more closely it approaches, the hotter it gets. What is "hot"? Higher temperature in a gas means that each molecule is, on the average, moving faster. As the fuel molecule—for example a chain or ring structure of ten or so carbon atoms with attached hydrogen atoms—approaches the combustion zone, it encounters faster and faster moving gas molecules, beating against it. At some point, the most energetic of these collisions carries enough energy to overcome the bond energy of the least-strongly-bonded hydrogen atom. It flies off rapidly to its fate—combining with one or more oxygen atoms to form either active fragments like OH radicals, or "going all the way" to burn completely to water—H2O. The closer this carbon structure comes to the flame zone, the more hydrogen atoms it loses. Ideally, our carbon chain or ring should itself break apart, finally reaching the flame zone as individual carbon atoms, able to enter into mature consensual unions with oxygen atoms to form either CO (carbon monoxide, itself an excellent fuel which will burn further when in hot contact with oxygen) or carbon dioxide. But in fact the carbon structure strongly resists being broken apart, and there are many of them in the fuel-rich zone approaching the flame. Carbon is very sticky stuff—it likes to adhere to itself and to other atoms, which is why carbon is used to absorb bad tastes from whisky and cigarettes (whisky barrels are charred on the inside, and filter tips contain carbon). It is also why the most powerful brakes—those on large aircraft and Grand Prix cars and motorcycles—have pads and disks both made of carbon. Therefore many of the


carbon structures clump together as they collide in the molecular free-for-all. This renders them better able to resist the storm of molecular collisions that the nearby flame zone is producing. Some of the clumps are broken up and burn to CO or carbon dioxide, but the complex statistics of this molecular dodge-‘em game guarantee that some carbon clumps survive and even grow steadily larger. These we call diesel particulates, or soot. The diffusion flame is often studied by making high-speed movies of the ignition and combustion of single fuel droplets. In order to remove the effects of gravity (the hot gases from the flame are lighter than the surrounding air because of their thermal expansion, so in a gravity field they rise), such single droplet combustion experiments have been flown on the Space Shuttle in an apparatus called the “glove box.” On earth, the single droplet is suspended at the end of an ultra-thin quartz fiber and is ignited electrically. In the absence of gravity, and in perfectly still air, the diffusion and combustion processes can be studied in their most elemental and undisturbed form. Light emitted from the various diffusion zones contains specific frequencies which reveal useful information about the stages of chemical reaction. One of the fascinating things seen in droplet movies is “sunspots”— black objects swimming around on the droplet surface. These are tiny crusts of pyrolyzed fuel—carbon particulates— which result from the loss of hydrogen from fuel molecules as they approach the flame zone. Carbon clumps form as described above, and some fall back onto the fuel droplet to become these sunspots. As the fuel droplet evaporates to nothing, some of these carbon particles remain, while some are burned or partially burned. It has been proposed that water be emulsified into Diesel fuel—that is, broken up into droplets so small that they do not join into larger droplets or settle to form a heavy layer on the bottoms of fuel tanks.

When a fuel droplet containing subdroplets of water is ignited in droplet combustion studies, it naturally is heated by the combustion layer surrounding it. When the droplet's temperature significantly exceeds that of boiling water, the water droplets flash into steam, blowing the burning droplet apart. This achieves fresh mixing of fuel and air, and thereby may improve combustion. This model also reveals how a series of tiny fuel injection pulses, as opposed to the traditional single injection, may generate fewer particulates. Each pulse of fuel injected produces its own cloud of fuel droplets—an onion with its own layer of chemically-correct mixture that can ignite and burn rapidly. A greater total area of such layers is generated in several smaller onions than in one large one, providing a greater total flame surface into which fuel and air may diffuse to complete combustion. Unfortunately, rapid, high-temperature combustion in the chemically-correct mixture layer is responsible for the production of nitrogen oxides—the most difficult of the smog-forming chemistries to eliminate. Nitrogen exists in air (air is 78% nitrogen) as diatomic molecules of two tightly-bonded nitrogen atoms. It requires a violent molecular collision to break this nitrogen-nitrogen bond and set single nitrogen atoms free so they can combine with oxygen—that is, it requires high temperature. At present, techniques which reduce soot formation tend to increase production of nitrogen oxides, and methods of cutting nitrogen oxide production generally lead to higher soot production. Like the condemned prisoner asked to choose between being hanged or shot, the diesel engineer yearns to say "None of the above." If combustion is made more complete through improved mixing, higher injection pressure, smaller fuel droplet size, multi-pulse injection, etc., this will reduce production of particulates but will


tend to increase production of nitrogen oxides. The main technique for reduction of nitrogen oxide generation is to reduce flame temperature. Intercoolers were added to your Turbo Diesel truck in 1991 to cool the intake air and reduce in-cylinder temperatures. The currently favored method is to dilute the intake flow with some cooled inert exhaust gas. Sadly, reducing flame temperature subjects the inevitable carbon clumps to less high-speed molecular banging and hammering, so more of them will survive right through combustion to be blown out in the exhaust to the waiting soot detectors of the EPA. Let’s say we choose reduced flame temperature, because nitrogen oxides are so hard to get rid of. Let’s start by diluting the intake air 10% with oxygendepleted, inert exhaust gas recirculation as our means of reducing flame temperature. That 10% recirculated exhaust gas is equivalent to throttling our engine’s intake flow by 10%, thereby reducing its power by the same amount. This is like paying for a 300-hp engine and receiving only 270-hp. How popular is that? If we decide against reduced flame temperature, we have to get rid of the added nitrogen oxides that result by employing a reducing catalyst in the exhaust. Sadly, the catalyst doesn’t work so well in diesel exhaust because its temperature is lower than the temperature at which the cat likes to operate. So we tweak one of the new multi-pulse fuel injectors to inject some “re-heat” fuel late in the cycle, to heat the exhaust back up again. You couldn’t do this in a spark-ignition engine because it burns up all the oxygen in its charge, but Diesels, in order to avoid producing exhaust smoke, burn only about 80% of their oxygen on full throttle. This leaves enough to make re-heat practical. Burning fuel late in the cycle wastes the power it might have produced, so this is not economically attractive either. But maybe we decide to live with it.
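
The arithmetic of that trade can be put in a few lines. This sketch simply restates the simplification used above, that displacing a fraction of the intake air with inert exhaust gas gives up the same fraction of full-load power.

```python
# The EGR arithmetic above, restated. This keeps the same simplification as the text:
# the inert fraction of the charge displaces air and therefore full-load power.

RATED_HP = 300.0   # the example engine above

def full_load_hp(egr_fraction):
    """Power remaining when a fraction of the intake charge is inert recirculated exhaust."""
    return RATED_HP * (1.0 - egr_fraction)

for egr in (0.05, 0.10, 0.20):
    print(f"{egr:.0%} EGR -> about {full_load_hp(egr):.0f} hp from the '{RATED_HP:.0f} hp' engine")
# 10% reproduces the 300-hp-bought, 270-hp-received example; flame temperature and NOx
# fall, but more of the carbon clumps survive as soot.
```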


Or maybe we decide to go with the reduced flame temperature, which cuts nitrogen oxides but boosts production of particulates. We could use a particulate filter instead of an exhaust catalyst. Does it get plugged after a while? Can we periodically burn the particulates off of it without burning the filter as well? Filters and cats aren’t free, so we convert their price to a per-mile expense and make the usual comparisons. Headache time! Well, folks, if we reconsider the lowsoot, high nitrogen oxides approach, there’s also a technology that adsorbs the nitrogen oxides onto an active substrate and holds them there for a while. Periodically we react them back to ordinary nitrogen with a chemical reaction, perhaps involving an outside source of nitrogen. More technology to buy and maybe even trick fluids to carry along with us as we drive? What next? A fine from the EPA for driving with an empty urea tank? Or maybe we’ll be ratted out by our new engine-control computers, which will surely keep track of such things and report them to the dealer. At this point in a bad dream, I usually try to wake up, realize that it’s all imaginary, and go downstairs to make myself a good breakfast. Diesel engineers are not allowed to wake up because all of the above is real. Turbo Diesel Register Issue 41



Invisible Technology A turbocharger uses the exhaust energy of a piston engine to drive a fast-spinning turbine coupled to a centrifugal supercharger. The pressure output of the supercharger in turn raises the power of the attached piston engine. The idea of a turbine—a device for turning fluid flow into rotary motion—is as old as the human eye, watching a winged seed spin down from a maple tree. A variety of water turbine designs improved upon the efficiency of simple water wheels, and the steam turbine took over the generation of electric power after 1900. Working at the Swiss diesel engine builder Sulzer in 1911, Alfred Buchi used an exhaust-driven turbine to supercharge a diesel engine. The French turbine engineer Auguste Rateau was asked by the Lorraine-Dietrich firm to develop a gear-driven supercharger for an aircraft engine in 1915. He rejected the idea, reasonably pointing out that to deal with the fall of air pressure with increasing altitude, such a device would require a variable drive ratio. In 1916 he considered the alternative idea of a turbocharger, whose rpm and output would not be tied to engine rpm. The following year he made tests with an experimental turbocharger. Meanwhile attempts were made to build a self-driving turbine engine—in effect a turbocharger whose compressor supplies air, not to a piston engine, but to a burner whose hot, expanding output of gas is fed back to drive the turbine. If mechanical power is taken from the turbine, the result is a shaft turbine. If the engine is instead designed to produce jet thrust, the result is a turbojet. A shaft gas turbine, built in France just after the turn of the century, produced little power at an efficiency of about 3%. In the US, a graduate student at Cornell, Sanford Moss, began work with the turbine concept in 1901, arousing interest at GE (bear in mind that the steam turbine revolution was at this time in its first great rush of success). By 1907 Moss had a turbine running for GE, but

it revealed the following discouraging truths:

(1) The low strength of available materials at turbine temperatures limited cycle efficiency. (2) The efficiency of compressors was low. (3) Turbine efficiency was also very low.

This caused GE to give up gas turbines in 1907. The French effort closed two years later. The Brown-Boveri firm now made a turboblower system to increase the output of boilers, selling the first examples around 1910. This was a simple kind of gas turbine cycle whose product was hot gas rather than mechanical power. Knowing of Rateau's aircraft turbocharger work, Cornell Professor W.F. Durand asked Sanford Moss to consider its problems. Moss, who is said to have disliked airplanes, ran the first GE turbocharger in 1918. To evaluate its performance at high altitude, a dynamometer was built onto the bed of a truck, which was driven to the summit of Pike's Peak. A Liberty V-12 aircraft engine, normally giving 400-hp at sea level, gave only 221-hp in the thinner air atop the mountain. When the turbo was fitted, the increase in intake manifold pressure pushed power back up to 356hp. The idea was proven. The critical problem motivating this work was the loss of power by aircraft engines as they climbed to higher altitudes. In the then just-ended First World War, the altitude performance of aircraft had become a matter of strategic importance. The British, too, had made tests with simple turbochargers but had decided the fire risk from potential failures of hot plumbing was too high. (Many such failures would later plague the US B29, each of whose engines was served by two B-11 GE turbos.) They instead adopted the gear-driven centrifugal supercharger previously rejected by Rateau, and would develop it to high efficiency by 1940.

During the 1930s much work was done in Europe to achieve cooling of turbine blades by means of air or water. Little of practical use resulted.

Once made aware of the value of the GE turbocharger, the US Army paid all the development costs of turbocharger work at GE from 1919 through WW II. The GE turbo employed a turbine that looked just like one stage of a modern jet engine turbine. The rim of a disk several inches in diameter was fitted with many short, wing-like vanes, and engine exhaust was directed against these by a circular nozzle-box. The spinning of the turbine placed great centrifugal stress on the very hot blades, which through a process known as “creep” gradually grew longer until they stretched apart and flew off. From 1918-1922, the blades were made of an ordinary spring steel and failed quickly. The vanes in the nozzle box suffered “scaling”—a kind of accelerated rusting in which an ironbased alloy combines with oxygen to form layers of scale which are sloughed off until the part is too thin to survive. In 1922 a new material, Silchrome I, was adopted. This added nearly 10% chromium and a small amount of silicon, thereby achieving useful effects: (1) Chromium combines with oxygen to form a tough protective layer of chromic oxide, keeping oxygen from reaching iron atoms to generate scale. (2) Chromium also forms hard carbides with the carbon in steel. These tiny carbide particles act like “pins,” to prevent the sliding of layers of atoms across each other. The result is increased resistance to creep at high temperature. (3) Silicon combines with iron and oxygen to form a silicate glass that acts as a further barrier against oxidation. In the early 1920s the US Army became aware of a new stainless alloy,


KE965, then being used to make highperformance exhaust valves in Britain. Its use in GE turbos allowed safe blade temperature to rise from around 1100deg F to almost 1400. In this material both chromium and nickel, with a little tungsten, are combined with iron and carbon to form an austenitic crystal structure. Tungsten’s value at high temperature had already been proven in so-called “high-speed steels” for use in metal cutting. Tungsten combines with carbon to form extremely hard carbides to pin atomic layers, preventing gliding movement. The nickel and chromium, because they dissolve in iron but have different atomic sizes, further impede deformation by acting as local regions of stress. KE965 was the Army’s turbo blade material from 1928-33, when an improved but similar material, 17W, was adopted. Meanwhile another set of conditions made ready to drive a new program of materials improvement. British Royal Air Force officer Frank Whittle had been told by “experts” that his gas turbine (jet) engine would be too heavy to fly (by ignorant analogy with massive marine steam turbines) and would run too hot to survive more than a few minutes. He stuck by his own calculations and in 1936 found private development money. Shortly he had a machine running well enough to impress those who saw it. By 1939 the British Air Ministry reversed itself, taking the project from Whittle and assigning it to private industry. World War Two began in September 1939, greatly increasing the urgency of development. The original turbine blade material “Stayblade” (a steam turbine alloy) was so vulnerable to creep that after short running time blade length had increased significantly. When the engine was shut down, the loose blading could be heard to make a clinking noise while the turbine wheel coasted to a stop. A better material—Rex-78—was substituted for the moment, and the Wiggin Laboratory of the Mond Nickel Co. was set the task of quickly developing improved turbine blade and nozzle materials.

By July of 1942 they had produced the first blades in the nickel-based alloy Nimonic 80, which enabled, for the first time, reliable jet engine turbine disk operation for 25 hours. Earlier, the US Haynes Stellite Corp. had produced some corrosion-resistant alloys that later turned out to have outstanding properties at high temperatures. These were the first three "Hastelloy" materials, A, B, and C. When tested by GE in the summer of 1941, Hastelloy B was the most promising. At its plants in Lynn, Massachusetts, GE tested aircraft turbochargers by operating them as low-grade gas turbines—connecting compressor output to a simple burner can, injecting fuel, and routing the resulting hot gas back to drive the turbine wheel. In 1937, a Swiss engineer, Rudolph Birmann, had organized Turbo Engineering Corp. here in the United States. His 1922 university thesis had described a gas turbine. Now he offered the US Navy a new, more compact kind of turbocharger, driven by a radial inflow turbine, just like the type used in today's automotive turbos. As in the case of the 19th-century Francis water turbine, this device led the flow into a snail-shaped housing surrounding the turbine wheel. This set the flow into whirling motion so that as it flowed radially inward, it gave up its rotational energy to the wheel, emerging along its axis at the center. Turbo Engineering equipped the R-2600 radial engine of a TBF aircraft with such a turbo blower in late 1941. This system was able to maintain sea level power all the way to 40,000 feet. Birmann's turbine wheels were forged of a heat resistant steel containing nickel, chromium, and tungsten. Its blades were internally air cooled. He argued that his design could operate at a higher speed for a given flow than could the large wheel turbines of aircraft turbos. It also exposed less blade surface to hot gas. Being made in one piece, Birmann's radial-inflow turbine


solved the old problem of how to attach the turbine blades. The advantages of present-day turbos were all listed by Birmann more than 60 years ago. Unfortunately when one of Birmann’s novel turbos was fitted to a Navy Hellcat XF6-F2, it proved unreliable. Since the war, the diesel engine has become the workhorse of the transport industry. The turbocharger, once it was made reliable, transformed the diesel from an economical but heavy monster into a system capable of producing essentially as much power as anyone could wish. There are turbocharged diesels today capable of producing almost one horsepower per pound of weight—the equivalent of the muchadmired and very powerful piston aircraft engines used in WW II and through the 1950s. As always, the diesel’s high compression ratio and lean combustion make it the most efficient of internal combustion engines—diesels give maximum “bang for the buck.” At present the industry standard turbocharger turbine wheel material is Inconel 713C, or 713LC. This material offers high creep resistance at high temperature and is easily cast. Creep resistance allows the hot turbine wheel to withstand the strain of spinning at 100,000 rpm or more for hundreds of hours. The ability to be cast is important, for it would be prohibitively expensive and difficult to machine these parts from solid—both because the material is extremely hard and because the desired shape is complex. Inconel 713 is nickel-based (73%) and contains chromium for both oxidation resistance and for solid-solution strengthening along with molybdenum. This is hardening brought about by the local strain—think of this as “bumps”— within the crystal, caused by the presence of the different-sized atoms of chromium and molybdenum. Some aluminum and titanium are present (6%) to bring about precipitation hardening. As the material cools from


melt temperature, the aluminum and titanium can no longer remain dissolved. The excess precipitates out of solution to form regions of hard intermetallic compounds such as nickel aluminide. For a model, think of how rock candy forms as a hot, saturated solution of sugar in water cools. The precipitated intermetallic forms tiny particles which remain present even at high service temperatures. They act as super-strong pins to prevent slipping of layers of metal atoms past each other (creep). Notice that there is a small amount of carbon in 713C (0.2%). This can be useful in forming very hard chromium carbides. Because such carbides form preferentially in the "interstitial zones" between crystals, the process somewhat depletes nearby crystals of their chromium, leaving them open to oxidation attack. To prevent this, small amounts of niobium and tantalum are added. These metals glom onto the carbon first, allowing chromium to stay put and do its anti-oxidation job. This is probably important in turbos used on Diesels, which have significant amounts of free oxygen in their exhaust. In the 713LC, the letters 'LC' stand for 'Low Carbon,' in this case less than 0.05%. Everyone has heard of "turbo lag," which is the time taken for the turbine wheel to accelerate to a speed high enough to deliver rated boost when the throttle is opened. Metal turbine wheels are heavy—their density makes them about eight times heavier than the same volume of water—so lighter turbines would cut turbo lag. One approach is to make the wheel out of a heat-tolerant ceramic such as silicon nitride or silicon carbide. Such wheels have been made, but brittleness remains a problem because ceramics are extremely defect-sensitive. The free electrons present in metals act as a kind of molecular glue that allows metals to tolerate the existence of small cracks. In ceramics, all the electrons are tightly bound, making the materials less fault-tolerant.
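
To see roughly what the density difference is worth, here is a sketch comparing the spin-up energy of a metal wheel and a hypothetical ceramic wheel of the same shape. The wheel inertia is an assumed figure of mine; the densities are approximate handbook values.

```python
# Sketch: spin-up energy of a turbine wheel, same geometry, two materials.
# The inertia figure is assumed; densities are approximate handbook values.
import math

METAL_WHEEL_INERTIA = 3.0e-5   # kg*m^2 for a small nickel-alloy wheel (assumed)
DENSITY = {"Inconel 713": 8000.0, "silicon nitride": 3200.0}   # kg/m^3, approximate
TARGET_RPM = 100000.0          # the shaft speed mentioned above

omega = TARGET_RPM * 2.0 * math.pi / 60.0   # rad/s
for material, rho in DENSITY.items():
    inertia = METAL_WHEEL_INERTIA * rho / DENSITY["Inconel 713"]   # inertia scales with density
    energy_j = 0.5 * inertia * omega**2     # kinetic energy the exhaust must supply
    print(f"{material:15s}: about {energy_j:4.0f} J to spin up to {TARGET_RPM:,.0f} rpm")
# With the same surplus exhaust energy available, a ceramic wheel of identical shape
# would come up to boost roughly two and a half times sooner.
```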

You can feel the effects of electrons in materials directly every time you drink hot coffee or tea. The cup, being ceramic, conducts heat slowly because its bound electrons cannot move around to transmit heat rapidly. The spoon— especially if it is solid silver—conducts heat much more rapidly because the electrons in it are free, almost like a gas. Their mobility allows them to transmit heat rapidly. It may be possible to make improved ceramics by either reducing defect size or by compressing the parts when hot by “HIPping” (comparable to forging of metals). Another possibility now receiving attention is that of making hot parts out of solid intermetallics like nickel aluminide, which is also lighter than conventional heat-resistant metals. Intermetallic engine valves and turbo rotors have been made and are possible commercial materials for the future. Turbochargers are small machines that look as simple as the basic idea behind them. The high technology—of turbine and compressor aerodynamics, and of high-temperature metallurgy—is invisible. Turbo Diesel Register Issue 42



It’s a Drag Once when I was droning across our great nation in a dinky Class-C motorhome, I found myself trying to figure out what its aerodynamic drag might be. I measured its frontal area at the next fuel stop—93 inches wide by 108 inches high, or about 70 square feet. To figure the drag, I would have to know what aerodynamicists call the “dynamic pressure,” or just “Q.” Dynamic pressure is the pressure that results when the energy of moving air is transformed into pressure by stopping it. Putting the flat of your hand out the window at highway speed is a crude measure of this pressure. It increases as the square of speed. At 65mph it is about 11 pounds per square foot, and at 100mph it is about 25 pounds per square foot. Now it gets a little more complicated, because the motor home (or Turbo Diesel pickup, or whatever you are driving) is not a flat plate moving with its plane perpendicular to its direction of motion. Instead it has rounded edges or other attempts at “streamlining,” so the full Q is not developed everywhere on its frontal area. For a given shape, therefore, its degree of streamlining is stated as a multiplier, a coefficient. This is a number, less than one, which tells us how good or bad our shape is. Very poor streamlining, such as the proverbial side of a barn, has a drag coefficient of 1 because it is just a flat plate. As an object is better streamlined, it acts as if it were smaller, so the coefficient is smaller. For a highway truck or other breadbox like my motor home, or for an unstreamlined motorcycle and upright rider, this number is often about 0.6. For very slinky latemodel car, there are claims all the way down to 0.32. For really small numbers, you have to look at fish-like shapes such as that of the great airships of the early 1930s, whose drag coefficients were as low as 0.07. The key to low drag is that, after pushing the air aside to make room for your vehicle, you take care to smoothly put the air back together again after it passes. Otherwise you leave behind you a turbulent wake, boiling with energy that your engine has to supply. Making the front of your vehicle nicely rounded helps a bit, but not as much

as does making it gradually narrower or lower toward the rear. The aim of such narrowing or tapering is to keep the airflow attached to the shape, rather than letting it separate to form turbulence. Now let's estimate some drag. In the motor home example there are 70 square feet of frontal area, the estimated drag coefficient is 0.6, and the dynamic pressure is, say, 11 pounds per square foot. Multiply them all together to give an estimated force required to push the vehicle through the air: 70 X 0.6 X 11 = 462 pounds. If we multiply this force by the vehicle's speed in feet per second, we get the number of foot-pounds of work we must do to push the shape through the air, per second. If our speed is 65 mph, this is 95 feet per second, so 95 X 462 = about 44,000 ft-lb per second. One horsepower is 550 ft-lb/sec, so we divide 44,000 by 550 to get the power our shape needs to drive it, or 80 horsepower. To estimate whether this is reasonable, we can figure backwards from fuel consumption. The motor home is a nasty gas-burner, and four-stroke gasoline engines need about 0.5 pound of fuel, per horsepower, per hour. I know my swaying road-palace gets 8 miles per gallon no matter what speed I drive it, and at 65 mph that is 65/8 = 8.125 gallons per hour. At 6 pounds per gallon that is 8.125 X 6 = 49 pounds of fuel per hour. At half a pound of fuel per horsepower-hour, this should be giving me something like 98 hp (49 pounds divided by .5 = 98). If my motor home had a Diesel engine, it would need just as much power but use only about 2/3 to 3/4 as much fuel. Assuming these rough figures are correct, how do we explain the 80 hp and 98 hp calculations? Where does the difference go? Tires get hotter the faster you drive, as a result of the rolling resistance that comes from their constant flexure. This easily eats up the 18 hp difference. Now let's assume we're driving a pickup-sized vehicle with more like 45

70

square feet of frontal area. At the same 65 mph (or 95 ft/sec) we'll have the same dynamic pressure of 11 pounds per square foot, multiplied by our "breadbox" drag coefficient of 0.6, so our drag force will be 45 X 0.6 X 11 = 297 pounds. At 95 ft/sec this requires us to do 95 X 297 = about 28,500 ft-lb/sec of work. To get the power requirement we divide by 550, giving 52 hp. Let's add in 15 hp for rolling friction, giving 67 hp. To convert this into estimated fuel mileage, we multiply 67 by a good Diesel engine specific fuel consumption of .38 pound per hp-hr, so 67 X .38 = about 25 pounds of fuel per hour. Diesel is a little heavier than gasoline, so this is 3.8 gallons per hour, or 65 mph divided by 3.8 gallons = 17 miles per gallon. What if we want to go lots faster? Dynamic pressure increases as the square of the speed, but to get drag horsepower, we have to multiply by speed a third time (feet per second times the drag force), so the bad news is that horsepower requirement increases as the CUBE of speed. That is, if we now want to go twice as fast—130 mph—our original 52 drag horsepower will increase eight times, to over 400 hp. Our rolling friction will increase too, bringing our power requirement, in round figures, up to about 450 hp. This is why Bonneville racers who wanted to set a truck record using a highway tractor found themselves making a ton of horsepower from their quad-turbo, two-stroke, 16V92 Detroit Diesel marine engines, but they were quite unable to crack the 200 mph mark. Their monster creation was making big wheelspin ruts in the salt and leaving a trail of black smoke that a coal-burning tramp steamer could envy—but not going any faster. Then they started trying to close up their wake by turning that boxy 1943 tractor into a fish. A long housing was built around the 16V92 engine, tapering elegantly almost to nothing at the tail. (The rear section mainly held the parachutes for stopping.) Why should this work? Think of the tapering tail as


being a tapered wet bar of soap, gripped by the “hand” of the surrounding air pressure. Squeeze a wet, slippery, and tapered object and it shoots out of your hand. That same air pressure could not act on the flat back of the tractor’s original cab because the flow separated from its too-sudden curvature, leaving the wake a random, whirling tangle of vortex flows—all of them carrying away energy. But taper the rear of the vehicle so that the flow remains attached to it and its pressure can be put to work to overcome drag. Fish don’t know it, but their shapes are about as streamlined as anything can be. With the slinky tail, that giant Diesel achieved a speed of around 260 mph. That would otherwise have required twice the huge power they were already making (which wouldn’t have helped anyway because the tires were already spinning at “only” 200 mph!). See Issue 34, page 152, for the write-up on the Phoenix diesel truck. You’ve noticed that stylists have been rounding the corners of even the most utilitarian vehicles lately. At least some of the time, this has the purpose of reducing the drag coefficient by teasing the air into following the shape at least a little bit before separating into the normal turbulent wake. This helps the manufacturer keep his fleet average fuel consumption from being too terrible— and it may save a few dollars’ worth of fuel over the vehicle’s lifetime. Back when the Volkswagen van was in prototype, the plan was to make its body panels as flat as possible to cut manufacturing cost (the more complex the die, the more it costs). With the resulting sharp-edged body, the best the clattering air-cooled flat-four engine could coax was about 45 mph. And so back it went for reconsideration; with the corners rounded (as can be seen on the many antique VW vans parked at Grateful Dead revivals) the machine was able to go fast enough to keep up with slow interstate traffic.
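For readers who want to play with these figures, here is a small Python sketch of the same estimates: dynamic pressure from speed, drag force from frontal area and drag coefficient, power from force times speed, and fuel burn from brake specific fuel consumption. The frontal areas, drag coefficients, rolling-friction allowance, and fuel figures are the round numbers used above, treated as assumptions rather than measurements.

```python
# Rough drag/power/fuel estimate using the round numbers from the text.
AIR_DENSITY = 0.00238      # slug/ft^3 at sea level
FT_LB_PER_HP = 550

def drag_power_hp(frontal_area_ft2, cd, mph):
    fps = mph * 5280 / 3600                       # speed in ft/s
    q = 0.5 * AIR_DENSITY * fps**2                # dynamic pressure, lb/ft^2
    drag_lb = frontal_area_ft2 * cd * q           # drag force, lb
    return drag_lb * fps / FT_LB_PER_HP           # horsepower to push the shape

def fuel_lb_per_hr(hp, bsfc_lb_per_hp_hr):
    return hp * bsfc_lb_per_hp_hr                 # brake specific fuel consumption

# Motorhome: ~70 ft^2 frontal area, "breadbox" Cd of 0.6, at 65 mph
mh_hp = drag_power_hp(70, 0.6, 65)
# Pickup-sized vehicle: ~45 ft^2, same Cd, plus ~15 hp of rolling friction
pu_hp = drag_power_hp(45, 0.6, 65) + 15

print(f"motorhome aero power:         {mh_hp:.0f} hp")
print(f"pickup total power:           {pu_hp:.0f} hp")
print(f"pickup Diesel fuel burn:      {fuel_lb_per_hr(pu_hp, 0.38):.0f} lb/hr")
# Doubling speed multiplies aerodynamic power by 2 cubed = 8
print(f"pickup aero power at 130 mph: {drag_power_hp(45, 0.6, 130):.0f} hp")
```

Run as written, the results land within a few horsepower of the figures in the text; the small differences come from using the exact sea-level air density instead of rounding the dynamic pressure to 11 pounds per square foot. The last line also shows the cube law at work: doubling speed multiplies the aerodynamic power requirement by eight.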

Ever think about Formula One cars and their reputed 800 horsepower? With all that power they ought to be able to go a lot faster than they do—a measly 200 mph. What's happening is that they must use a lot of their power to drive their downforce system. This consists of front and rear wings of specified dimensions, plus a complex undercar venturi flow. About half the car's power goes into driving all this trickery, generating enough downforce that the cars could race on the ceiling if need be. This downforce translates into extra grip from the tires, enabling the cars to corner at about 3 G lateral acceleration and to brake at 3-4 G. Their top speed isn't much greater, if at all, than it was when F1 cars made only half as much power, but their lap times are quicker because they don't have to slow down so much for the corners. Drag is a drag. Even a superslippery Zeppelin needed a thousand horsepower or so to push through the lower atmosphere at 40-60 mph. Up at 30,000+ feet in the much thinner air, a 747 at subsonic cruise still needs about 50,000 pounds of thrust to shoulder its way through the stubbornly resisting molecules. But air has its pleasures, so we accept the compromise. Turbo Diesel Register Issue 43

71


Diesel Review Periodically I like to mentally review the several reasons why Diesel engines are so much more economical than their competitors. The first of these is their use of a high compression ratio. The most obvious reason this is a benefit is that only by highly compressing its air charge can a Diesel engine ignite its fuel. Compression raises the temperature of the air charge high enough that the fuel, when sprayed into it, is promptly heated enough to ignite, requiring no ignition sparks. When a Diesel is cold-started, the mass of cold metal surrounding the air as the piston compresses it can take enough heat from it to prevent normal auto-ignition, so a cold engine can be difficult to start. To enable cold-starting, electrically heated glow plugs or some other starting assist are therefore temporarily required. Once the engine starts, this auxiliary ignition source is switched off and the engine continues to run on the heat of its own compression process. High compression increases efficiency. It does so by greatly increasing the temperature and pressure of combustion, and then by highly expanding the resulting high-pressure combustion gas. If a cylinder of uncompressed air were mixed with a hydrocarbon fuel and ignited, its pressure would rise by about seven times. Since atmospheric pressure is about 15 psi, this would result in a peak pressure of about 7 X 15 = 105 psi—not a pressure that would make driving a Cummins-powered vehicle very exciting. The rule of thumb is that peak combustion pressure is roughly equal to 100 times the compression ratio. In the above example the nonexistent "compression ratio" is 1:1, which multiplied by 100 gives us a similar peak pressure—about 100 psi. In normal engines the compression ratio is also the expansion ratio. Imagine our hypothetical Diesel engine has a 17:1 compression ratio and that it has just burned its injected fuel to a peak pressure of 1700 psi. If we open the exhaust valve when the piston has half-expanded this high-pressure gas, we are

throwing away useful energy because the hot gas is still at several hundred psi. It contains valuable pressure energy which can still do useful work on the piston. So we wait to open the exhaust valve until the piston has expanded the gas to a pressure low enough that what remains cannot efficiently do useful work on the piston—a pressure on the order of 100 psi. Fortunately for us, this is plenty of pressure to operate a very low-friction, high-speed device—a turbocharger. Why not just keep increasing the compression ratio and getting more and more temperature and pressure from the burning fuel and air? There is only so much energy in the chemical bonds of the fuel, so that sets an upper limit. The big question is, how much of this fixed amount of energy can be made to do work on the piston, and how much will go out the exhaust as waste heat? An engine with no compression wastes most of this energy as exhaust heat (like a campfire), but as compression ratio is raised, more of this limited amount of energy is directed to the piston and less is lost out the exhaust. The power increase to be had from raising compression is therefore subject to diminishing returns: raising the compression ratio by one point starting from 1:1 gains us much more than raising it by one point starting from 16:1. There is another effect that prevents compression from yielding further benefit at very high numbers: heat loss. The higher the combustion temperature, the faster heat is pushed out from the hot gas into the necessarily much cooler piston and cylinder head. At some very high compression ratio, the dwindling gain from the increase is canceled by the rising heat loss. There is yet another reason we don't just keep on increasing compression: at high temperatures, gases begin to lose their ability to translate heat into pressure. At normal combustion temperatures, most of the energy in the gas exists in the form of rapid molecular motion—the classic example being a room full of perfectly

72

elastic billiard balls, zooming, colliding, and bouncing off the walls of their container. The useful pressure on the top of the piston arises from the zillions of tiny collisions as fast-moving nitrogen, carbon dioxide, and water molecules in the combustion gas bounce off the piston crown. But as we push gas temperature even higher, the energy in the gas begins to take on other forms which contribute less to pushing the piston. The violently agitated gas molecules now contain significant energy in the form of rotation and various modes of molecular vibration—energy that doesn't push the piston. This loss is said to arise from an increase in the specific heat of the gas—that is, at higher temperatures, it takes more energy added by combustion to raise the temperature of the gas by one degree. If a given molecule vibrates hard enough—its two or more atoms bouncing back and forth against the chemical bond energy holding them together, like masses joined by a spring—it can actually come apart. This is called dissociation, and it is an energy loss because energy has been taken from the gas to break the molecule's own bonds. This energy may be recovered if the dissociated atoms recombine promptly, but the likely outcome is that this won't happen until the piston has moved down somewhat on its power stroke, reducing the temperature of the combustion gas enough that recombination becomes more likely. But by then the moment of peak pressure has passed, and the energy "repayment" of recombination comes too late to help move the piston much. Or recombination may not take place until the hot gas has expanded into the exhaust pipe. Either way, our engine has lost a tiny bit of its peak pressure, and therefore makes less power. Okay, those depressing losses exist, but it remains that Diesels benefit from the fact that they can, and must, use a high compression ratio. Spark-ignition engines could do this too, but for one fact—detonation. Because the fuel in a spark-ignition engine is mixed with air for a long time before it burns,


chemical changes that are driven by heat have time to change some of the fuel into a sensitive explosive. As the charge of mixed gasoline and air burns after the spark, the hot, expanding combustion gas compresses the yet-unburned mixture, thereby heating it. If this heating goes far enough (and higher compression makes it hotter), some heat-altered bits of mixture go off by themselves in the process known as detonation or combustion knock, and then burn at the speed of sound. When this sonic wave hits the inside surfaces of the engine, it produces the 'ping' or 'knock' associated with detonation. Normal pump gasoline can detonate in spark-ignition engines at compression ratios as low as 8.5:1, or about half the compression ratio of a heavy-duty Diesel. This is a major reason for the spark-ignition engine's higher fuel consumption—it cannot safely use a high compression ratio. Diesels cannot detonate because their fuel is in the combustion chamber for too short a time before it ignites; there is too little time for the chemical changes that must precede detonation.

A second reason for Diesel efficiency has to do with the specific heat effect mentioned above. Because a Diesel injects its fuel only an instant before combustion begins, there is limited time available for fuel-air mixing. To assist this process, it is normal for Diesel engines to inject at full throttle only enough fuel to react with about 80% of the air charge. The extra air is present just to increase the chances that a given fuel molecule's hydrogen and carbon atoms will all find partners in the mad dance of combustion.

That extra air also has the effect of reducing peak combustion temperature, thereby avoiding some of the specific-heat and dissociation loss (mentioned above) normally associated with high-temperature combustion.

Add to this the fact that most engines are not on full throttle all the time. Diesel engines are not air-throttled, so a Diesel's cylinders always take in a full charge of air. At lower load, only the fuel is throttled—never the air. Thus, at part-throttle, a Diesel is burning its fuel in the presence of a very large amount of extra air. This lowers the bulk temperature of the combustion gas, thereby deriving even more benefit by avoiding specific-heat and dissociation loss. In the jargon of modern engineering, a Diesel engine is a naturally lean-burn device. Lean-burn can be achieved in spark-ignition engines, but it is neither as natural nor as easy as it is in Diesels. Mixtures of gasoline and air can be ignited by spark only over a range from 10:1 (rich) to 18:1 (lean), so in order to achieve extreme lean-burn (like 24:1) a spark-ignition engine has to create a mixing zone near its spark plug rich enough to be ignited. This stratified-charge operation is achieved by spraying the gasoline toward the spark plug from a special injector—a process that has some similarity to what happens normally in Diesels.

Back in the late 1980s the automotive world had a brief romance with two-stroke engines. A major reason for this is another piece of jargon called pumping loss. In a normal, air-throttled automotive gasoline engine, the usual load condition is 10% or less, with more power being used only for on-ramp acceleration and the like. With the engine throttled in this way, every time a piston falls on its intake stroke, it is pulling a fairly strong vacuum above itself, and it takes work to do this because the piston is, in effect, compressing the atmosphere. In simple two-stroke engines, the crankcase is used as a charge air pump, so when the engine is throttled, crankcase pressure falls just as low as the pressure above the pistons. As a result of having nearly the same pressures above and below the piston during the intake process, such two-strokes have very little pumping loss.

How is this different from a Diesel, which is never air-throttled? It is not different. Therefore, at part-throttle a Diesel engine also benefits from reduced pumping

73

loss, as compared with conventional spark-ignition engines. For all these reasons, therefore, Diesel engines are more fuel-efficient than their competition. Intensive work on gasoline-fueled, spark-ignition engines is closing the gap somewhat, but the compression ratio effect is a biggie, and detonation will keep gasoline engines relatively inefficient as long as pump gas is as nasty as it is (the old US Army Air Corps, circa 1936, had better gasoline than the stuff now at the pump). Another small effect also contributes to apparent Diesel fuel economy. I say "apparent" because if fuel were sold by the pound instead of by the gallon this effect would not exist. That is the fact that Diesel fuel has a higher density than most gasolines. A usual density for Diesel fuel is .85 gram per cubic centimeter (water is 1.0 gram per cubic centimeter), while that of gasolines is closer to .75 gram. This gives the Diesel operator roughly 13% more fuel mass per gallon. The debate rages on as to whether the automotive future belongs to gasoline or Diesel. Gasoline engines use more fuel but are—at least for the moment—easier to clean up. Diesel engines use significantly less fuel but remain—again, at least for the moment—controversial sources of nitrogen oxides and particulates. A lot depends on how government policymakers evaluate the relative importance of (a) limiting petroleum imports and (b) limiting emissions. That's politics! Turbo Diesel Register Issue 44
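The diminishing-returns argument in the article above can be put in numbers with the textbook air-standard cycle efficiency, 1 - 1/r^(gamma-1), which depends only on the compression (and expansion) ratio r and ignores the heat-loss, specific-heat, and dissociation effects just described. The short Python sketch below pairs it with the article's peak-pressure rule of thumb; treat the results as an idealized illustration, not as real-engine figures.

```python
# Air-standard (ideal) efficiency vs. compression ratio, plus the
# "peak pressure ~ 100 x compression ratio" rule of thumb from the text.
GAMMA = 1.4   # ratio of specific heats for air (ideal, low-temperature value)

def ideal_efficiency(r):
    """Air-standard cycle efficiency; real engines fall well short of this."""
    return 1.0 - 1.0 / r ** (GAMMA - 1.0)

def peak_pressure_psi(r):
    """Rule of thumb from the article: about 100 psi per point of compression."""
    return 100.0 * r

for r in (2, 8.5, 17, 18):
    print(f"r = {r:4}: ideal efficiency {ideal_efficiency(r):5.1%}, "
          f"peak pressure ~{peak_pressure_psi(r):5.0f} psi")

# Gain from one more point of compression, low vs. high:
print(f"1:1 -> 2:1 gains {ideal_efficiency(2) - ideal_efficiency(1):.1%}")
print(f"17:1 -> 18:1 gains {ideal_efficiency(18) - ideal_efficiency(17):.1%}")
```

Real engines fall well short of these ideal numbers, but the shape of the curve is the point: the step from 8.5:1 (pump-gas spark ignition) to 17:1 (a typical Diesel) is still worth roughly ten points of ideal efficiency, while each additional point of compression beyond that buys less and less.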


Official Cure-Alls This issue's theme is "placebos," so I thought I'd review some official cure-alls that have come and gone. Science and engineering people know that we approach truth by successive approximations—our knowledge increases, but it is never complete. We hope to learn enough about the problems we face to be able to deal with them. In politics, answers must be presented as total solutions. This makes it very hard for regulators such as the EPA to serve both the physical facts and their political masters. For a time during the '80s, the Diesel engine was hailed as the powerplant of the future because it was highly fuel efficient and emitted little carbon monoxide. Automakers hastened to cobble up Diesel heads to put on their existing gasoline engine blocks. A few retired folks headed for Florida in such autos, getting a refreshing 30 mpg. Sadly, the gas engine blocks proved too light for Diesel operation, and the emissions researchers decided that Diesel engines were at least as "atmospherically nasty" as the gasoline variety. Carcinogens hitch rides into our lungs aboard the tiny carbon particulates found in Diesel exhaust, and the NOx produced when injected fuel sprays light up was deemed a serious source of photochemical smog. Learn a new thing every day. In the aftermath of the '73 oil shock we were told that biomass-derived fuels would set us free from politically shaky dependence on imported petroleum. Midwestern farmers loved this, with its promise of millions of acres of corn transformed into ethyl alcohol. Uh, well, there is the problem of fuel system parts corrosion—easily fixed by buying a new vehicle equipped with an alcohol-tolerant fuel system. Another bother—storage of alcohol-bearing fuels leads to formation of "water bottoms"—layers of alcohol containing dissolved water, which have separated from the lighter fuel above. This is a case of not being good to the last drop. Those

last few gallons could be something of a public relations problem, seeing as how water doesn't burn, and even pure alcohol requires a much richer mixture than gasoline in order to ignite and burn properly. Suddenly everything was solved by MTBE, or Methyl Tertiary Butyl Ether! What happened to environmentally and politically correct biomass fuel from corn? When policy changes, so do the explanations. Instead of putting an end to dependence on imported oil and revitalizing those vacant towns in the Corn Belt, the idea sort of morphed into using small amounts of low-energy fuels such as alcohols and ethers to lean out the fuel mixture of older, high-polluting cars. This would slightly cut the amounts of CO and UHC spewing into the urban atmosphere and thus benefit us all. And so our new gasolines all took on the sharp, pungent smell of MTBE. Unlike alcohol, MTBE doesn't dissolve limitless amounts of water, and it doesn't eat up fuel system parts, either. Phew! Saved by Science! The stuff has a pretty high octane number too, so gasoline-burning cars and trucks didn't knock and ping the way they did back in '77 when gasolines first began to go downhill fast. And so matters hurtled into the future for a few years. Gas pumps looked the same except for the little sticker that proclaimed "contains MTBE." Meanwhile, although MTBE is not limitlessly soluble in water, it is a little bit soluble. And so ground water containing dissolved MTBE from spills and leaking underground tanks trickled here and trickled there. After a few years it reached the wells of quite a few California towns and cities. Oh dear, what have we here? This water smells and tastes bad! Water tests confirmed the presence of not-good-for-you MTBE. Suddenly MTBE, the former darling and savior of urban air quality, became an evil witch, and was banned. The plants that had been rushed into MTBE production were shut down. New plan necessary.

74

On another stage, the drama of fuel lead was being presented. Tetra-ethyl lead (TEL) is a deadly poisonous organo-metallic compound that has an absolutely miraculous ability to stop combustion knock in spark-ignited gasoline engines. Some go so far as to say that TEL may have made the crucial difference in the Battle of Britain, increasing the power of Britain's Spitfire and Hurricane fighters enough to clear English skies of German bombers. A drop of the pure stuff on your skin will kill you, but from the 1930s through the 1970s, motor gasolines contained up to 4.3 grams of TEL per gallon. Our parents used to ask for "ethyl" at the gas pump—a common name for octane-boosted pump gas. The EPA chose the catalytic converter as its major anti-emissions technology. The converter, which reacts exhaust pollutants back to harmless or at least legal forms, is easily put out of action by leaded fuel. Now, anything that could damage converters was politically excommunicated. A schedule for lead reduction cut fuel lead to zero by the mid-'80s. To help us to hate lead more easily, studies of urban air quality were quickly run up to reveal that the presence of airborne lead was significantly depressing the intelligence of children of lower-income families. Lead must go! Meanwhile, the sharp smell that comes from a new car's catalytic converter is sulfuric acid. To prevent knock with the lower octane of reduced-lead fuels, compression ratios were dropped from the previously usual 10:1 to more like 8.5:1. This also had the effect of significantly increasing fuel consumption, but it is our clear duty to save the children. Lead went. As the second phase of compensating for the lost octane number, the fuel companies began to add higher percentages of knock-resistant aromatic compounds to their brews. In my corner of the world, this news arrived in the form of the plastic float in my van's carburetor soaking up the new aromatics and sinking to the bottom of the float bowl. My exhaust became black; my engine's


idle became the classic sputter-and-stall. I bought a new float for $7, but you can bet thousands of new carburetors were sold to motorists who needed only a float. From an octane standpoint, the loveliest of the aromatic hydrocarbons is benzene, a ring of six carbon atoms, each carrying one hydrogen atom. Alas, a famous study of the Istanbul shoe industry by the dedicated physician Muzaffer Aksoy identified benzene as the cause of many excess leukemias among the population of Istanbul shoe workers. Benzene is a powerful solvent for rubber (and plastic carburetor floats, fuel lines, etc.). Mixed with rubber it becomes an adhesive strong enough to hold shoes together. Working in shops whose atmospheres were heavy with benzene vapor made the shoe workers sick. Benzene was tagged as a carcinogen and its use surrounded by a thicket of regulations. But there are lots of other aromatic compounds—toluene, xylene, cumene, etc. All are based on the benzene ring structure, but with one or more side-groups in place of a hydrogen or two. Are they carcinogenic? Some say yes, some say no, and others say maybe. Besides, the EPA couldn't very well make all hydrocarbon chemistry illegal, could it? And so although you can't legally put much benzene in gasoline any more (some occurs naturally in casinghead petroleum), you are welcome to double the percentage of other aromatic compounds – now often as high as 40%. As with catalytic converters, the no-lead era exchanged one problem for possibly another. DDT was once advertised as "harmless to human beings and pets," and to this day it has its loyal defenders. And what about other opinions? One of these holds that urban incinerators generated much of the airborne lead as a result of disposal of old building materials, some of which bear layers of lead-based paint. Science? Or politics? When the experts get up to give their testimony, we notice that they all went to good schools and got good

marks. But the opinions they give are quite different. Does science selflessly advance human knowledge, or is it just opinions-for-hire? The next panacea scheduled to save the world was the electric car. What a beautiful prospect—instead of long waits in the gasoline lines, and sitting for hours on jammed freeways breathing hot engine exhaust, we can just hum our way home, plug in the charging cable to the convenient wall socket in the garage, and live perfectly ever after. Instead of paying $15-20 to fill the tank with smelly gasoline every week, why, it'll hardly cost anything to drive—pennies! The politicos loved it too—the generating plants for all the extra electricity those cars would use could be located somewhere else. Shall we call it "pollution relocation?" A future of zero-emissions electric vehicles would leave the cities as fresh as a mountain meadow. Meanwhile, the electric plants could be located . . . Well, where would we locate them? How about the Four Corners? Nobody that votes lives there—it'll be ideal! While all this excellent planning was going on, California's electric power crisis was brewing. How would that crisis have looked if, say, 25% of auto transportation's horsepower-hour requirement had been added to electricity demand? Now the sums. Electricity-generating plants are about 30-35% efficient, and long-distance powerline transmission efficiency varies from a low of 85% to a high in the 90s. Neglecting any losses involved in stepping line voltage down to battery voltage, we know that the battery charge/discharge cycle is doing well to achieve an overall 70% efficiency. Electric motors get hot, which tells us that less than 100% of the power in becomes power out. Let's give them an 80% efficiency. There will probably be a 15% mechanical loss in powering the vehicle's wheels, so let's give that stage a generous 90% efficiency. To get overall efficiency, we multiply all the above together: .33 x .88 x .70 x .80 x .90 = 15% total system efficiency.
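The chain multiplication is easy to keep honest in a few lines of Python. The stage efficiencies below are simply the ones assumed in the paragraph above; change any of them and the running product updates.

```python
# Fuel-to-wheels efficiency chain for the battery-electric case,
# using the stage efficiencies assumed in the text.
stages = {
    "generating plant":         0.33,
    "transmission lines":       0.88,
    "battery charge/discharge": 0.70,
    "electric motor":           0.80,
    "mechanical drivetrain":    0.90,
}

overall = 1.0
for name, eff in stages.items():
    overall *= eff
    print(f"after {name:26s}: {overall:5.1%}")

print(f"\ntotal fuel-to-wheels efficiency: {overall:.1%}")
```

Each stage looks respectable on its own; it is the product of the whole chain that tells the story.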

75

Why, my goodness me! That number is less than the efficiency of ordinary car and truck engines, whether gasoline or Diesel! That means it’s more fuel efficient to burn the fuel in the vehicle than it is to burn the fuel to make heat, use the heat to raise steam, use the steam to turn a generator, use the electricity to charge a battery, discharge the battery to turn an electric motor, and finally (getting tired now) use the electric motor’s power to drive a car. The above ignores other difficulties which are not insignificant. Electric cars need heaters in wintertime. Ever paid for electric home heating? You get the idea. And air conditioning—they used to tell us that auto air conditioners took 15 horsepower to run. Twelve hundred pounds of lead-acid batteries built into a molded-fiberglass chassis that doubles as a battery case give a range about the same as that of electric cars of 1910—60 miles. Better batteries? Yes, they exist, but are either expensive or contain disagreeable stuff like molten sulfur at 900 degrees, or both. Unfortunately, no battery can store a respectable amount of energy when compared with an equal weight of liquid hydrocarbon fuel. Then why are hybrid gasoline-electrics succeeding? They derive all their power from a small, efficient combustion engine which is too feeble to provide rapid acceleration of the kind US drivers are accustomed to. When the main engine can spare some power (in town, in traffic, etc.) it charges the battery. When efficient cruising is required, the main engine provides it (small cars require only 15-30-hp at steady highway speed). When more acceleration is needed than the main engine can provide, both combustion and electric power are used. The electric part of the power unit is in effect a rubber band which is “cocked” when the main engine can spare the power, and its “snap” is added to the feeble push of the small, economical main engine to result in respectable acceleration. Hybrid vehicles can, and probably will, be built using other forms


of temporary energy storage such as flywheels or compressed air. Right now, the word "electric" has been made iconic by years of political advertising. Tomorrow's policy may differ. If you think that doing the math for the electric car's total system efficiency was enlightening, let's not even talk about fuel cells. It is to be expected that in the future, technology will continue to provide improved responses to our needs. The nature of politics will require our leaders to seize upon each new technology as a complete solution. Because complete solutions are not attainable, there will be various disappointments. Placebos, cure-alls: perhaps it all depends on how the politicos spin the information. Given the choice between laughing and crying, I choose to chuckle. Turbo Diesel Register Issue 45

76


What is a Hemi? What is a “hemi”? We all know that hemi is short for hemispherical, or half a sphere. No engine today has a completely hemispherical combustion chamber, because so deep a chamber would provide a very low compression ratio. Therefore it is correct to use the description coined by Chrysler engineers during WW II, and call the modern hemi chamber a spherical segment chamber—shallower than a full hemisphere. The two valves in a hemi, or spherical segment combustion chamber, are disposed with a fairly large angle between their stems. During the 1920s and ‘30s, when deep, full hemi chambers had their heyday, this “valve included angle” was often as much as 90 or even 100 degrees. It was explained at the time that angling the valves in this way provided much more room “on the diagonal” for large valves than was available if the two stems were parallel, with their valve heads contained inside the cylinder’s bore circle. This was important at a time when valve area had to make up for a lack of sophisticated port shapes. At the time, when full and deep hemi chambers were adopted in the engines of racing cars and motorcycles, poor fuel quality limited compression ratio to the vicinity of five- or six-to-one. This suited the large volume of such a deep chamber. Around 1930, when air-cooled aircraft engines began to be seriously supercharged, their compression ratios stabilized around 6.5:1 because, even with 100-octane fuel that came into use after 1936, that was about all they could stand without detonating. As motor gasoline octane rose, the combustion chambers of unsupercharged engines had to become smaller to take advantage of the higher compression. The easiest way to raise compression was the use of high-domed pistons. Perceptive designers realized that their use was a dead end—for two reasons. First, a deep chamber and tall piston dome increased chamber and piston surface area, making engines run hot.

Second, the chamber that resulted from a high dome became more and more like the skin of half an orange after it's taken from the juicer—thin and very spread out. Such a chamber took longer to burn than did a more compact chamber. Therefore progressive designers of the later 1930s to 1950s began to swing valve included angles to smaller values and to make the classic hemi chamber shallower. This allowed the piston to be flattened out, reducing its surface area and thereby solving its heat problem, while the shallower chamber reduced its surface area as well, making the head run cooler. Meanwhile, the new science of airflow measurement revealed that there was something special about the hemi chamber shape. Airflow is compared between ports of differing styles and sizes by expressing it in terms of cubic feet per minute, per square inch of port throat area. When this is done, hemi chambers are seen to flow more air for their port size than do parallel-valve chambers or pent-roof four-valve chambers (four valves are used in many engines today because they overpower their flow deficiency with sheer valve area). The reason why hemi chambers flow so well is that the curving inner head surface surrounding the intake valve performs some of the function of a diffuser, slowing the air as it comes out from under the valve and efficiently converting its velocity energy into cylinder-filling pressure. This phenomenon is an interesting subject in itself. The idea of angled valves in a part-spherical combustion chamber is at least 100 years old. George Weidely of the Premier Motor Co. designed an engine with inclined overhead valves in 1905, and surely there were others. The hemi idea received new emphasis during World War One when a Dr. Gibson, working at Britain's Royal Aircraft Factory, enunciated basic principles for a successful air-cooled engine cylinder. One of these was that the fewer holes you make in a hot cylinder head, the

77

less it warps. This meant that the ideal number of valves was two. He stated that further, the stems of these two valves should be angled apart far enough to permit the placement of cooling fins between them to cool the hot center of the combustion chamber. The truth in these rules reverberated through the 1920s and ‘30s, causing an increasing number of racing and sports auto and motorcycle engines to use this chamber type. By 1924 the hemi chamber was adopted as ideal for air-cooled aircraft engines. In 1927, Charles Lindbergh would fly solo, west-to-east across the Atlantic to Paris, behind a Wright J5C engine featuring hemi combustion chambers. Even with its advantages, the hemi chamber took time to assert itself on the ground. An early problem was combustion itself. Just after WW I, engine pioneer Harry Ricardo began to license his concept of “squish” combustion, as applied to side-valve engines. The side-valve places its two valves, stems pointing downward, beside the piston. The combustion chamber includes the bore circle and the extra area above the two valves. Ricardo’s squish concept consisted of shaping the head so that the piston nearly touched it at top center. As the piston approached the head, the mixture between head and piston would be squished-out rapidly into the main chamber at the side, located above the side valves. This rapid jet of mixture provided turbulence that accelerated combustion, allowing such engines to safely run on low-octane fuels. The squish effect worked at all engine speeds, giving such engines great flexibility and pulling power. Overhead valves were at first used mainly in racing, because the extra complexity of their mechanism and the difficulty of lubricating them added to an engine’s cost. Also, in racing, power at higher rpm was more important than middle-rpm pulling power. There was no easy way to implement the squish concept in a hemi-chambered OHV engine, so side-valve engines continued to be produced for non-sporting vehicles


for many years. Ford’s famous flathead V8 and BSA’s M20 are outstanding examples.

When water-cooling was adopted on racing engines, the tilted valve chamber—whether with two or four valves—was retained because this arrangement allowed all intake valves to be operated by one cam and all exhausts by the other.

The lack of squish in OHV chambers, like the hemi, for a time limited their compression ratio, and made them liable to knock at lower rpm. They thus acquired the reputation of being "rough." Gradually through the 1930s it was discovered that turbulence could be generated by other means than squish—angling the intake port imparted a rotary swirl to the incoming charge, speeding up combustion and allowing safe use of higher compression ratios. Something new was added just after WW II. A Polish engineer, Leo Kuzmicki, working at Norton in England, added squish to a hemi chamber by building up those areas of the piston not directly under the valves or spark plugs. As these areas approached the head on compression, they generated squish jets. This speeded combustion even more, allowing another round of compression and power increase. When the British auto industry sought to assert itself in Grand Prix racing, the 4-cylinder Vanwall racing engine was based on Norton's squish hemi cylinder head. The Vanwall's success made Britain again a center of F1 development, which it remains to this day. Traditional US auto engines with OHV have usually, for cost reasons, placed all their valves in a row with stems parallel, so that both valves must be small enough to fit into the bore circle. This simplifies machining and allows a single rocker-arm shaft or axis to serve both intake and exhaust valves. Over time, this evolved into a "bathtub chamber" of roughly oval shape, containing both valves and a side-mounted spark plug, and surrounded by flat squish area. This was the combustion chamber of choice through most of the "V8 era" of the US auto industry.

As piston speeds rose, three schemes presented themselves as means of providing the necessary increase in valve area. The simplest was to increase the bore and reduce the stroke, providing a wider circle in which to place the valves. Slightly more complicated was the "cant-valve" scheme, which by tilting valves slightly and adjusting rocker-arm angles to suit, could fit in slightly larger valves and improved port shapes. Finally, adoption of a "spherical segment" chamber allowed much larger valves but required a complex two-shaft rocker-arm assembly—or overhead cams.

Chrysler had systematically pursued higher engine efficiency through higher compression ratio beginning in the early 1920s, and the advent of the effective anti-knock fuel additive tetraethyl lead reinforced this trend by making better fuels available. Ultimately such work reached a dead end. A side-valve combustion chamber is inherently inefficient because of its extra surface area, and cannot reach a high compression ratio without limiting the flow area between valves and cylinder. The best compromise between the competing claims of compression and airflow was reached at about 6:1. At the same time, General Motors was known to be conducting its own high compression studies with overhead valve test engines that could easily reach ratios of ten or even twelve-to-one. Higher compression meant lower fuel consumption as well as increased engine torque. In the later 1930s, despite the dampening effect of the Great Depression, Chrysler engineering pursued similar studies, seeking also to improve high-speed engine breathing and combustion by studying the hemi chamber that had long been the norm in radial aircraft engines. At this time most auto makers built in-line engines, but the longer a crankshaft is

78

made, the more vulnerable it becomes to torsional oscillations. So long as auto engines were limited to rpm barely above 3000, in-line sixes and eights were okay; but it was clear that more power would require either larger, heavier engines, or new designs capable of safe operation at higher rpm. Ford had already produced a V8 with a short and stiff crank with only four crankpins. Even though the Ford V8 was still a side-valve, its rpm-capable crankshaft pointed to the future. Chrysler engineers had been told that the spherical segment chamber was rough-running and suited only to high speeds. This information was not wrong—just out of date. With what had been learned in the later 1920s and ‘30s, Chrysler engineers were able to make their hemi test engines operate smoothly while using less fuel than any side-valve. When war began in Europe, the US government accelerated military preparation. In 1940 Chrysler began design of a liquid-cooled aircraft engine in accordance with guidelines of the US Army Air Corps. The Army believed at the time that future fighter aircraft (they were then called “pursuits”) would need liquid-cooled engines of minimum frontal area, operating at high combustion pressure achieved by supercharging. Chrysler’s plan called for placing even more than 12 cylinders behind a tiny frontal area circle of only 33 ½ inches. Their IV-2220 design consisted of two 60-degree V8s placed end-to-end in a common crankcase. This would neatly solve the problem of crank torsional vibration by taking power from a single central gear to which both V8 cranks would bolt. To provide the engine with the stiffness its length required, they made the crankcase very deep, such that almost the entire length of each cylinder was submerged in it, leaving only the cylinder head projecting. Even so, the 2220’s crankcase was long and vulnerable to flexure, being less than 12 inches deep at the sides and 16 inches in its center. Along each row of heads would be bolted a cambox-and-rocker


assembly to operate the two large sodium-cooled valves in each cylinder. Valve included angle was a thoroughly modern 45 degrees. Early development of this unusual engine did not move quickly, as there were chronic problems with crankcase and cambox cracking. Although by 1944 the engine did reach the design power of 2,500 horsepower, and flew at nearly 500 mph in a special P-47-based test aircraft, it was clear by then that existing engine types were enough to win the war and that postwar development would shift to jet engines. In any case, on March 20, 1942, Chrysler contracted to build thousands of air-cooled radial engines for the B-29 in a huge new facility near Chicago—work that absorbed large resources. There was little left for the IV-2220, which was therefore cancelled. Once the war ended, auto engine development resumed. Oldsmobile released its "Kettering" Rocket 88 OHV V8 in 1949, and Chrysler was close behind with its 331 cubic inch "Firedome" V8 in 1951—but with the spherical segment chamber that had come from wartime 2220 work. In place of that engine's side-mounted spark plugs, single overhead cams, and roller rockers, the Firedome had pushrods, twin rocker shafts, and single central spark plugs. Covering all this mechanism were the distinctive wide, slope-sided valve covers, each pierced by four spark plug tubes. Today, such valve covers shout Hemi.

We revere the name "Hemi" because of its use in Chrysler's famous Firedome and later V8s, which turned out to be uniquely suited to oval track and drag racing. To this very day, special racing hemi engines (containing no Chrysler parts) continue to be produced for Top Fuel drag racing. Supercharged and fueled with nitromethane, such special hemi engines are claimed to produce 6000 horsepower during their sub-5-second runs down the 1320. Auto marketing people know that distinction is hard to achieve in a world of identical-looking small lozenge-shaped cars (re-read page 6). Establishing a brand identity is almost impossible in such a sea of sameness, so motivational researchers have turned to classic identities such as the VW Beetle, the BMC "Mini," and Ford's original Thunderbird. If a great name already lives in the minds of millions, that's free bacon for whoever chooses to use it. This lies behind Chrysler's re-introduction of their premium 300C automobile and its "Hemi" powerplant. The newly-designed Hemi now being marketed by Chrysler employs a considerably modified combustion chamber. This design continues the pair of angled valves, twin rocker shafts, and central spark plug of the original, but fills in those parts of the chamber not occupied by valves to reach high compression without a tall piston dome. This is another way of accomplishing what Norton did with its squish piston, but the extra material becomes part of the head rather than of the piston. The basic ideas behind the hemi chamber are very old, and have made it useful in many quite different applications. Its usefulness continues. Turbo Diesel Register Issue 46

79


Staying in One Piece As you put your foot down and hear the thin rising whistle of your turbocharger spooling up, do you ever wonder how the rotor, incandescent from its high temperature, can stand the centrifugal stress of spinning over 100,000 rpm? The simple answer is "nickel-based jet engine superalloys," but that doesn't tell us anything about how such materials work. Just having a high melting point—as such materials do—is not enough. There are other problems to be solved. One is simply strength. Metals are composed of a jumble of tiny crystals, usually oriented every which-way. Metal atoms bond to each other by sharing their more or less plentiful bonding electrons. Because these electrons form a kind of gas in metal crystals, the forces that hold atoms together are delocalized—that is, the material holds itself together even if its shape changes. In a crystal, the atoms assume ordered positions, but yielding takes place as sheets of atoms slide across each other, or as defects in the crystal's order propagate through it. One way to strengthen metals is to mix in atoms of one or more other metals having a different size from those of the basic material. The local strain created by their presence makes it more difficult to make sheets of atoms glide past each other, or to push crystal defects from place to place. Adding other metal atoms in this way is called solution strengthening. Unfortunately, alloying usually reduces the melting point. This brings us to another problem of high-temperature materials subject to stress—creep. Creep is the slow yielding of a material under stress, at temperatures that are more than halfway to that material's melting point. The classic example of creep is the slow movement of glaciers. Snow falls on the glacier, increasing its thickness, and is compacted over time by its own weight into ice. Gravity acts on this thickness of ice, causing it to slowly spread like Silly Putty. How can it flow without melting? In the lattice of each ice crystal, water molecules cling to each other, tending

to keep the material solid. But each molecule must also resist the force of gravity, adding a small bias to the forces acting on it. On the average, the vibrational energy of a given molecule is much less than what it would take to dislodge it from its position. But when, through the statistics of energy distribution, a given water molecule does get enough energy to change position, it usually changes in the direction that relaxes the forces acting on it. It "makes itself more comfortable." Summed over the millions of tons of ice in the glacier, the result is a slow net motion—creep. The higher the temperature of the material, the more rapid is the creep in response to stress. In early jet engines, creep was so rapid that new turbine blades were required after as little as 25-100 operating hours. Blades grew in length until they either scraped on the housing in which they spun, or necked down so much that they failed in tension. The blades weren't melting, for they were operating far below the actual melting point of their materials. They were creeping. Metallurgists soon discovered means of making materials resist creep. They could, for example, include in the material enough carbon to form zillions of tiny, dispersed particles of metal carbides. Because these particles are extremely hard (you may have heard of TiC, titanium carbide, used as a wear-resistant hard coating on metal-cutting tools), they act as physical barriers to the motion of crystal defects through the material. They can also make it very brittle—a very un-useful quality. A major reason for the usefulness of metals is that they can yield under stress, rather than just snap off. This allows metals to survive sudden loads well, and makes it possible to bend, forge, and otherwise form them into desired shapes. Losing this quality is usually a disaster for a metal alloy. Another method of providing creep resistance was to cause a second phase to form within the matrix of the metal as a whole. So-called "intermetallic"

80

compounds such as nickel or titanium aluminide form small islands within the metal, acting like extremely hard bricks in a matrix of softer, more ductile "mortar"—the original alloy. This kind of material behaved as rigid and strong up to a very high stress, then yielded very slowly as the "mortar" permitted some motion—without allowing cracks to suddenly shoot through the material. Promising though this was, it too had problems. In service, the material would at first display all the strength and creep resistance designed into it, but after some hundreds of hours at operating temperature, failures occurred that should not have taken place. The material had lost strength. Samples of the failed blades were cut through and the cut surfaces were then highly polished. Special reagents were used to etch the polished surfaces, revealing the microstructure of the metal to the metallographic microscope. It was found that during long operation, the islands of hard intermetallics were growing in size and becoming fewer in number. This is a familiar thing for anyone who keeps ice cream in the refrigerator too long. Ice cream is cold-worked during freezing to prevent the formation of large ice crystals that would give it a coarse, granular texture. When it is kept too long in the fridge, fairly close to its melting point, the larger of these tiny crystals grow at the expense of the even smaller ones. By the time I think "Hey, I could have some ice cream!" the large crystals may have grown so big that the formerly creamy stuff has acquired a gritty texture. Something similar happened to the islands of hard intermetallics in turbine blade materials. Since the strength of the material depended on having a great many tiny "bricks" to give it hardness, when those bricks became larger and fewer from prolonged exposure to high temperature, the material lost strength. Because of this, the turbine blades of some British jet engines at one time had to be removed and re-heat-treated every 900 hours. This heating caused the intermetallics to go


back into solution, and the following cooling schedule would cause the reformation of the desired size and number of precipitated intermetallics. Later, means were found by which to greatly slow such changes, giving the resulting alloys much longer service lives. Another problem was that while a given alloy might perform very well when very hot—as in a turbine blade—when used in a cooler part of the engine, such as in a turbine rotor disk, that same material would be unacceptably brittle. This is one reason why, in aircraft turbine engines, the blades and their rotor disks are seldom cast in one piece. Instead, the disk is made from a material whose properties are optimum for its operating temperature, while the blades that are fitted into the fir-tree slots in its rim are made from something quite different, whose properties become useful only at higher temperatures. For commercial power-generation turbines, special materials had to be developed to allow blades and disks to be cast in one piece. It is such materials that have been used to make turbocharger turbine wheels. Anyone who has been around automotive engines has heard the term “nodular iron.” Crankshafts for low-to-moderate duty are plain old cast iron, but to resist higher stress levels nodular, or ductile iron will be specified. For the highest duty crankshafts, forgings are almost always employed. We all know that ordinary cast iron doesn’t bend much before it breaks—it is brittle. Nodular iron has more ductility—the ability to change shape under stress rather than simply to fracture. The difference in the behaviors of these two materials arises from how they deal with small cracks. Cast iron contains a lot of carbon. In the molten state, this carbon is dissolved in the iron, but as the melt cools after casting, the iron becomes less able to hold this much carbon in solution. The carbon is therefore precipitated out of the melt, forming long, needle-like crystals that branch in all directions. This “acicular” carbon is like a superhighway for any

crack that forms under stress, giving it an easy path of least resistance. This is why, as we sat in study hall, our desk lids propped open against the tops of our heads, perusing a concealed Hot Rod magazine, we learned that if we wanted to hop up certain engines, we had to look for the particular manufacturer’s symbol on the crank that would indicate that it was of the more durable nodular iron. In nodular iron, changes are made to the chemistry of the melt and to the heat treatment, which cause the carbon to assume the form of ball-like nodules rather than long needles. This eliminated most of the easy pathways for cracks, causing them to remain dormant for much longer periods of time. Similar undesirable needle-like phases can also form in high temperature superalloys, rendering them brittle. As was done for acicular carbon, means were found by which to convert the crack-inviting needle form into a less harmful blocky form. In such ways it was found possible to create alloys which could be cast to make turbine wheels with integral rather than separate blades. It is from such materials that turbocharger turbine rotors are cast. The design of metal alloys is very much like tire engineering. Because everything affects everything else, it is seldom possible to go after an increase in a single property (such as ductility, or creep resistance, or oxidation resistance) without having a potentially harmful effect on one or more other, equally essential properties. Therefore, once a useful alloy is developed and engineers become skilled in using it successfully, it tends to remain in production for many years. Material properties also depend on processing during manufacture. Making a flaky pastry requires that the shortening be added cold, so after blending it remains in the form of small fragments rather than melting uniformly throughout the dough. When baked, the fragments separate the pastry into layers, causing your baking skill to be

81

admired. Some superalloys are melted just as in the case of general-purpose steel alloys. The right amounts of the various elements are put into the pot, the induction heating is switched on, and presently the material melts and can be brought to the desired temperature for pouring. After cooling, it can then be heat-treated as desired. Other advanced materials contain elements that would, if melted in this way, not dissolve into the others. They would either remain separate, or would tend to form undesirable phases. In these cases, the material may be prepared as an extremely fine powder mixture of its elements, then pressed into final shape, followed by heating (“sintering”) to cause it to fuse into a solid part. This, by eliminating outright melting, avoids the segregation of the insoluble elements. As long as this kind of material is used at temperatures too low to bring about segregation of the insoluble elements, it can retain remarkable properties. This is powder metallurgy. Alloying and heat-treating can create within a metal the desired phases and can precipitate out useful small particles that act as pins to prevent movement. If that works, then why not just make the kind of particles you want, and then add them to the material? This is done— very hard, temperature-resistant oxide particles are prepared in the desired form and can be added to “powder parts” during manufacture. This usefully widens the range of strengthening mechanisms available to the metallurgist. The spinning, glowing little wheel inside your turbo is the result of decades of research and service experience with many families of superalloys. In jet engines, higher fuel efficiency has required a steady increase in hot gas temperature, and this has driven a continuous process of materials development. This has provided a menu of proven materials options from which to make cost-effective and durable turbocharger turbine wheels. Turbo Diesel Register Issue 47


Issue 48’S Theme – Historical Perspective: China’s Development In the modern world of e-mail and immediate responses, Kevin Cameron responded to the Issue 48 theme idea with the following: “The historical perspective topic makes me think of the truculent (best puns come naturally) Harley owners who tell the world how much they hate rice-burners, then switch on their Hitachi ignitions and ride off on their Showa-suspended Milwaukee vibrators, their cylinders inducting air through Porsche-designed ports as fuel is injected by Italian made Marelli injectors. Globalization is more than just Third-world countries being creatively kept down by the World Bank and IMF—it’s all brands of manufactured goods turning into each other as everyone tries to get the best performance/price ratio and ends up identical. And why do F-15s and Su27s look so much alike? They do the same job. “In the background is the rumble of fork trucks as the very latest in research and development and automated production equipment is delivered to fast-growing Chinese firms with world ambitions. Sounds a lot like Japan in 1954. And, son-of-a-gun, China is the world’s second-largest oil user. “I’m going to have another look at rapidly-developing Diesel technologies and see what is out there that your readers might like to hear about. Any suggestions?” Yours truly responded: “On behalf of the TDR audience, we’re always interested in diesel development. And I’m thankful that we’ve got your industry expertise to share with the readers. Also, as a part of Issue 48, I will have an article on the upcoming 2007 diesel exhaust emissions legislation and all of the industry buzzwords: EGR, SCR, Caterpillar’s ACERT, and other emissions-type abbreviations. So, my look at the industry, as gleaned from trade publications, should nicely dovetail into interesting industry trends.

“Speaking of industry trends and developments—how about China? Developed countries should learn from the past, but I often wonder if our elected officials have access to a history book. Perhaps we can all flip burgers, manage retail outlets or entertain one another with our reality shows to keep America strong and prosperous. Geez . . . .” Kevin shied away from my cynical comments about America. He is a smart gentleman. He stayed on-topic and provided a bit of global historical perspective. Thus he completed the “Theme for Issue 48” assignment with this look back in time.

China’s Industrial Revolution England had the original Industrial Revolution, and that first time it happened almost by accident. Other countries saw England’s experience and tried to do better—most notably Germany, where Bismarck established a system of free public education, workmen’s compensation, and technical and scientific training to build up Germany’s industry. The French did much the same, but began with technical schools under Napoleon. When Japan emerged from WWII with its 66 largest cities burned out by B-29 incendiary attack, they were mostly starting from zero; so they could begin with the very latest in production and R&D equipment. At first, people in the West sneered that “Life is cheap in the East—they work their people 14 hours a day for practically no pay—that’s why their stuff is so cheap.” But years later it was clear that Japanese cameras, measuring instruments, and machine tools were the equal of any you could buy, and then after that came the cars and the electronics. Today the labor content of a Japanese car is down to 18-20 man-hours—the rest is performed by automated systems. Meanwhile, Japanese industry purposely


concentrated on high value-added work such as development and began to leave actual production to the South Koreans or whoever would have it. Currently Japanese companies are moving parts production to China, whose industrial revolution is in its early-middle stages, as fast as they can. The first step on the way to innovation is to master existing technologies. That’s where China is right now. Once they have done so, their cleverest people will, as so many did before them, look at those methods and say, “Why didn’t they do it this way? It would be better and also cheaper.” Here in the US, steel production stayed with processes and equipment that had been paid for back in the 1930s and ‘40s, but in Japan continuous casting was developed which allowed improved process control, lower costs, and a better match to downstream processes. All this is bound to affect the Diesel engine somewhere down the line, just as the Japanese have. Turbo Diesel Register Issue 48


Gas to Liquid—GTL Diesel Fuel Much of the hydrocarbon energy trapped in subterranean petroleum deposits is gas, either by itself or dissolved in liquids. This is convenient when the gas field is near gas consumers as it can be piped directly to them and used nearly as it comes from the earth. Otherwise, gas is an inconvenience because it is bulky; hard to transport except as LNG, a cryogenic liquid held at roughly 250° below zero. Ships for such transport exist, but construction of liquefaction and port facilities to serve them meets with local resistance. (What if that stuff leaks out and explodes? Will my neighborhood become “Hindenburg Acres”?) This has stimulated research into methods of chemically converting natural gas into a substance that is liquid at room temperature. Much research has already been performed—the Fischer-Tropsch process for coal liquefaction was developed in the 1920s at Germany’s Kaiser Wilhelm Institute. A synthesis gas is prepared from the raw hydrocarbon (natural gas in this case, but it can also be coal or biomass), consisting of hydrogen and carbon monoxide. This is adjusted to a 2:1 ratio of hydrogen to carbon monoxide by removal of some hydrogen, and this gas mixture is bubbled up through a slurry reactor consisting of a mixture of petroleum wax and an iron or cobalt catalyst. Electric fields on the surface of the catalyst accelerate the combination of hydrogen with carbon atoms to form hydrocarbon chains of various lengths. When this process is operated at 630°, the output consists of light fractions—gasoline and olefins—and this was essentially how Germany produced synthetic fuel from brown coal during World War II. Operated at a cooler 360°-480°, the output assumes the molecular weight and boiling temperature range of Diesel fuel, with some waxes. The wax can later be “cracked” (its molecules broken down by heat or catalysis to reduced molecular weights) to yield more Diesel. The present commercial process was developed by Sasol in South Africa during that country’s isolation over the issue of its apartheid racial policies. South Africa had no petroleum of its own, but plentiful coal.
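
A quick way to see where the 2:1 figure comes from is the simplified overall reaction for building a straight-chain alkane, n CO + (2n+1) H2 -> CnH(2n+2) + n H2O. The little Python sketch below is only an illustration of that bookkeeping, not a model of the real catalysis:

    # Simplified Fischer-Tropsch bookkeeping (illustrative only):
    #   n CO + (2n+1) H2  ->  CnH(2n+2) + n H2O
    # The hydrogen-to-carbon-monoxide ratio needed approaches 2:1 as chains get longer.
    for n in (8, 12, 16, 20):                  # carbon numbers near the Diesel boiling range
        h2, co = 2 * n + 1, n
        print(f"C{n}H{2 * n + 2}: H2/CO = {h2}/{co} = {h2 / co:.2f}")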

GTL Diesel costs 10% more to make than conventional Diesel fuel, but it is a highly superior product. Conventional Diesel fuel contains a high percentage of aromatic compounds—those based on rings of six carbon atoms with attached hydrogens. These ring structures are extremely stable—which also means they are not so easy to ignite. It is an irony of nature that natural gasoline, which we desire to be highly stable for detonation resistance, is mainly made up of knock-prone straight hydrocarbon chains, while Diesel, which we want to auto-ignite easily, mainly consists of very stable, auto-ignition-resistant ring compounds. GTL Diesel, on the other hand, consists of straight and branched chains with a very high cetane rating over 70. During Diesel combustion, fuel droplets evaporate, heated by the compressed hot air around them and by infrared from nearby combustion flame. Each droplet radiates a cloud of vapor which diffuses into the surrounding compressed air, burning only as the local ratio of fuel to air reaches a nearly chemically correct value. In this diffusion flame, the droplet acts as a distillation apparatus, boiling off its lighter fractions first, followed by the heavier components. The lighter fractions naturally diffuse more rapidly, and therefore reach the flame zone first. The heavy, stay-at-home fractions arrive later. As a result, some of the heavy stuff may not actually have time to burn thoroughly, but instead releases carbon atoms that clump together to form—you guessed it—the dreaded exhaust particulates. Carbon is attractive, sticky stuff—that’s why it’s used to mellow whiskey and to filter cigarette smoke. Therefore onto the surfaces of these carbon particles are attracted any unburned carbon-ring fuel fragments that happen by. It is these “PAHs”—Polycyclic Aromatic Hydrocarbons—that are implicated in the carcinogenicity of Diesel particulates. This is one of the big barriers to wider acceptance of otherwise super-efficient Diesel power in the US. When GTL fuel burns, its less stable chain molecules light up promptly, then


burn quickly—and more completely. Compared on a combustion pressure versus time graph, the pressure begins to rise sooner in a GTL-fired engine (as much as 4 degrees earlier, according to Daimler-Chrysler research). There is also a reduction of HC and NOx emissions on the order of 1/3. HC emissions are reduced by the more complete burn, owing to the complete absence of stabler carbon-ring compounds. NOx emissions drop because, with GTL’s higher cetane rating, the fuel can be ignited even in the presence of higher rates of inert EGR. The key here is that NOx formation is linked to high temperature. Use of increased EGR at part load reduces flame temperature. Better yet, nearly the same performance and emissions reductions result from cutting the GTL with normal Diesel fuel 50/50 (in this case, a Euro reference fuel). Finally, GTL contains zero sulfur (sulfur gets in the way of projects to clean up Diesel emissions by use of exhaust catalysts) and zero heavy metals (same objection). GTL Diesel has been described as crystal-clear and odorless—another big difference. All of the above reveals why a large plant is now under construction in Qatar for the synthesis of GTL from natural gas, ramping up over the next six years to 600,000 barrels per day. Just when you get that tingling-under-the-collar anxiety that world events are coming to take away your glorious mobility (feeble socalled electric cars, “voluntary” shifts to public transportation, etc.), along comes a bit of encouraging news like GTL. There is a lot of life left in the internal combustion engine, and you can bet oil companies wouldn’t be building big GTL plants if they didn’t agree. Turbo Diesel Register Issue 49


Turbocharger History The turbocharger’s long history partly conceals a very important fact —that rapid progress depended upon how much money was being spent on materials research. Early turbochargers, such as those of Sanford Moss, during and after WW I, had short lives. Early altitude records resulted from the most careful use of such fragile machines, as the primitive materials of which their blades were made quickly stretched and broke. Captain R.W. “Shorty” Schroeder, using such a GE turbocharger, flew to 33,000 feet on February 27, 1920. Those of you familiar with the Boeing B17 bomber of WW II know that the turbine wheels of its turbochargers were visible from below, as they were essentially mounted flush with the nacelle surface, in full view. At night, their glow was clearly visible from below. This mode of mounting came about through a snap decision taken by engineers seeking any means of providing extra cooling for the hard-pressed turbine blades. Turbochargers had an easier time of it on Diesel engines because of their lower exhaust temperature, and this is why turbos became a commercial product in this application first. A brief look at the performance of exhaust valves in aircraft and other heavy duty engines reveals why the turbocharger problem was so difficult in the early years. Because aircraft engine development was the leading technology, high temperature materials were developed primarily for exhaust valves. An exhaust valve spends less than 40% of its time open, being intensely heated on both faces by sonicspeed hot exhaust gas. The rest of its time it spends on its seat, to which it rapidly transmits the heat it has collected from the previous exhaust event. It thus “rests” between exposures to heat. Yet from the beginning of internal combustion through the mid-1930s, exhaust valve cupping, stretching, and cracking remained serious problems. If the best heat resisting materials performed this poorly on a 40% duty

cycle, think how rapidly they deteriorated when subject to the continuous severe stretch resulting from the 100% heating duty cycle and centrifugal force acting on a turbocharger blade! The turbocharger had to make do with whatever the current best exhaust valve material happened to be. For example, for a period in the 1930s it was the exhaust valve stainless steel KE965 that was chosen for US turbocharger blade manufacture. Only when it was later discovered in a routine materials search that certain alloys, originally developed for corrosion resistance in chemical engineering, were also highly temperature tolerant did a new avenue of blade material development appear. Very soon after this the development of gas turbines was taken up by the governments of Britain and Germany, and later by the US. Sanford Moss had been advised early to give up his gas turbine research in favor of the more practical turbocharger. In England, RAF officer Frank Whittle had been dismissed for years by supposed “authorities” in the engine field, on the grounds that no materials could survive the necessary stress and temperature to make a workable gas turbine. Although scientists and engineers claim to be open to innovation, new ideas are too often rejected by them when they conflict with long-held attitudes. Such attitudes then take on the appearance of natural law, when in fact they are just uninformed opinions. Such attitudes can hold back progress for years. A respected researcher writes the definitive text in a certain area, becoming its leading authority and becoming a full professor. When other researchers present conflicting views for publication by technical journals, peer review boards reject those views as too radical for publication. The leading authority, having made his contribution at an early age, spends the rest of his life stifling dissent as an obscurantist old fossil. Frank Whittle, who had most carefully made the calculations and knew the temperatures and stresses his gas


turbine would produce, was thus unable to persuade others even to look at his reasoning. Therefore he had to find private money to finance construction of a prototype. When it operated successfully, it was obvious at once that this was the way to move past the “propeller barrier” (around 500 mph) and move on toward supersonic flight. Whittle was summarily pushed aside by the British government, which then focused powerful resources on jet engine development. Decades later, Parliament voted Whittle a $160,000 “tip,” and arranged for him to have a joyride on Concorde. He lived out his life in Florida. When USAAF General Arnold was made aware of British jet engine work, he ordered crash programs instituted at GE and Westinghouse. US materials research went into high gear as well. With the best minds at work on these problems, progress was steep. The new engines moved out of prototype stage, needing overhaul at 15-25 hours, into squadron military service near the end of WW II with predictable lifetimes of hundreds of hours. Materials research was the major basis of this success. Materials technology does not result from mad scientists having brainstorms. It requires millions of dollars to pay for the construction and around-the-clock operation of hundreds of “creep cabinets,” apparatus in which material samples are electrically heated and held at constant high temperature under steady stress, while their slow stretch is optically monitored to high accuracy. In the spring of 1944 the vast gamble that was the Boeing B-29 bomber development was teetering on the point of failure as a result of the delays and defects in its Wright R-3350 engines. Just at this time a revolutionary refrigerated high altitude wind tunnel was completed near Cleveland, cooled by 100,000 horsepower of Carrier air conditioning located in vast machinery spaces beneath the tunnel. Despite the top-priority needs of the B-29 program, it was not the ailing 3350 piston engine


that was first to be tested in that new tunnel. It was the GE I-16 turbojet engine. General Arnold was making sure the US never got left behind in aircraft propulsion again. Only governments have the money and the do-it-today power to make things happen in months, rather than years or decades. One result of this work was the development of heat-resistant materials that would make turbocharging fully practical for commercial applications. Despite this, the cloud of ignorance that had almost defeated Frank Whittle was still affecting decision-making in the engine world. The new official belief was that jet engines, while powerful, were so fuel-hungry that they would be suitable only for defensive fighter aircraft for many years to come. Makers of traditional piston engines assumed their products would be carrying the freight in the meantime, so they laid plans to phase-in the new turbine technology as a part of what they were already making. This was almost a repeat of what had nearly stopped Whittle. No one could believe that materials and design would advance as rapidly as they did and therefore it would be necessary to advance by baby steps. The first step was to integrate the turbocharger into aircraft piston engines in new ways. In textbooks written at that time, the piston engine was pictured as becoming smaller and smaller as its turbocharger became larger and larger—until in the end, the turbine would be all that was left. Previously, the turbocharger had been an add-on unit, supplying compressed air to an engine’s normal mechanically driven supercharger as a first stage of compression. But now the plan was to use the turbine to recover power from the cylinder exhausts and send that power back to the crankshaft. This was a process known as turbo-compounding, and it was considered the appropriate first step because existing turbine materials could handle piston engine exhaust gas.

Wright Aeronautical Corp. (WAC) developed a compact system using three turbines, each served by six of their radial engine’s eighteen 186 cubic inch cylinders. Each turbine extracted 120 horsepower from the exhaust flow for a total power recovery to the crankshaft of 360 horsepower. This TC-18 engine, at powers up to 3700 horsepower, set records for long range and, in the lovely Lockheed Constellation and Douglas DC-7, made trans-oceanic commercial flights routine in the mid to late 1950s. Pratt & Whitney envisioned much more elaborate schemes in which their 28-cylinder radial 4360 engine would supply exhaust gas to multiple turbines, in some cases operating as turbochargers and in others being turbo-compounded. In the vast array of insulated stainless steel ducting, turbines, valves, compressors, and nozzles, the actual power section with its 28 cylinders seemed to shrink into insignificance. Such complex powerplants—part piston and part turbine, containing an unbelievable number of parts—were expected to power updated versions of the B-50 and B-36 bombers. Fortunately for Pratt & Whitney, these versions were canceled, forcing the company to make a new plan—to begin licensed production of the British Nene turbojet. P & W today remains one of the world’s major producers of turbine engines. In England, Napier designed a flat-12, two-stroke, Diesel turbo-compound piston engine. It looked like a piston engine giving birth to a jet engine, and was named the Nomad. Why all this complexity? No one at the time believed that materials science would make possible fuel-efficient jet engines as rapidly as it did. Therefore the correct and prudent path to progress was to use a piston engine as a high-temperature gas generator and first stage of expansion, and to complete that expansion—or power recovery for range or speed—by use of a less temperature-tolerant turbine.


Napier expected Nomad-powered propeller aircraft to cruise more slowly than the then-new, all-jet De Havilland Comet. But it would reach New York from London sooner than the Comet because its greater fuel efficiency let it fly non-stop. Meanwhile the turbojet Comet, devouring fuel like a military fighter, would need to refuel in Gander, Newfoundland, and probably in Ireland as well. Wright, makers of the radial turbo-compound engine, had a run of commercial and military sales successes with it and persuaded themselves they could go on selling such engines for years. But when 1957 came, all the airlines placed orders for the radical new Boeing 707 with its improved turbojet engines—and sales of piston and turbo-compound engines dropped dead. Suddenly aviation was jet-powered. The driving force had been the Cold War, which persuaded Congress to flood aviation development with money. Boeing had produced the all-jet B-47 swept-wing bomber, then the eight-engined B-52 and the air refueling KC-135. Having learned from all those successful designs, the logical next step for Boeing was the commercial 707. Just before Christmas in 1959, my college roommate, who had arrived by DC-7 with piston turbo-compound power, decided to fly home for the holidays by jet. But the revolution wasn’t over yet—new generations of castable high temperature alloys were in development in the late 1950s through early 1960s. On the news one morning in 1964 I heard that a DC-8 had suffered a turbine break-up on take-off from Boston’s Logan Airport. In the parking lot at work I would find a small piece of that turbine. Today, failures of that kind are extremely rare. Almost as a footnote to all this, the resulting fall in the price of high performance refractory metals made reliable truck engine turbocharging common at last. Turbo Diesel Register Issue 50


SCR, Fuel Economy and Two Stroke Diesels Selective Catalytic Reduction The US EPA seems determined to stick to its policy of maximum protection of the atmosphere rather than shift to a more European agenda of trying to minimize the amount of fuel burned by encouraging the use of Diesel. It had been widely hoped that some relaxation of Diesel emissions standards would aid the US auto industry, so much of whose profitability has recently come from sales of heavy SUVs and light trucks. Powering such vehicles with fuel-efficient Diesel engines might have preserved their popularity with buyers during the recent upsurge in fuel prices. It was not to be. Instead, if wider use of Diesel power is to take place in the US, advanced emissions technology will have to be adopted. At present it is possible to filter particulates out of Diesel exhaust and to periodically burn them off the filter. This eliminates much of the smoke and smell of Diesel exhaust, along with the newsworthy carcinogenic compounds that notoriously adsorb onto such particles. Because Diesel engines burn their fuel in the presence of excess air, they emit very little in the way of carbon monoxide (incompletely burned carbon) and UHC (unburned hydrocarbons). The really difficult problem for the Diesel is how to achieve high power density—a lot of power from a small package—and at the same time control oxides of nitrogen (NOx). This is difficult because, in general, the higher the power density—usually from high pressure turbocharging—the higher the combustion temperature. Nitrogen, making up 79% of the atmosphere, is normally very stable but high temperatures can induce it to form nitrogen oxides. Therefore the harder we try to get power from a Diesel engine, the more NOx it tends to make. A standard measure against NOx formation is to reduce combustion temperature by diluting the air charge with inert exhaust gas—exhaust gas

recirculation, or EGR. But for every percent of such EGR dilution, we lose a percent of power. One answer to this tangle is to accept that a powerful Diesel will produce NOx—and then deal with it in exhaust aftertreatment. This can be done by selective catalytic reduction, or SCR. Because similar problems exist in the electric power generation industry, methods have already been developed to transform NOx in flue gas back into harmless chemistry. A catalyst is an agent that promotes and participates in a chemical reaction without itself being altered in the process. The principal idea is to provide a source of extra nitrogen so that the strong affinity between the supplied nitrogen and the nitrogen in NOx breaks up the oxide, forming N2 (ordinary atmospheric nitrogen). Usually some hydrogen is supplied along with nitrogen (combined with the nitrogen as ammonia or urea) so that the oxygen in the NOx combines with this hydrogen to form water. The overall effect is to transform the potent smog former, NOx, back into harmless nitrogen and water. In the absence of high temperature to drive this reaction (Diesel exhaust is much cooler than spark ignition engine exhaust because of the Diesel’s high compression/expansion ratio), a catalyst must be used. This is usually a metal whose strong electric field distorts the target molecule in a way that makes its combination with specific reactants much more likely. Think of the catalyst as a “mugger” who pins the victim so his pockets can be ransacked. Once the target molecule has reacted, it is no longer as attracted to the catalyst atom and so goes on its way, leaving the catalyst atom ready for business with the next NOx molecule. What this may mean in the future is that Diesel users will simultaneously tank up on Diesel fuel and urea—possibly by use of a co-fueling nozzle that simplifies and speeds fueling while keeping fuel and urea separate. Urea consumption will be about 4% of the volume of Diesel fuel burned.
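
Taking that 4% figure at face value, a back-of-the-envelope sketch in Python (the fill-up sizes are hypothetical, not from any test) shows what it might mean at the pump:

    # Rough urea (DEF) consumption from the ~4%-of-Diesel-volume figure quoted above.
    def urea_gallons(diesel_gallons, urea_fraction=0.04):
        """Approximate urea solution consumed for a given volume of Diesel burned."""
        return diesel_gallons * urea_fraction

    for fill in (35, 100, 300):   # hypothetical fill-ups: pickup, dual tanks, a week of hauling
        print(f"{fill:>3} gal Diesel -> roughly {urea_gallons(fill):.1f} gal urea solution")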


Diesel Fuel Economy One principle underlying many approaches to improving fuel economy is the fact that oil films are most efficient when they are loaded almost to the point of breakdown. One way to see this is that it is more efficient to use a small engine, operating on full throttle, to make the power necessary to keep a truck rolling at highway speed (as an example, 50 horsepower), than it is to operate a much larger engine on part-throttle to make that same power. The small engine has small bearings, and they are loaded heavily, so they generate less friction than do the larger, more lightly loaded bearings of a bigger engine on part throttle. In WW II pilots discovered they could extend the range of their airplanes by reducing RPM, increasing propeller pitch (same as fitting a taller rear gear in a truck), and making the necessary horsepower by increasing supercharger or turbo boost. This worked because it reduced the speeds of all moving parts, thereby saving considerable power that would otherwise have been consumed shearing oil films at the higher speed. Many a pilot, low on fuel, growled his way back to his carrier on low revs and high boost, crossing his fingers that this technique would actually work. This same principle was applied after the 1973-74 oil “shortage”, when the peak power rpm of highway truck Diesel engines was reduced from 2250 to 1800 rpm. It was possible to do this because at the time, the use of turbocharging had become common. Thus, engines could make the same power at 1800 rpm that they had formerly made at 2250 simply by turning up the boost. The large marine Diesels described below carry this to an extreme, rotating at heartbeat speed.
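
To put rough numbers on the principle (illustrative figures only, not test data), recall that horsepower = torque x rpm / 5252. Holding 50 hp while dropping from 2250 to 1800 rpm simply demands more torque—that is, more boost—while rubbing speeds fall in proportion to rpm:

    # Illustrative arithmetic for the low-rpm, high-boost principle (assumed figures).
    def torque_for_power(hp, rpm):
        """Torque in lb-ft required to make a given horsepower at a given rpm."""
        return hp * 5252.0 / rpm

    for rpm in (2250, 1800):
        print(f"{rpm} rpm: {torque_for_power(50, rpm):.0f} lb-ft for 50 hp, "
              f"relative rubbing speed {rpm / 2250:.2f}")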

OPOC Two Stroke Diesel When the price of fuel goes up, we dream of super-efficient engines. When it goes back down, we prefer what we have, because it’s at least partly paid for. The


natural fear is that, sooner or later, fuel will go way up and stay there. What then? Engineers dream about this even when fuel is cheap, because (a) new features give a competitive edge and (b) engineers retain a child-like desire to make new things happen. FEV Engine Technology, a German technology developer with US offices in Auburn Hills, Michigan, has shown a new two-stroke Diesel engine of unusually high efficiency. While a normal efficiency range (work output as a percentage of fuel energy supplied) is around 33%, FEV’s new OPOC engine pushes that number above 40%. This is highly significant because the most efficient prime movers now in existence—large marine two-stroke Diesels—recover just over 50% of their fuel’s energy as work. Those very large engines are especially efficient because (a) they have little heat loss surface area in relation to their huge displacement, and (b) their friction loss is low because they hardly move—operating at 60-90-rpm. OPOC stands for Opposed Piston, Opposed Cylinder, and it seeks to raise efficiency in an ingenious variety of ways. Of special interest is that many of the ideas implemented in OPOC are very old. Opposed Piston means that two pistons operate in each cylinder, compressing air between them. Many types of opposed piston engines have been built in the past, such as the German Junkers aircraft Diesels of WW II, and US-made, Fairbanks-Morse locomotive and submarine Diesels. Adopting this construction does away entirely with cylinder heads and all their heat loss and attendant valves and other parts. Thus, opposed pistons do away with a major source of heat loss, and reduce complexity. Without mechanical valves, such engines operate on the two-stroke cycle. Of the two pistons in each cylinder, one controls a ring of exhaust ports and the other, a ring of fresh air ports. Thus,

cylinder scavenging is from one end to the other, which engineers term uniflow. Fresh air is supplied to the cylinders by a blower. In most engines, force from the pistons is delivered to the crankshaft from one or two rows of cylinders above it (in-line or V construction). This forces the crankshaft down forcibly against its main bearing caps, generating fluid friction in the main bearing oil films. In the OPOC engine, the OC stands for Opposed Cylinder—meaning that this is a flat engine with its crank down the middle and a cylinder to the right and one to the left of it. In current parlance, it is a “180-degree V-engine”. The makers cite Ferdinand Porsche’s flat, opposed-cylinder “boxer” engines as inspiration for this, but the idea of the flat engine goes all the way back to the Benz “Kontra,” designed by August Horch in 1897. How can there be two pistons in each cylinder, if there are cylinders to the right and left of the crank? The two pistons nearest the crank are operated conventionally, by connecting-rods. The pistons at the far end of each cylinder are joined to the crank by long operating rods passing to either side of each cylinder. As the crank turns, pistons on right and left approach and recede from each other in their respective cylinders, phased and counterweighted to give excellent balance. There is another advantage: with one of each pair of pistons pushing on the crank and the other pulling (by means of its operating rods), there is almost no net force on the crankshaft main bearings. The best way to free an engine from friction loss is not to generate the loss in the first place, which this engine cleverly achieves by its balancing of right- and left-bank forces against each other. A prototype turbocharged FEV OPOC Diesel was shown at the SAE World Congress in April, 2005. It is said to generate 325 horsepower and 590 lb-ft of torque at 2000-rpm, while weighing 270 pounds. This is more than 1.1


horsepower per pound—an extremely impressive result for a Diesel. Its two cylinders and four pistons give two firing events per crank revolution—the same degree of propulsive smoothness as an in-line four. Turbo Diesel Register Issue 51


Toolbox Diary Everyone’s toolbox tends over time to become a diary of past life—every tool has its story. For thirty years I carried a no-brand, #3 Phillips screwdriver that I liked. It had a fluted amber plastic handle with a black stripe. It had been with me longer than most marriages last. Then one day its accumulated stress history maxed out and a chip popped out of it. Now I have a new NAPA replacement but haven’t really got to know it yet. I love Snap-On wrenches—smooth and organic-looking, they fill me with desire when I step up into the salesman’s truck, and I can feel the money pour out from my wallet. But I still have a few Craftsman items—perhaps just because I perversely don’t want my toolbox to become a one-brand, add-a-pearl affair. Somewhere in my shop is a short, S-shaped double open-end that bears the classic “Winchester” name. Its origin is unknown to me but here it is. Because for years I was involved in building racing motorcycles, I have a pair of Robinson safety-wire twisters to which I treated myself in 1971. Before that I had either twisted wire by hand (time-honored but a serious waste of time) or had hobbled along with clapped-out twisters I’d bought at a junk sale. Robinson for years served the aircraft industry—aircraft used to be covered with fasteners secured with twisted stainless wire. But gradually aviation shifted over to deformed-thread, elastic stop, or other forms of security. When I bought those twisters I felt equipped—I loved having them at last, having them in my hand, being able to expertly wire five sprocket bolts in series so they looked as they should.

Click, the wire is cut through, and it’s not necessary to mash again and again because the first and second bites didn’t quite do the job. I have a little two-ounce ball-peen hammer that has been just right for setting dowel pins. I could use a bigger hammer, but this one has always been a small pleasure. Because my tools had to go with me to races, I bought only what fit the fasteners on the equipment I was servicing—the odd sizes could stay in the Snap-On truck, and the cash they would have cost stayed in my wallet. At this point I want to interject a remark I heard when I was a callow beginner. I was in the queue at the parts counter of Boston’s long-ago Triumph motorcycle dealer when I heard loud talk coming from the shop. One voice was that of sweet reason and adult compromise, saying that Sears tools were guaranteed and a good value for the money. I agreed—my box was full of them. The other voice was more strident, less reasonable—and doctrinaire to boot. “In my book there are only two kinds of tools—Snap-On and snap-off!”

For a time, I earned most of my income with die grinders and a gas welder. The one was for cylinder and head porting, and the other for making exhaust pipes. I can’t say I had a special affection for either tool, but they did become very familiar.

Even today, I can hold both opinions simultaneously. I have a cracked “Wizard” brand spark plug socket that belonged to my maternal grandfather. He was a natural mechanic and so was my mother, so I think of them both when I reach for that 13/16” tool. I have Snap-On wrenches to fit the things I worked on in the 1970s, and other stuff to fill in. Two of my old Craftsman open-ends have electric pencil initials on them—each belonged to a friend I worked with at one time, and each somehow became incorporated into my box. I know items of mine have diffused away in like manner. There are a couple of hand tools that came to me in the frame rails of cars—set there for a moment by some busy line mechanic who had to take a phone call and forgot the tool he could no longer see.

A particular pair of diagonal cutting pliers is dear to me because I picked them from among many. Their jaws meet perfectly.

At one time I worked in a shop that built experimental equipment. Being associated with universities, the


management tried to institute an open tool board, with a painted black silhouette of each tool to show its place. Good luck. It didn’t take long until the only remaining tool was the 11/16 open-end. One of the technicians made an experiment. He went to the industrial hardware store and bought a gross of cheap #2 slotted screw drivers. Then he put six of them on the tool board. A week later they were gone, and he put out another six. Why was he doing this? He wanted to derive from it a Law of Tool Diffusion. According to one hypothesis, as soon as every one of the execs in this outfit had taken a screwdriver for home, another for his boat, one more for each car, and so on, the demand for screwdrivers would taper off—and some would remain on the tool board. This might be called the Law of Tool Saturation—stating that people stop stealing available tools when they have enough. The other theory was a Law of Tool Availability, which states that tools will be taken as long as there are any remaining. Naturally, and as you would expect, screwdrivers disappeared from the board until all 144 had been consumed. This is why toolboxes have locks and idealistically motivated tool boards are empty. It’s not that people are venal natural thieves, but that no one can resist available tools. Tools are attractive. And it’s easier to “borrow” a tool, fully intending to bring it back, than it is to actually remember whose it is and where it belongs. I am promising myself a cordless screwdriver but haven’t gotten one yet. I hate my pile of decrepit power hand tools, each with its long cord that inevitably tangles with the others. Yet I refuse to wrap cords around tools—this creates a spring that constantly fights the user just as too-stiff gas welding hoses do. Maybe I’ll treat myself. Some of my favorite tools are those for special purposes. These are the characters of the toolbox. One is an offset box wrench that fits the 16 base nuts on each cylinder of aircraft engines


that have come to live in my shop. Another was a product of desperation. One day as I closed the trunk on my ‘70 Olds with 10:1, 350, V8, the link from lock to latch fell off. Now the key went ‘round and ‘round without effect and the trunk was locked forever. I found that by prying up the cardboard shelf under the rear window, I could shine a flashlight on the situation back there and see that a five-foot ½” socket extension would do the job. An el Cheapo socket from the not-so-hot-but-too-good-to-throw-away tool bin, some electrical conduit, and a little brazing soon produced a special T-handle. It worked a treat. I drove that car until its chassis rusted in two. The special T-handle stands in a back corner just in case that car is reincarnated.

My great-grandfather Fred farmed in Indiana until he died in 1946 at age 92. My dad brought me a ball-peen hammer from Fred’s farm and a new handle restored it to full usefulness. I don’t use it often, but it’s good to have it. From the other side of the family comes an old 19th-century brass-tube microscope. When connecting-rod rollers in my Kawasaki 500 race bike began to fail in 1971, that microscope revealed tiny surface pits on the rollers as a result of an hour’s use. After a little study, I was able to make a chart of surface damage versus hours of use. That, plus some study at a nearby engineering library, allowed the problem to be handled. Optics are a tool too.

When all I did was motorbikes, nothing was heavy. Later, when I was attacked by big-block dreams, I bought a shop crane. I like my little orange crane very much because it turns certain back injury material into easy jobs. After a few years that changed when large piston aircraft engines began to collect at my shop door, looking in at me reproachfully. Those machines have a special beauty, inside and out, that I can’t resist, and they weigh only one pound per horsepower. That’s a heavy problem when take-off power is 3500 horsepower at 2700 rpm. I saw the solution on a road trip to my 40th high school reunion, standing out in front of a diesel shop. It was a 12’ tall, steel I-beam gantry crane with 6000 pound capacity, standing on four giant casters. Lower the tackle, slip the lift chains onto the hook, and PRESTO, not only can I lift just about anything that will fit in my shop, I can roll it around to wherever I want to put it. In the modern psychobabble, this is personal empowerment. The truth is, tools are power. They multiply comparatively weak flesh into irresistible force. Turbo Diesel Register Issue 52



Unlimited Energy from Carpet Fluff? I’d love to be able to tell you that all the recent talk about achieving US energy independence via biomass fuels is not only true but about to happen. However, based upon available figures I can’t do that. Let’s just consider US cropland, variously estimated to be 375-450 million acres. If current US population is about 295 million, using the higher cropland acreage we get roughly 1.5 acres of cropland per person. Total US land area is about 3.8 million square miles, or roughly 2400 million acres, so the above cropland acreage means that a bit under 20% of the US is under cultivation. Every year that figure decreases because land that is prime for agriculture is also prime for housing and business construction. Every day, the US consumes about 19 million barrels of petroleum, of which about 10 million must be imported. Let’s say that the average US motorist uses 750 gallons of gasoline per year in his/her auto (that’s driving 15,000 miles a year, getting 20 miles per gallon). If we plant a mix of rapeseed and soybeans and harvest 75 gallons of oil per acre, this tells us that each car will require the use of 10 acres of cropland (assuming one crop per year). How many cars are there in the US, we wonder? Ah, here is a 1995 figure of 136 million, so at ten acres per car, that would require 1360 million acres to produce the necessary fuel—an amount that is three times the total of US cropland. And, of course, using all US cropland for fuel production would require us all to go on zero-calorie diets. Okay, I’d guess we aren’t going to fuel our auto fleet (to say nothing of the 65 million trucks) on our hypothetical mix of rapeseed and soy oils. Let’s look at ethanol produced from corn. A quick scan suggests yields of 160 bushels of corn per acre are achievable, with 2.8 gallons of ethanol being recovered from each bushel. This is encouraging—448 gallons per acre!
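
The arithmetic is easy to rerun. The short Python sketch below simply repeats the rounded oil-crop figures above:

    # Re-running the oil-crop numbers quoted above (the article's own rounded figures).
    cars         = 136e6     # US autos, the 1995 figure
    gal_per_car  = 750       # 15,000 miles per year at 20 mpg
    oil_per_acre = 75        # gallons of rapeseed/soy oil per acre, one crop per year
    cropland     = 450e6     # acres, the higher estimate

    acres_per_car = gal_per_car / oil_per_acre      # 10 acres per car
    acres_needed  = cars * acres_per_car            # 1.36 billion acres
    print(acres_per_car, acres_needed / cropland)   # about 10, and about 3x US cropland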

Now reality gives us the eye—a gallon of ethanol contains only 2/3 as much energy as a gallon of petroleum-originated fuel does, so that 448 gallons per acre, times .66 = 296 gallons. But even this looks pretty good. With this figure we could power our national auto fleet of 136 million cars by planting only 345 million acres, or 77% of US cropland. I don’t think this is going to happen, but it would at least allow us to eat something. Or does it? As it turns out, some energy has to be used to raise and process all this corn, and one estimate is that for each gallon of ethanol produced, the energy equivalent of half a gallon of petroleum must be consumed. This puts a dent in our plans, for it means that instead of each acre yielding the energy equivalent of 296 gallons of petroleum, we have to subtract half of 448, or 224 gallons, from that to get the net yield. 296 take away 224 gives us a new net per-acre yield, in petroleum-equivalent gallons, of 72 per acre. Now we are back to needing roughly ten acres in cultivation to fuel each automobile. As we saw above in the section on oil crop biomass fuel, this again would require three times the total of US cropland. Maybe we can make corn cultivation and processing into ethanol twice as efficient? Okay, now we need only 1 ½ times the total of US cropland to power our autos (again, we are leaving out trucks and buses, aircraft, home heating, and so on). Newspapers and magazines have presented upbeat stories about powering Diesel vehicles with waste oils. Let’s have a look. US annual production of vegetable oils is said to be on the order of three billion gallons. Total annual US transport and heating fuel use is more like 300 billion gallons, so present vegetable oil production is about one percent of what we need annually. Go for it—how much do you suppose is recoverable from that total? Half? A quarter? Less? I know two families in my area that drive all over the county, begging used fry oil from restaurants. (“Hey, ya shoulda bin here an hour ago—the Petersens were


just here and I gave ‘em five gallons. But I’m all out now—sorry.”) How much fuel must be burned in this way, per gallon of fry oil scored? I could take a personal approach. At my house we buy about five gallons of olive oil per year, and about 1/10 of this is not consumed by the five of us on salads, sautéed vegetables, and the like. That leaves us with half a gallon of waste oil per our five people, per year, or multiplied out to a national basis, 29 million gallons. Compared with the annually consumed 300 billion gallons of petroleum fuels, that is a fraction of roughly 1/10,000. This tells me that used fry oil is unlikely to fuel US energy independence. Shall we go on to consider the possibilities of composting dryer and vacuum cleaner fluff? No, I’ll stop there, and I will admit that “every little bit helps.” But, I do insist that a little bit helps only a little. Anything that is going to replace a significant fraction of the energy we now get from petroleum is going to have to be very large scale. Therefore let us consider some large-scale possibilities. Buried in the western US are thick deposits of oil shale, in which there are many times the proven petroleum reserves of Saudi Arabia. In the tar sands of Alberta are similarly large amounts of petroleum. Underlying Venezuela’s Orinoco basin are vast reserves of heavy—perhaps hard to pump—crude. Likewise let us consider coal, of which we have much. Coal can be transformed into petroleum-like liquids by heating it in the presence of steam and a catalyst, using the Fischer-Tropsch process. The key to exploitation of any of the above is continued high oil prices. To get oil from oil shale, either the shale must be brought to the surface, then crushed, and the petroleum vaporized out of it; or some kind of clever and efficient subterranean process must be devised to do so without mining. Ditto


the Canadian oil sands. All this costs money, and no sane person would have invested a dime in any of it back in the days when petroleum was at $10 a barrel. But at $70… Some people are scraping up all their assets to invest in such things, reasoning that the Western US and Canada will become energy boom areas vibrating with economic power and radiating pipelines full of black gold. It may be—just hop on the Internet and read all about it. It will be interesting to see just what the outcome will be. It gives us all an incentive to live long. Meanwhile, ardent word-duelists are thrusting and parrying over the issues surrounding global warming. One large group of scientists says it’s real and it’s already killing us. Another, smaller group asks how much we know if (a) we don’t know what causes Ice Ages, nor are we able to predict them and (b) if Ice Ages are indeed caused by some kind of solar cycle associated with sunspots, do we know what causes such cycles and can we predict them? If we are already killing ourselves by so much combustion of fossil fuels, how much must we cut back to prevent glacial melting, coastal flooding, tempest storms of wind, drought with killer famine, and who knows what-all else? Big discussion—nobody wants to put a number to this one. Shall we tell the billions of Indian and Chinese people, now industriously raising their standards of living by scouring the globe for energy supplies, “Sorry guys, your better life is cancelled—forever”? And if we did say that, what would they reply? (And with what? Both nations are nucleararmed—operators are standing by, warheads are in stock and available for immediate delivery anywhere on earth in 20 minutes.)

Or should we nobly and voluntarily cut our own petroleum use in half? Would that do? After all, the US uses some 25% of the world’s energy. I’ll walk to work half the time—hell, it’s only 12 miles each way. I’ll keep my house at 48 degrees instead of 68 in the winter. Who needs hot water? I’ll eat rice and beans instead of meat, and all our factories (what’s left of them) will cut their production of everything exactly in half. Not practical, you say? You bet it’s not—without Diesel fuel and plenty of it, we can’t even get food to all the people who live in our cities. This is not a choice—we need energy to live. Hmm, looks like we’d have some trouble putting over the “cut everything in half” plan. Especially at the Kiwanis. Maybe the thing to do is forget it—and then think up some catchy hydrogen slogans or propose ritualistic things people can do to make themselves feel better about all this. I’ve got it—we’ll stop using certain paper products at home. Yes, and then we’ll put two bricks in the toilet tanks. And we’ll park the Hummer except on weekends, and drive the Civic to work (better put a bicycle on the car-top carrier, too—looks “green”). Let’s all go on vacations to put our minds at rest. As we are wafted on our way, our Boeing 747 burns 50,000 pounds of fuel per hour. At take-off for a trans-pacific flight, a wide-body aircraft is carrying enough fuel to heat my house for 50 years. I can’t think about all this. Let’s hope we humans muddle through somehow. Turbo Diesel Register Issue 54



More Harping on an Old Tune EPA’s NOx limit for 2007 Diesel engines is 0.5 grams per horsepower-hour, dropping three years later to 0.2 grams. There are ways to meet these standards but none of them is easy. Oxides of nitrogen are created in the hottest regions of combustion, where fuel and air happen to be ideally proportioned. There are always such regions in Diesel combustion because, with fuel droplets being injected at over 1000 feet per second into hot compressed air, all proportions of fuel and air must exist, from 100% fuel in the droplets themselves, to 100% air at the outer edges of the combustion chamber. Wherever the proportions are right, combustion may be hot enough to compel nitrogen and oxygen to combine as NOx. I say “may be” because even chemically-correct combustion can be cooled, simply by diluting it with inert exhaust gas from previous cycles. This exhaust gas recirculation, or EGR, is an anti-NOx technique used in both Diesel and spark-ignition engines. EGR effectiveness can be increased by cooling the exhaust gas through a heat exchanger—but too much EGR flow can delay or prevent ignition. Eight hours of operation at 200 horsepower under the 2007 NOx limit produces just under two pounds of the stuff. Why is it so important? Go to http://chem-faculty.ucsd.edu/trogler/CurrentNitroWeb/Section4/Section5.shtm and read about it in detail. A critical step in the production of photochemical smog is the conversion of nitrogen dioxide by sunlight into highly reactive ozone, with volatile organic compounds and carbon monoxide involved. This (sort of!) explains why gasoline tanks have pressure caps and activated carbon adsorbers—so that volatile organic compounds (gasoline is a large collection of them) do not rise in a great cloud from all the gasoline-powered vehicles in the nation. It also tells us why our vehicles face limits on carbon monoxide and on the unburned hydrocarbons they may emit—the latter being volatile

organic compounds. You might also expect, reading the above, that there should be legal limits on the amount of incoming sunlight, but they haven’t yet figured out how to regulate that step in the smog-making chemistry. Joking aside, this is serious business. My first trip to California was in 1971 and as I stood next to pit wall at the once grand Ontario International Raceway (now a housing development), looking straight up, I could see long sinister fingers of green smog drifting overhead—gaseous sewage that put a catch in everyone’s throat and a sting in the eyes. Looking to the east one morning, I was astonished to see snow-capped mountains—which had been invisible previously, obscured by the air. Two basic approaches to NOx reduction exist—to improve the combustion process so that less NOx is produced, and having taken that as far as possible, to employ aftertreatment—to remove NOx from the exhaust gas produced. We’ve all read about the benefits of high pressure, common-rail fuel injection, using ultra-fast injectors. The finer the droplets produced by injection, and the further they penetrate into the dense compressed air in the cylinder, the more fuel will evaporate before combustion begins. Even on full load, Diesel engines include about 20% excess air, so on a bulk basis, Diesel combustion is lean. If, as is the case in spark-ignition engines, the mixture were fully mixed before ignition, the NOx problem would be greatly reduced by the simple fact that lean combustion is cooler than chemically-correct combustion. It is the combination of a fully mixed (and in some cases, lean) mixture plus EGR that gives spark-ignition engines their lower NOx emissions. Alas, every fuel droplet evaporating inside a Diesel engine pushes out rich fuel vapor that gradually diffuses into the surrounding air, feeds the flame, and wherever the resulting mixture happens to be close to chemically-correct, it burns hot and generates NOx.
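
Circling back to the “just under two pounds” figure a few paragraphs earlier, the arithmetic is simple enough to check (a throwaway Python sketch, nothing more):

    # NOx allowed under the limits quoted above: grams per horsepower-hour, times hp, times hours.
    GRAMS_PER_POUND = 453.6
    for limit in (0.5, 0.2):                 # 2007 and 2010 limits, g/hp-hr
        grams = limit * 200 * 8              # 200 hp for eight hours
        print(f"{limit} g/hp-hr -> {grams:.0f} g = {grams / GRAMS_PER_POUND:.2f} lb")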


Another modern technique for improving combustion is to inject the fuel not all in one squirt, as it had to be with the old, mechanical-type, piston injectors, but in a series of ultra-short, timed bursts. This is what the new piezo-electric injectors make possible by their speed of operation. If the ability to deliver from five to nine bursts per combustion event seems impossible, just watch the sheets pour out of your ink-jet printer, covered with text and color pictures. Humans are good at dreaming this stuff up and making it work. By breaking up the fuel delivery into pulses, this system makes it possible to distribute the fuel more equally throughout the air charge. The faster the fuel evaporates and mixes with air, and the more uniformly the droplets are placed through the charge, the larger the volume of fuel that will burn on the lean, cool side of chemically-correct—and the less troublesome NOx will be created. NOx aftertreatment now centers on two methodologies. Around 2003, the EPA favored trapping the nitrogen oxides (which are acidic) on an adsorber—a basic surface for which they have an electrical affinity. When the adsorber was close to being fully loaded with NOx, a hydrogen-rich gas (prepared from Diesel fuel by various methods) would be injected, reacting with the NOx to form harmless N2 and H2O. The EPA was suspicious of the alternative technology, which is called selective catalytic reduction (SCR). SCR had been in use for many years on stationary engines and was a proven technology, but what EPA did not like was its need for a continuous supply of nitrogen and hydrogen, in the form of ammonia, NH3. To employ SCR, Diesel vehicles would need to carry a small tank of urea (which breaks down to ammonia and carbon dioxide at temperature), which would be injected into the exhaust. The resulting reaction with NOx would yield harmless nitrogen gas and water. EPA feared that, with no infrastructure in place to supply urea to users, Diesel


engine manufacturers would shrug their collective shoulders and say, “Our engines are certified under your new law. No skin off our noses if there’s as little urea on sale for Diesels as there is hydrogen for fuel cells.” Off the trucks would go with their empty urea tanks. Hence the EPA favored the trap technology, which uses only Diesel fuel and requires no second fuel tank.

The SCR method raises questions:

(1) If the urea tank runs dry, what motivates the operator to refill it?
    (a) A warning light illuminates.
    (b) The engine continues to run, but at limited power.
    (c) The engine stops automatically and cannot be restarted until the urea tank is refilled.

(2) Where will the national fleet of millions of Diesel-powered trucks find the urea in the first place? A run on the chemistry labs at local high schools? Now there are suggestions that all may be well, as one truck manufacturer also owns a chain of truck stops which plan to stock the vital fluid.

At its present level there remain problems with the NOx trap technology. Honda has revealed that they will bring a Diesel automobile to the US soon, with an NOx system based on trapping. In their remarks they noted that the adsorbers used in traps prefer lower temperatures, in the range of 200°. Above some critical temperature any trap adsorber will spontaneously desorb its NOx, causing the exhaust stream to violate the new emissions law. This makes traps currently suitable only for an automobile’s very light duty cycle, but not so good for heavy duty applications. Over time, more temperature-tolerant trap materials may be developed, enabling this technology to be applied to medium and heavy-duty engines.

All of this sounds to me like an application of sophisticated game theory. EPA calls for NOx emissions to drop in stages. The engine makers consider estimates of what this will cost them and their customers, so maybe they are motivated to over-emphasize the costs and difficulty of compliance, in hopes of getting more time from the EPA. The EPA, having played the game a long time too, decides how tough or otherwise to sound in public announcements and private negotiations. Whom should we believe? Meanwhile, executive government offices have phones too. Who knows who says what, and to whom? After all, wasn't it a past president who said, "Gentlemen, the business of this nation is business"? Just try to imagine business without Diesel power—it's a contradiction in terms. Last I knew, Mercedes was waiting for EPA to decide on what basis—if any—it would allow the new Mercedes Bluetec, 3-liter, Diesel auto to become eligible for sale in the US—with its SCR-based NOx system. Will we soon see legions of Mercedes-driving doctors and lawyers, disguising themselves in Red Rose Animal Feeds caps, flocking to truck stops to buy their urea? The technology to meet 2007 and likely 2010 Diesel NOx levels exists and is being refined. We'll be able to drive. We just don't quite know all the details of the systems that the above complex negotiations will require on our engines. Turbo Diesel Register Issue 55

93


Adding Up Small Gains I was just an rpm-worshipping motorcycle guy in 1973-74, when the first “oirushokku” hit, so I did not at first understand the response of the Diesel world to a future of potential oil shortages. Formerly, heavy, over-the-road Diesels operated at 2250-rpm, but in new designs this was reduced to 1850. There are at least two good reasons for this. First, friction loss as a percentage of horsepower output drops with rpm. The most efficient operation, therefore, takes place at the lowest rpm and highest cylinder pressure compatible with smooth running. During WW II Japanese naval pilots learned to employ this method as a means of extending their flying range. When ace Saburo Sakai was badly wounded in air combat, and was fading in and out of consciousness, he was able to use the low-rpm, high boost method to coax his Zero fighter more than 600 miles over water, back to his base. American fliers in the Pacific were trained in this same method by none other than Charles Lindbergh. Dick Veach’s B-29 lost oil pressure on one engine over Japan. Because he was aware of Lindbergh’s low-rpm, high boost method of range extension, he decided to try it, cutting rpm on the remaining three engines and keeping aloft by running up the manifold pressure and prop pitch to compensate. They flew slowly, but when they arrived back over their base they had more fuel still in the tanks than aircraft with four good engines turning.
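The first reason can be given rough numbers. Here is a minimal sketch assuming a simple friction model in which friction mean effective pressure grows with engine speed; every constant in it is an illustrative assumption, not data for any particular engine.

```python
# Rough illustration: friction's share of cylinder work at 2250 vs 1850 rpm.
# Assumed (illustrative) friction model: FMEP grows roughly linearly with speed.

def fmep_bar(rpm: float) -> float:
    """Assumed friction mean effective pressure, bar (illustrative constants)."""
    return 0.8 + 0.0006 * rpm           # ~2.15 bar at 2250 rpm, ~1.91 bar at 1850 rpm

def friction_fraction(rpm: float, bmep_bar: float) -> float:
    """Fraction of indicated work lost to friction at a given load point."""
    imep = bmep_bar + fmep_bar(rpm)     # indicated = brake + friction (pumping ignored)
    return fmep_bar(rpm) / imep

# Same brake power from the same displacement: power ~ BMEP * rpm, so dropping
# from 2250 to 1850 rpm requires BMEP raised in proportion (hence turbocharging).
bmep_2250 = 12.0                         # bar, assumed
bmep_1850 = bmep_2250 * 2250 / 1850      # ~14.6 bar for equal power

for rpm, bmep in ((2250, bmep_2250), (1850, bmep_1850)):
    print(f"{rpm} rpm: friction is {100 * friction_fraction(rpm, bmep):.1f}% of indicated work")
```

With these assumed numbers, friction's share of the indicated work falls from roughly 15% at 2250 rpm to under 12% at 1850 rpm, which is the direction of the gain the industry was after.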

Had it not been for the growing use of turbocharging, a reduction of truck Diesel rpm from 2250 to 1850 rpm would have required a 20% increase in cylinder displacement, with a comparable increase in weight—not acceptable. But with turbocharging, the airflow of the 2250 rpm engine could easily be blown into the same-sized 1850 rpm version, making even more power than before because of the reduction in friction.

The driver of such measures is the extreme competitiveness of the trucking business, which must keep track of every cent to stay in business. The same has been true at sea, where in 1950 the standard form of large ship power plant was the steam turbine with gear reduction drive to the propellers. When Diesel power showed it could compete, early installations also turned the engines faster than the propellers, still requiring expensive, accurately manufactured reduction gears. Today the typical marine Diesel installation is direct drive, with engine rpm reduced to propeller rpm so that these large engines turn only 60-90 rpm. These are two-stroke engines with mechanical valves, heavily turbocharged, recycling every possible scrap of waste heat to achieve overall cycle efficiencies significantly above 50%. This makes them the most efficient prime movers on the planet. Makers of truck and auto engines have long been eyeing the power consumed by accessories such as the water pump, oil pump, air conditioning compressor, etc. The opportunity here arises from the fact that traditional water pumps must be geared to flow comfortably more than the engine needs most of the time, to make sure it never gets too little at any operating point. A more rational scheme would use variable-speed electric motors to pump cooling water only fast enough to deal with the heat actually being generated at the moment.

Likewise, oil pumps are sized and geared such that most of the time a large fraction of the oil delivered cannot be used, and is short-circuited back to the sump through the oil pressure relief valve. A safe but rational scheme would provide a smaller mechanical pump, supplemented by a variable-speed electric pump. A side advantage of this scheme would be the ability to pre-oil the engine before starting. The electric pump would pressurize the oil system before the starter turned the engine, thereby greatly shortening the time engines run before oil pressure reaches vital parts.

Engineers envision future vehicles having all-electric accessories, powered

94

by large direct-coupled alternators, operating at a higher 42 volts. How do you decide at what pulley ratio to drive air conditioning compressors? If they spin fast enough to provide adequate cabin cooling in stop-and-go traffic, surely they are turning much too fast at highway speeds. This is normally handled by clutching the compressor in and out, approximating the needed duty cycle. Therefore in the future they must be driven electrically, and at the most efficient speed for the actual existing cooling load. Electric power steering is already a part of many vehicles, eliminating the constant windmilling of a hydraulic system as the vehicle goes straight down the road. A problem to be faced by super-efficient future vehicles is the need for cabin heat; for the more efficient the powerplant, the less usable waste heat is available in the system. This has long been a thorn in the sides of trucking operators, who try in unsubtle ways to prevent their drivers from idling the main engine to provide heat as they sleep in parking areas. When I asked a group of electric-car experimenters what they were doing about cabin heat, they replied "Nothing." It is claimed that in winter commuter operation, 25% of auto fuel consumption may be charged to cabin heat. Automotive Diesels can face a related problem, for the heat rejected to coolant by a small engine can be on the feeble side. A simple approach is a stove or "heat battery." A more complex fix might be a smaller version of the new cogeneration units being proposed for home power and heat. A small combustion engine turns a generator to produce electric power, and its waste heat (exhaust heat plus cooling system heat) heats the house. Efficiency concerns are driving the design of transmissions. My little 2.2-liter gasoline-powered Chevy Cobalt automatic drops to 1500 rpm as soon as 40 mph is reached. At lower road speed, the transmission keeps the engine spinning faster because it is "betting" that I need to accelerate, and for that the engine must be turning a


much less fuel efficient 2000-3000 rpm. Therefore, even though it takes less power to push a vehicle at lower speeds, its fuel consumption nevertheless tends to rise. Only hybrids actually realize the low fuel consumption that ought to exist at low road speed—in part because they generate their power at an efficient main-engine rpm and deliver it electrically at low speeds. The other part of their advantage is in their ability to regenerate energy in deceleration. There are now proposals that suggest even heavy trucks, used only in city delivery, could recover as much as 60% of braking energy by use of a compressed air braking/energy storage system. Automatic transmissions experience losses because their torque multiplication arises from the pumping of fluid. To eliminate this torque converter loss, at least at highway speeds, lock-up functions are programmed in. The Cobalt performs a third-gear lock-up at 32-35 mph as well as the top-gear lock-up at 40, thereby extending its range of efficient operation a bit over older transmissions that lock up only in top gear. Despite this, the four-speed automatic in my little car returns significantly poorer fuel economy than the manual five-speed. With a gnash of my teeth, I must acknowledge that my late parents were doing it right when they shifted their 1951 three-speed Kaiser sedan (with its savage 115 hp Continental flathead six engine) into top gear at the lowest rpm commensurate with reasonable smoothness. Best economy!
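The braking-energy claim above can be given a rough scale with simple kinetic-energy arithmetic. In the sketch below, the truck mass, between-stop speed, number of stops, fuel energy content, and engine efficiency are all assumptions chosen only for illustration; the 60% recovery fraction is the figure quoted in the text.

```python
# Rough arithmetic for the city-delivery braking-energy-recovery claim above.
# Truck mass, stop speed, stops per day, fuel energy, and engine efficiency are
# assumptions for illustration only; 60% recovery is the figure quoted in the text.

TRUCK_MASS_KG = 12_000          # loaded medium-duty delivery truck (assumed)
STOP_SPEED_MPS = 13.4           # ~30 mph between stops (assumed)
STOPS_PER_DAY = 200             # assumed
RECOVERY_FRACTION = 0.60        # figure quoted in the text
DIESEL_MJ_PER_GAL = 137         # approximate energy in a gallon of Diesel (assumed)
ENGINE_EFFICIENCY = 0.40        # assumed brake thermal efficiency

kinetic_energy_j = 0.5 * TRUCK_MASS_KG * STOP_SPEED_MPS ** 2     # per stop
recovered_mj = kinetic_energy_j * STOPS_PER_DAY * RECOVERY_FRACTION / 1e6

# Fuel that would otherwise be burned to put that energy back into the truck:
gallons_saved = recovered_mj / (DIESEL_MJ_PER_GAL * ENGINE_EFFICIENCY)
print(f"Energy recovered per day: {recovered_mj:.0f} MJ "
      f"(~{gallons_saved:.1f} gallons of Diesel not burned)")
```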

During WW II torque-converter transmissions were developed for tanks (Who has the cool detachment to fiddle with clutching and shifting when there might be a German Tiger tank about to open up with its 88 out of those woods over there?) and it was assumed, by at least some, that large trucks would soon go automatic as well. Economics makes those decisions today, which is why clutch/shift/throttle automation has become the preferred direction. Even though car automatics are being built with up to eight speeds, thereby keeping engine rpm at more efficient numbers more of the time, there is nothing with equally high torque capacity to beat the efficiency of well-made gears, generally estimated as "more than 98%." Even things as mundane as headlights are being upgraded to conserve energy. As the fluorescent-light crowd never tires of telling us, ordinary incandescent (glowing wire filament) bulbs turn only 5% of the supplied energy into light, the rest being wasted as heat. Adding a smidge of halogen and making the bulb out of high-melting quartz allows the filament to run hotter without evaporating (yes, that's just what happens), making more light. Most efficient of all are the new gas discharge lamps filled with noble gas (incapable of forming chemical compounds), excited by showers of electrons. By varying the gas pressure or voltage across the device, its emitted spectrum can be varied. Damn, those humans are clever. And they never let up. Turbo Diesel Register Issue 56

95


Coffee Table Engineering – My Too-Real Experiences Today's engineers can design a whole virtual machine in Pro/E or SolidWorks, then examine it in three-dimensional renderings that can be rotated to any desired angle of view. They can also subject it to simulated stresses with finite element analysis (FEA), or check internal or external aerodynamics with computational fluid dynamics (CFD). Engine simulations allow many bad ideas to be evaluated very quickly. Life was simpler in the 1960s. Our winter sport was to order in all known and relevant catalogs of speed parts, and then to pore over them while the frost grew thick on the window pane and logs crackled on the fire. Cams were a favorite and it was impossible not to be inflamed by the descriptions of the various grinds. "The Cruiser: mild, streetable grind gives tractable boost over stock." "Three-Quarter Race: cuts loose big power and acceleration." "Super-Stomp Double-Throwdown Full-Race: the cam for outstanding track performance." And then the Grand Finale, "All-Out Top-End-Only: ultimate power for Bonneville and long straightaways." After reading through this pulse-accelerating litany, who could order any of that pedestrian stuff at the top of the page? That was for nancyboys. We HAD TO HAVE the "Trigger Burke Killer Super Eliminator," even though childhood memories associated elimination with something more mundane than camshafts. One of the good things about surviving early adulthood is that one may actually learn a few things. Serious racers never use any items that actually appear in catalogs. Little did we know in 1966, but actual races were won with conservative

cam timings much like those of the rejected "Cruiser," but with a lot more lift. It would take years to learn this—years spent stalling at stoplights, clashing valves on overlap, and finding that the actual timings of the cam that came in the box were nothing like the numbers on the timing card. Desperate closing-time phone calls brought the helpful advice to, "Just line up the dots and run it. Forget the timing card." The truth was the junk in the catalog paid the cam grinder's overhead costs of doing business. He lavished his real attention on the people who knew what they were doing. That's how it has to be. Those were the guys who thought nothing of setting up the pistons in the milling machine and sinking their valve pockets an extra .020". Nothing to it. Talking on the phone while doing valve drop checks with light springs. Skimming the head to compensate for the lost compression volume. Hot licks. What shall I say of ten years spent wandering in the wilderness of my own ignorance? That getting there was half the fun? That I would have made more money in real estate? The 1970s were the big decade of two-stroke motorcycle racing in the United States. Two-strokes are a mystery to a lot of people, but they wanted to feel they were moving toward higher performance—somehow. One outfit made a super-light swingarm that flexed so much that the bike would hardly go straight on top end. Another made a radical seat with a flipped-up tail section, based on a casual observation that some cars have decorative spoilers on their rear deck lids. It surely increased drag over stock, but many were sold. Stock brake lines had to be replaced with dash-3 braided stainless flex from the aerospace surplus place. Hot. And how about this Lexan windscreen? On and on it went—stuff that sold well, but

96

never made anyone even a tenth of a second quicker. Every enthusiasm has its self-appointed preachers who greet anyone who has visibly modified anything on his machine by saying, "You know, it's completely meaningless to test more than one modification at a time—because you never know what's working and what isn't. It could be anything." It sounded convincing. So one well-heeled California enthusiast ordered forty replacement cylinders for his 250cc engine (roughly $11,000-worth). He took them to a noted cylinder porting specialist and had just one change made to each cylinder. Like, raising the exhaust port 0.5-mm on this one, widening the port 1.0-mm on the next one, and so on. Then all the cylinders were run on the engine, one after the other, in an exhaustive test program. Notebooks were filled. When the data were all in, the conclusion was that stock is best. Not one of the modifications produced more power. This was not satisfactory, because people with stock engines were all backmarkers. Stock manifestly did not win races. Therefore I went to someone who knew something, and asked his advice. "Well, you know, you could kinda raise the exhaust a little, but if you did that, the engine would want to rev more, so you'd hafta take maybe 20-mm of length out of the exhaust head pipes to let it do that. And if you raise the ports, that shortens up the compression stroke, so you probly oughtta take maybe .025" off the head to get the compression back. And then, if the engine's turning faster, those little stock carbs aren't going to cut it, so maybe you could go up a couple millimeters." It took him less than a minute to say this. His discourse was all very folksy and casual, but the message was clear.


Everything you do to the engine has to work cooperatively with everything else—it has to be a package. An engine is a system, not a parts list. So we tried it—at Daytona. In first practice our brand-new bike was a droner, so out came the die-grinder and I raised the top edges of the exhaust ports a millimeter, and widened the tops of the ports almost 10%. Then we took the hacksaw to the pipes and shortened and re-mounted them. On went a spare head, pre-modified for higher compression. In the next practice the revs went up by 300—on the same gearing. Then I decided to retard the ignition timing from 2.0-mm to 1.7. Another 200 extra revs. That night we put on 36-mm carbs and did all we could off the track to get them responding properly. The next day—another 200 revs. By the end of practice, we were third fastest. One bike ahead of us was a factory entry and the other belonged to one of the real wizards of 250 racing. In the race, the bike ran like an airliner and as the final laps ran out, I sang to it from where I stood behind pit wall, and at the end we were third. We were dizzy. How did we get here? We had got there by finally having learned some basic things about engines. We had read the signs on the spark plugs and on the piston crowns, and had corrected the fuel mixture accordingly. We had been bold to the edge of foolhardiness, but the gambles were supported by good probabilities. We had learned to talk to the engine. And to listen to it. We also learned that the stock engine’s specification was a disguise created by the manufacturer to protect the amateur racer from himself. Everything was conservative. It was like all those backnumber cams in the catalog—intended only to pay the overhead cost at the cam shop while the serious business of racing would be carried on by people who knew what they were doing. We were, by degrees, becoming more like those people.
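That package lesson, that an engine is a system and not a parts list, can be caricatured with a toy model in which each change tested alone shows nothing or hurts, while the combination gains. The power numbers below are pure invention for illustration; they are not measurements of any real engine.

```python
# Toy illustration of "an engine is a system, not a parts list":
# one-factor-at-a-time testing misses gains that only appear as a package.
# The power model below is invented for illustration; it is not engine data.

from itertools import product

def power_hp(port_raised: bool, pipe_shortened: bool, compression_up: bool) -> float:
    hp = 55.0                                   # baseline (invented)
    if port_raised:
        hp += -1.0                              # alone: engine "wants to rev" but can't
    if port_raised and pipe_shortened:
        hp += 2.5                               # shorter pipe lets the higher revs happen
    if port_raised and pipe_shortened and compression_up:
        hp += 1.5                               # compression restored for the shorter stroke
    return hp

for combo in product([False, True], repeat=3):
    label = ", ".join(n for n, on in zip(("port", "pipe", "head"), combo) if on) or "stock"
    print(f"{label:20s} {power_hp(*combo):.1f} hp")
```

In this invented example every single change tested by itself measures at or below stock, exactly as in the forty-cylinder experiment, yet the full package gains three horsepower.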

We found the same when seeking advice from “The Champion Man”—the tuning advisor sent to the events by the spark plug maker. If you were running a droning stocker, he would look at the business ends of your plugs. If oily black liquid did not actually drip out of them, he’d say, “That looks good. I’d run that.” This was his role as “he who keeps the plug user from having a bad time with our brand.” But we later discovered that he had another personality as well—if he knew your bike was running in the top three in practice. Then he’d look into the plugs and say something like, “I think you could advance the timing maybe a quarter-degree. Maybe a half.” “How can you tell?” He handed me the magnifier. “Look at the end of the center-wire. They cut those with a shear at the factory, so when they’re new, the edges are sharp. When your engine is running just right, that center-wire should get hot enough that those edges just start to soften a little—they look a little rounded. Yours don’t—they’re still sharp.” The point here is that there is always more to be learned, more ways to recover information from engines that will point to how they can best be improved. It takes time to learn, and it takes curiosity and careful observation. Dyno room people know a different set of things about engines from what the engineers upstairs know. Users in the field have yet another set of experiences and conclusions. All of this is useful. Once a dyno operator friend phoned and asked me this question, “When an engine starts to detonate (knock), what happens to the exhaust gas temperature?” There was a long silence on the line as I thought about this. Practically every go-kart racer in the world has an EGT gauge on his/her engine, based on the idea that if the temperature goes too high the engine must be detonating. Can fifty-

97

million Frenchmen be wrong? “The EGT should drop,” I said, a little anxiously, hoping reality would agree, yet fearful that it might not. “Well, it does, but I want to know why.” So we discussed it. We both knew that when a liquid-cooled engine starts to detonate, its coolant temp goes up maybe five degrees for no apparent reason. That energy has to come from somewhere, and the hot combustion gas is the only source of energy in an engine. Therefore the EGT should fall. Why does the coolant temp go up when detonation begins? Normally, hot parts of engines are insulated to a degree by a natural boundary layer of stagnant gas that lies near all surfaces. It is stagnant because its molecules lose energy in colliding with the surface. Normal combustion is too smooth to disturb this layer, but the sonic shock waves of detonation scour it away. Unprotected, the metal heats up more than normal, and we see this on the temperature gauge. Experience gradually builds us a model of how things work in engines, and that model helps us plan changes by roughly predicting their effects. Our first thoughts are not always reliable—it’s worth taking a minute to run it past the experience we have behind our eyes. When there’s someone to ask, do it—there’s no shame in admitting that none of us knows as much as he or she would like to. Wisest of all are the engineers and others who have broken lots of parts. After each disaster come hours of staring at pistons, valves, pieces of turbine wheels or other parts, hoping for clues. They are there, and the Wise Ones can see them. And there are volumes of books that belong on the conference room table—inspirational reading that can add meaning to what we’ve seen. An engine is a system, not a parts list. Turbo Diesel Register Issue 57


Diesels at Sea Rudolf Diesel (1858-1913) came to his idea for a compression-ignition engine as a result of theory rather than of cut-and-try. This was a natural matter for a German of this period. Otto von Bismarck created a unified industrial Germany from a collection of agricultural principalities. As a basis for future national power, an industrial revolution was required. Such a revolution had happened by accident in England, but Bismarck planned Germany's version. To entice people away from agriculture to industrial employment in cities, he provided free public education and workman's compensation. To provide the knowledge necessary for industrial leadership, a system of higher education was created, based upon first principles. Dr. Diesel and the other German internal combustion engine pioneers were products of this system. Their work was based upon well-understood physical principles rather than on back-yard intuition. Diesel graduated from Munich Technical University in 1880, and in 1893 published a booklet outlining his theories of how a new and more economical engine type could be designed.

One of Diesel's early engines—that used for original acceptance testing—can be seen in the Deutsches Museum in Munich. It is a beautifully-made vertical single-cylinder machine ten feet tall that suggests that Diesel and his backers were very sure of what the results would be. His backers were influential indeed—Fritz Krupp and Augsburg Maschinenfabrik. He had told them, "The whole of my engine must be made of steel." At its first test, a violent combustion detached an accessory part at high speed. Those present were not disappointed; something big had happened. Now the job was to control it. Soon it was operating, delivering power at a specific fuel consumption of 0.52 lb/hp-hr.

A new engine type was very much needed. Otto had demonstrated that the four-stroke principle could greatly improve upon the heavy fuel consumption of the early gas engines. Yet even four-stroke gasoline engines needed improvement because combustion knock on early and low-grade gasolines required them to operate at low compression ratios of three or four-to-one. Diesel's engine was quickly very successful and so was he. Some historical accounts suggest that Diesel then mismanaged his wealth. He mysteriously disappeared from a channel steamer making a quiet crossing to Harwich, England, in September, 1913. We are offered our choice of three explanations—that he fell, jumped, or was pushed into the sea. An accident on a calm night? A suicide? The third choice requires that we believe agents acting for der Kaiser disposed of Diesel to prevent his knowledge from serving on the British side in World War One. Britain had actually built steam-powered submarines, and several nations had tried to power subs with gasoline engines. The steam sub had a disqualifying window of vulnerability—the motionless half-hour needed to raise steam after surfacing. Gasoline fumes incapacitated crewmen even if they failed to blow the boat to pieces from any stray spark. Subs needed a better powerplant to become something more than curiosities.

Most naval authorities dismissed submarines as coastal vessels at best, unable to keep up with fleets because of their limited speed, and lacking the range for ocean crossings.

Germany changed all that. Although Germany made great efforts to keep up in the 1890s-1910 naval race with Britain, there was ultimately no hope of equaling the British surface fleet. The submarine, on the other hand, had all the ingredients of a completely novel war strategy. England, an island nation, was highly dependent upon sea commerce.

Existing Diesel engines weighed hundreds of pounds per horsepower and were not obvious candidates as submarine powerplants. Yet there are always those who see things not as they are, but as they may be in future.
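For a sense of scale, specific fuel consumption figures like the 0.52 lb/hp-hr quoted above for Diesel's acceptance-test engine (and the 0.42 and roughly 0.36 lb/hp-hr figures that appear later in this article) convert to brake thermal efficiency as sketched below; the fuel heating value used is an assumed typical number.

```python
# Rough conversion of brake specific fuel consumption (lb/hp-hr) to thermal efficiency.
# 1 hp-hr = 2545 BTU; the Diesel-fuel lower heating value here (~18,400 BTU/lb) is an
# assumed typical value.

HP_HR_BTU = 2545.0
FUEL_LHV_BTU_PER_LB = 18_400.0   # assumed

def thermal_efficiency(bsfc_lb_per_hp_hr: float) -> float:
    """Fraction of the fuel's heating value delivered as shaft work."""
    return HP_HR_BTU / (bsfc_lb_per_hp_hr * FUEL_LHV_BTU_PER_LB)

for label, bsfc in [("Diesel's acceptance-test engine", 0.52),
                    ("WW I-era submarine four-stroke", 0.42),
                    ("modern truck Diesel", 0.36)]:
    print(f"{label}: {100 * thermal_efficiency(bsfc):.0f}% thermal efficiency")
```

The arithmetic gives roughly 27%, 33%, and 38%, which is why even the very first Diesel was a sensation against the steam and gas engines of its day.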

98

Diesel development took many forms in Germany, quickly running through many concepts—two-stroke, four-stroke, even double-acting, with combustion taking place on both sides of each piston. Ways had to be found of creating the necessary strength in engine structure to support a long, slender crankshaft—but without excessive weight. This development was a very large undertaking but it was successful—German submarine Diesel engines became the models for US development after World War One. In this early period fuel injection was accomplished by blasting the fuel into the engine cylinder using compressed air. Fuel was metered into a pre-chamber, then carried through multiple small orifices into the combustion chamber. This system achieved excellent atomization because the fuel must pass through two sonic shocks on its way into the cylinder. As a result ignition was prompt. Later marine Diesels typically carried two or more air pumps for this, one of which would be in service while the other(s) were on standby or in repair. The air pump injection system was bulky as well as troublesome. In 1904 MAN (Maschinenfabrik Augsburg-Nuernberg) Diesels weighed 75-100 pounds per horsepower, and were too heavy and bulky for consideration as submarine powerplants. At this time the leading shipyard Germaniawerft (GW) was using Korting gasoline engines in its submarine experiments, and was enjoying considerable export trade because of the wide interest in the submarine. A year later GW again asked MAN for engines and was shown a four-cylinder four-stroke Diesel of 300-hp at 500-rpm, to be ready for demonstration in 1907. After consideration, this engine was ordered in 1908. Meanwhile, GW, much as the Wright Brothers had done, decided it would have to design its own engine. This was a two-stroke of four cylinders and 300-hp, made to be reversible. It was run in March, 1908. In 1906, requests for bid were made to FIAT, Korting, and MAN. German navy authorities were at this time attracted to the two-stroke because


it had no exhaust valve problem. Valve materials were in a primitive state and required frequent re-grinding even if they did not fail outright by cracking or breaking. At this time "All firms experienced great difficulty in manufacturing lightweight diesel engines." This isn't surprising. The problem of submarine power was clearly one of adding cylinders, resulting in longer crankshafts, because of the limited diameter of submarine hulls. The first auto engines with six cylinders in-line had plenty of "difficulty" as a result of torsional oscillations—the back-and-forth twisting of the crank as a result of the applied twisting pulses from cylinder firings. MAN's first proposal as a sub engine was, like so many auto engines of that time, an in-line four. When Montague Napier had run his first six-cylinder automobile in 1903, crankshaft torsional vibration created noise in the camshaft drive. Napier's able salesman S.F. Edge simply called the noise "power rattle" and turned it into a selling point. Another problem was materials. The heaviest part of any engine is its structure, but cast iron isn't known for light weight or fatigue resistance. This was a time of discovery in steel alloys, and Germany and France were the most advanced nations in this endeavor. In the US, steel mills had made great strides in raising profit by economies of scale and labor saving, but their product remained plain old carbon steel. Everything had to be learned for the first time—all the smallest details, for example, of how to design cast-steel pistons that did not have stress concentrations that would cause cracking. Engineers could design engine crankcases strictly according to first principles derived from bridge or ship design, but nature always had the last word. Much of this work had to be accomplished by trial and error, and such expensive means of research and development could only be afforded by the largest organizations. MAN now commanded world-wide sales.
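The torsional-vibration problem described above can be sketched with the simplest possible model, two rotating inertias joined by a flexible shaft. The inertias, stiffnesses, and speed below are invented round numbers, meant only to show how a longer, more flexible crankshaft drops the natural frequency down toward the firing-pulse frequencies.

```python
# Simplest torsional-vibration model: two rotating inertias on a flexible shaft.
# Natural frequency f = (1/2*pi) * sqrt(k * (J1 + J2) / (J1 * J2)).
# All numbers are invented round figures for illustration, not any real engine.

import math

def torsional_natural_freq_hz(j1: float, j2: float, k: float) -> float:
    """j1, j2 in kg*m^2; k (shaft torsional stiffness) in N*m/rad."""
    return math.sqrt(k * (j1 + j2) / (j1 * j2)) / (2 * math.pi)

J_ENGINE = 50.0       # kg*m^2, crank plus reciprocating parts (assumed)
J_LOAD = 500.0        # kg*m^2, flywheel/propeller side (assumed)

for label, k in [("short, stiff four-cylinder crank", 5.0e6),
                 ("long, flexible six-cylinder crank", 1.5e6)]:
    print(f"{label}: natural frequency ~{torsional_natural_freq_hz(J_ENGINE, J_LOAD, k):.0f} Hz")

# A four-stroke fires each cylinder every other revolution, so a six at 500 rpm
# delivers 3 firing pulses per rev = 25 Hz of excitation; the closer the shaft's
# natural frequency falls to low multiples of that, the rougher the running.
```

With these assumed values the stiff crank sits near 53 Hz while the flexible one falls to about 29 Hz, uncomfortably close to the 25 Hz firing excitation, which is the sort of coincidence that made early long crankshafts so troublesome.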

MAN had a "lightweight" engine ready for test in August of 1910, after a two-year build period. GW's engine was also of 300-hp and four cylinders, but was a reversible two-stroke. FIAT and Korting were not quick enough with product to be considered, so the four-stroke MAN and the GW remained. The four-stroke's economy was 0.42 lb/hp-hr, while that of the GW two-stroke was 0.48 (compare these numbers with the 0.35-0.38 of modern truck Diesels). The four-stroke was less noisy but poorly balanced—probably the result of torsional vibrations, which were only kept tolerable by use of a thicker, heavier shaft than the engineers would have liked. A four-stroke's power pulses come half as often and are therefore roughly twice as large as those of a two-stroke, thus being better able to elastically twist crankshafts and so to set them into potentially destructive vibratory motion. The two-stroke turned a bit faster, was noisier and smoother, and started more easily. In 1912 eleven Diesel boats were ordered from GW, powered by the GW two-stroke of 925-hp at 430-rpm. The decision to try GW first may have been related to the problem of torsional vibrations. Yet in the same year four boats were ordered with MAN four-strokes of 1000-hp. By July of 1914, with the beginning of WW I only a month away, the 850-hp GW two-stroke was judged unsatisfactory and the MAN found superior, but GW improved its product steadily. The MAN

four-stroke SV4/42 engine of 1200-hp would become Germany's foremost WW I submarine engine. After the war, examples of this engine would be studied in the US and a very similar engine produced for US submarines. Other engines removed from German U-boats would see postwar service driving electricity-generating plants in England. Why such German primacy? Engineers worldwide might understand the principles of Diesel engines as well as the Germans did, but it was the accumulated experience of the build/test/improve cycle, energetically pursued, which was unique to the German engines of this time. From a position trailing in number of submarines attached to its fleet, Germany achieved a revolution in propulsion. As Eberhard Rossler observes in "The U-Boat," "It was the diesel engine that changed the role of the German U-boat from a defensive to an offensive one and made possible its successful application in a war of blockade." It is the weight and bulk, not of the engine alone, but of the engine and the fuel required for the job at hand, that determine the choice of powerplant for vehicles of long range. Before the end of the First World War, German engineers would be planning submarines of 13,000-mile range, thanks to the fast-developing fuel economy and reliability of the Diesel engine. Turbo Diesel Register Issue 58

Bibliography
"The U-Boat: The Evolution and Technical History of German Submarines," by Eberhard Rossler. Arms & Armour Press, London, 1981. ISBN 0-85368-115-5.
"Marine Diesel Engines," by C.C. Pounder. Newnes-Butterworths, London, 1972. ISBN 0-408-00077-5.
"Engines Afloat, Vol. II: The Gasoline/Diesel Era," by Stan Grayson. Devereux Books, Marblehead, Mass., 1999. ISBN 0-9640070-7-X.
"Internal Fire," by Lyle Cummins. SAE, Warrendale, PA, 1989. ISBN 0-89883-765-0.

99


Getting It Right When a semiconductor manufacturer begins pilot production of a new-generation computer chip with a radically smaller feature size, the engineers know they must weather a period of difficult development, high costs, and a yield of usable chips that may be at first less than one percent. In this work they have these advantages:
1. They have been through this process before and have come through it to achieve profitable production. They know the process works.
2. The company has earlier products in profitable production, providing the financial resources necessary to push through and solve the problems of the next generation.
When prospective Diesel engine manufacturers of the 1930s tackled the problem of airless Diesel injection, they lacked the comfort of (1) above, and in all cases the difficulties they encountered made at least a severe dent in (2) as well. All modern Diesel engines employ airless (often called solid) fuel injection, but when Dr. Rudolf Diesel did his original development at Maschinenfabrik Augsburg, his many ingenious attempts at solid injection ended in failure. In real desperation he resorted to using the injection pump to meter fuel quantity into a pre-chamber. Then the fuel was atomized by blasting it into the main combustion chamber by means of a secondary injection of highly-compressed air. Two- and even three-stage air pumps were therefore an added expense and complication of all early Diesel engines. Because of the very high pressure required, such pumps generated a lot of heat and had to be cooled if they were not to fail. Lines, storage tanks, and check valves all had to be of the very highest quality if the system were to work. All such details had to be worked out by long testing. It is interesting to note that one of the technologies that was successfully used to create very low-emissions two-

stroke gasoline engines is a reprise of this air-blast method developed by Dr. Diesel. The Orbital Engine Company of Australia found in the 1980s that it could produce very fine fuel particle size by metering fuel into a pre-chamber, then blasting that fuel through an orifice into the main combustion chamber with a shot of compressed air from a small poppet valve. Several two-stroke makers including Mercury Marine bought Orbital licenses and built two-stroke engines using this type of direct fuel injection. The early history of true Diesel engines has been complicated by the existence of another engine type—the "oil engine"—mainly produced from 1890 to 1900. While the Diesel cycle requires that the compression of air in the working cylinder generate heat sufficient to ignite the injected fuel, the oil engines of that early period were Otto cycle engines adapted to burn heavy fuels by means of a hot evaporator of some kind. The incentive to devise such engines was the lower cost and greater safety of heavy fuels in that time, as compared with gasoline. A great many thousands of such engines as the Akroyd Stuart and the Priestman were produced for stationary or marine power. After Dr. Diesel's vindication and success in 1897, such engines were sometimes referred to as "semi-Diesels." They were nothing of the kind. In our own era this confusion is fostered by such things as the annoying sparkless running-on of car engines of the later 1970s after the ignition was switched off, or the occasional running-away of a two-stroke, which does not stop even when its spark plug wire is pulled off. People refer to such running-on as "Dieseling," but, in fact, the compression of air to heat it beyond the ignition temperature of the fuel is not involved. The cause of such run-on is always the retention of hot gas or reactive chemistry in the cylinder from the previous cycles. The need for high-pressure air was a great drawback for early Diesels because it was anything but trouble-

100

free. A skilled mechanic was required in attendance to keep all systems in operation. Air-blast injection Diesels were not turn-key power systems. When in the later 1920s this drawback was addressed by attempts to develop solid injection, Mother Nature was her usual generous self in doling out problems and failures. Fuel injection lines flexed, causing fuel to “dribble” at the end of injection, as the dilated steel lines contracted. The fuel oozing from the injectors carbonized there, building up deposits that soon blocked normal injection. Sharp injection cut-off seemed impossible to achieve. Sound waves bounced back and forth up and down through the fuel in the injector lines, causing cylinder-to-cylinder variations in the amount of fuel delivered. Individual adjustments failed to correct this because the variations themselves varied with engine speed. Fuel lines were eroded from within and punctured by cavitation, focused on certain spots by combinations of wave reflections and injection line geometry. We tell ourselves that fluids are not compressible—but they are. The English manufacturer of aircraft landing gear, Dowty, makes a device called a “solid spring” which makes use of the compressibility of oil. The more fuel there was between the injection plunger and the spring-loaded injection valve at the engine cylinder, the more the springiness of the fuel had to be allowed for as well as the springiness of the injection line. The desired result, of course, was to inject fine sprays of fuel at very high speed—750 feet per second was a common number. As such fast-moving droplets hit the dense compressed air in the Diesel engine’s cylinder, they flattened, punched in, and broke up into circlets of sub-droplets. The largest of such sub-droplets were broken up in their turn, the final product being a huge increase in total droplet number and surface area. This greatly accelerated fuel evaporation. Evaporation is a cooling process, so droplet evaporation’s first effect is to


cool the compression-heated air in the Diesel cylinder. This is the cause of the celebrated "Diesel ignition delay" of up to ten crank degrees from the time of first injection of fuel to the time of measurable pressure rise from combustion. The usual description of Diesel ignition is that fuel ignites as it is injected into the compression-heated air in the cylinder. In fact, the evaporation of fuel droplets takes time and initially cools the air around them. Only as fuel-rich vapor comes in contact with hotter regions does ignition actually take place. One solution to the problems of irregular injection was to make all fuel injection lines between a plunger pump and the injection nozzles the same length. Another was the "unit injector"—to give each cylinder its own injection pump/nozzle unit mounted directly on the cylinder head, and each operated by its own pushrod and rocker from its own lobe on the camshaft. Injection plunger to bore clearances measure in millionths, so early plungers frequently seized in their bores. Any contaminants in the fuel had the same effect. The lubrication characteristics of the fuel varied, so a given experimental system might work well on fuel A and seize after a few hours of operation on fuel B. Many materials and surface treatments had to be tested. The experiments gulped money and time. A body of knowledge was being developed at each company that was seeking an airless injection technology. The greatest single attraction of the Diesel engine is its fuel efficiency, so makers knew that reliable operation at one speed and load wasn't enough—although this was the best that some early marine Diesels could deliver. Standard engine test methods would reveal any deficiencies. Few buyers wanted an engine that was economical at ¾ load but used as much fuel as a gasoline engine at ¼ load. Today electronically-controlled common-rail injection systems perform five or more separate injection events per

cylinder combustion, but these concepts are not new—just their successful application. Early attempts at solid injection often began with a common-rail system, in which a high pressure pump maintained injection pressure in a delivery pipe or "rail" that supplied all injection valves. Unfortunately, early injection valves could not cut off flow sharply enough, allowing them to "dribble" and to carbon up, ruining the injection spray pattern in just a few hundred hours of operation. Cam-driven mechanical injectors have been designed to deliver a small pilot injection before the main fuel spray. Because of ignition delay, if an injection system simply begins spraying fuel, quite a bit has been injected by the time some part of it actually ignites. The result is a sharp thump as the considerable fuel in the cylinder lights up at once. This is the traditional Diesel knock which makes older Diesels so loud. A pilot injection sprays a tiny amount of fuel into the cylinder, and when it lights up, the main spray can be ignited promptly by it. As a result, without sudden ignition of quite a bit of fuel, operation is much quieter. No doubt you've noticed this as the Cummins engine in your pickup has evolved from 12-valve, to 24-valve, to the 5.9 HPCR, and now the even quieter 6.7 HPCR. Modern electromagnetic or piezoelectric injection valves also provide a pilot injection, and once the flame is established, they are able because of their speed to follow it with multiple main injections. In engines using an exhaust catalyst, there may be a single late injection as well, whose purpose is to make the exhaust gas hot enough to keep the catalyst "lit" (that is, hot enough to promote complete burning of unburned hydrocarbon in the exhaust stream). Answering all the myriad questions on the way to successful solid injection took serious amounts of time and money, which is why it usually happened in

101

large organizations like MAN, GM, or Daimler-Benz. In this game, having good ideas was just a beginning. Then you needed ultra-precise machining facilities to turn ideas into hardware, plus the cash to afford long series of instrumented running engine tests. With the dual constant demands for greater economy and lower emissions, this work never ends. Turbo Diesel Register Issue 59
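To summarize the multi-event injection strategy this article has been describing, here is a minimal sketch of what one firing cycle's schedule might look like. The crank angles and fuel quantities are invented for illustration and are not any manufacturer's calibration.

```python
# Sketch of a modern common-rail, multi-event injection schedule for one firing cycle.
# Crank angles (degrees, negative = before TDC) and fuel quantities (mm^3) are invented
# illustrative values, not any real calibration.

from dataclasses import dataclass

@dataclass
class InjectionEvent:
    name: str
    start_deg: float     # crank angle relative to TDC
    fuel_mm3: float

CYCLE = [
    InjectionEvent("pilot", -18.0, 1.5),   # tiny charge to establish a flame and quiet knock
    InjectionEvent("main 1", -6.0, 12.0),  # main events stop short of the cylinder walls
    InjectionEvent("main 2", 0.0, 12.0),
    InjectionEvent("main 3", 6.0, 8.0),
    InjectionEvent("post", 40.0, 2.0),     # late event to keep the exhaust catalyst lit
]

total = sum(e.fuel_mm3 for e in CYCLE)
for e in CYCLE:
    print(f"{e.name:7s} at {e.start_deg:+6.1f} deg: {e.fuel_mm3:4.1f} mm^3 "
          f"({100 * e.fuel_mm3 / total:.0f}% of the charge)")
```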


Hitting the New Number What is the price of fuel in your area? We naturally hope that gasoline will hover around $3 a gallon, but each time it has risen above that, we have wondered "Is this the last time? Will we, in three years, look back on $3 gas as we now do upon gas at $1.65?" A rising chorus of voices speaks of "Peak Oil"—the year in which maximum world oil production will be/has been attained, and after which production will drop. Read the books and articles and see what you make of them. We'd like to believe we'll always be middle-aged, healthy, and happy too, but we know that everything changes. If Peak Oil is now, our future could see fuel rise to $4, then $5, and then who-knows-how-high. This is a high-stakes game in which there are bidders prepared to go as far as they have to. China, India, and Indonesia need oil to fuel their fast-industrializing economies. The US and Western Europe, accustomed to hundreds of years of a split have/have-not world with themselves comfortably on top, may be on a collision course with highly productive and ambitious new economies in the East. Have we all the understanding and discipline to negotiate equal access to energy?

The Pacific War, 1941-45, was fought over resources. The US supplied Japan with oil and scrap metals between WW I and WW II, while Japan carved herself ever-larger pieces of Manchuria (coal and iron ore) and China (food and labor). The US warned Japan that China must not become a "greater Japan." We did this so often and so toothlessly that Japanese planners dismissed the US as a paper tiger, its citizens decadent from soft living. Must we call this appeasement? US hands were tied by the Great Depression of 1929, and a strong national mood of isolationism.

When the Japanese seized control of Indochina from the French in July 1941, the US finally took action, cutting off Japan's oil. US leaders and anyone who read past page one in national newspapers knew in that moment that there would be war. Japan's strategic oil reserve was small. Japanese leaders said clearly that they would not stand idly by as Japan was reduced to a third-rate power by energy starvation. War would come soon, but where?

In truth, US planners did their share of underestimating their potential enemy. The idea of a Japanese trans-Pacific naval strike was inconceivable to them because the American "Plan Orange," for similar action against Japan, was itself known to be unworkable. If we couldn't do it, how could they? Japanese carrier pilots arriving over Pearl Harbor could not believe their luck – here were the paper tiger's World War I battleships drawn up in a neat row, and over there were similar rows of equally vulnerable B-17s and other aircraft. Simultaneously, Japanese forces easily took over Dutch oil fields in what is now Indonesia, brushed aside US forces in the Philippines, and erased local British power by seizing Singapore and sending two British capital ships to the bottom in minutes by long-range air attack. The Japanese strategy was to hope that their new access to resources would give them the strength to repel Western responses.

This historical example reveals that energy is deadly serious business. Today there isn't quite enough to go around, and there may be even less in the future. That drives the price up. Congress has decided that conservation can be a useful tool in making energy go farther. One result is the new 35-mpg CAFE standard.

Corporate Average Fuel Economy (CAFE) for autos and light trucks is now mandated by Congress to rise in 2020 from the current 27.5-mpg cars/22.5-mpg light trucks, to a new standard of 35-mpg average for both. There is no serious technical problem in meeting this standard. Low-emissions small turbo-Diesel autos and some of the new gasoline direct injection (GDI) autos will meet this standard now. The 2020 date allows plenty of time to consume the value of existing production tooling and to make a transition to the new

102

equipment that will be required. The real problems will be social. We like our largish cars, SUVs, and pickups as they are, so manufacturers will strain every fiber to create a mix of large and small vehicles that will average out to the 35-mpg standard. Even now, electric motors are being integrated directly into new-design automatic transmissions, so that existing SUVs and pick-ups can be hybridized without much change other than finding space for the battery and extending the floor's tranny bulge even farther aft. Many of us have now driven the new breed of European turbo-Diesel auto. Never revving over 2600-rpm, they accelerate hard and deliver wonderful economy, while having European design flair. Such cars have been late arriving in the US, where tighter emissions requirements prevail. Currently, EPA-compliant new Diesels from Mercedes, BMW, Honda, and Audi are either beginning to arrive or will soon be here. For US makers, the trick will be to find ways to manufacture small, economical cars in numbers sufficient to let them earn most of their profit—as usual—from larger, more fully-equipped models. They will get some help in this by such controversial measures as the exceptions for flex-fuel vehicles. The economy of such machines—able to burn E85 gasoline/alcohol mixture—can legally be multiplied by five before inclusion in the CAFE mix. In general, Detroit has chosen to make large, expensive, relatively fuel-hungry vehicles perform this role, thereby improving their numbers. No doubt there will be other, similar fudging provisions in future emissions law. Diesels were the heroes in the early 1980s because they were economical (very important in avoiding long filling-station lines in the second "oiru shokku" of that time) and emitted little unburned hydrocarbon. Then came a fresh understanding of Diesel particulates—which become visible as black smoke when injector sprays deteriorate or when an owner-operator racks-out an


old-tech injector pump in hope of extra power. The smoke is not just clumps of carbon atoms, left over from the cool fringes of combustion. It also carries a burden of adsorbed multiple-carbon-ring hydrocarbon structures, some of which mimic biological molecules and may be metabolized in humans. Some of these structures are proven promoters of cancer. Hero to zero. Since then the price of US admission for Diesels has included filtering out the particulates and removing or preventing the formation of nitrogen oxides. None of us wishes to be bundled off to the cancer ward or to die coughing in an NOx-facilitated smog inversion, but we don’t exactly welcome the idea of fuel so expensive that all vehicles must sit still until all seats are filled with paying car-poolers. We like our mobility and privacy! Particulate filtration is becoming a mature technology, which leaves the problem of nitrogen oxides. The best plan is not to make them in the first place, which is why new-design Diesels incorporate heavy, cooled EGR. The more cool inert gas we can mix with the air in our engine, the lower will be its combustion temperature, and the lower the production of NOx. I see the pages of auto engineering mags filling up with ads for Diesel exhaust coolers so I know this is nicely making the leap from theory to practice. That leaves the problem of removing enough NOx from the resulting exhaust stream to drop the content comfortably below the current standard—and keeping the value down there as the vehicle ages over a specified lifetime. This kind of painstaking and detailed work is why the automotive industry is the number-one consumer of research and development funding in the world. Not spaceships. Not bio-tech. Not weapons. I recently attended the riding test of a new model of Italian motorcycle where their engineer, discussing the problems of fuel and ignition calibration for racing, said, “Of course for racing this is so

easy, but for production, it becomes very complicated." The German automakers have presented their Bluetec system to the US EPA. It injects urea, which reacts with the NOx to produce harmless atmospheric nitrogen and water vapor. The problem is that the urea tank must be refilled, and EPA doubts drivers have the self-discipline to do this. Does the engine ECU therefore shut the engine down when the urea tank runs dry? Does it sound a beeper or illuminate a red warning light? Does it reduce power to a "limp-home" mode? Honda and others (your Turbo Diesel is included in this group) have chosen the other leading technology, which adsorbs nitrogen oxides on one surface of a multi-layer catalyst trap. Periodically the engine's ECU orders it into brief rich operation, providing fuel to power a reaction that reduces the NOx to harmless form. No urea is carried. As experience with these systems is gained, we can hope that economies of both scale and of improved practice will cut their cost so that compliant Diesels lose their laboratory character and become enduring and affordable solutions. Not so fast! Engineers currently speak of a possible $6000-7000 surcharge for 2020-compliant vehicles. But wait—is this the truth, or is it game theory? Are they really speaking to Congress, hoping the standard may be back-pedaled for economic reasons? Everyone knows that the US economy would stagger (and might even fall) if the domestic auto industry collapsed. Other Diesel technologies that will contribute to this possible future are very high pressure, multi-squirt common-rail injection systems, their injection valves actuated either by electromagnetic or piezoelectric means. It is easy to visualize a Diesel's high compression ratio contributing to its efficiency, but a practical matter intrudes: how long does it take to inject the fuel? If it takes many degrees, the last part of the fuel to enter the cylinder will burn after the piston has descended some distance from

103

TDC. In other words, the late-injected fuel will burn at a lower compression ratio—and the expansion of the resulting combustion gas will begin at a lower pressure and will expand a shorter total distance. This motivates engineers to use faster injection, which calls for more pressure. The nature of this problem becomes somewhat clearer when we reflect that peak combustion pressure in a rifle cartridge may be 50,000-psi, while current common-rail injection systems are at 24,000 to 29,000-psi. Multiple squirts—as many as five per firing cycle—are employed. A small-volume pilot injection quiets combustion by limiting the amount of fuel present in the cylinder when light-up takes place. This is why recently-designed Diesels no longer produce the traditional and noisy "Diesel knock," either at idle or under way. Then three main injections disperse fuel in the combustion chamber, each time stopping short of driving the spray plume all the way to the cylinder walls. Finally a late-cycle injection adds heat to keep the exhaust catalyst hot enough to function. High injection velocity—over 1100-ft/sec—ensures break-up of the fuel spray into particles so small that their huge surface area translates to maximum evaporation and rapid burning of remaining droplets. Diesel combustion is described as a diffusion flame, in which each fuel droplet is surrounded by a halo of evaporation from its surface, and the outward diffusion of fuel molecules brings them into contact with the oxygen of the air charge. Combustion takes place in this mixing zone. The presence of cooled, recirculated exhaust gas in this process lowers its flame temperature—enough, ideally, to prevent the formation of much NOx. The greater the percentage of fuel droplets that are consumed in the diffusion flame process, the smaller the residue of pyrolyzed fuel there is left to cluster as particulates (like the black stuff in the bottom of the toaster). Nothing is perfect! Engineers strive for uniform conditions throughout the


chamber, but rich and lean zones are inevitable. If combustion is improved with a view to cutting particulate production, it burns hotter, increasing NOx generation. If temperature is successfully lowered enough by cooled EGR to result in very low NOx, combustion is less complete and particulate formation accelerates. Pick your poison. In development, engineers make a best estimate of the relative costs of dealing with the two kinds of emissions, and steer their combustion compromise accordingly. Truth here is relative—tomorrow a fresh technology may shift the equation, and new choices must be made. Want uniform mixture above all? Maybe the thing to do is wait for the coming HCCI engine, which promises near-Diesel efficiency with very low emissions. That might mean big savings, for such an engine would need much less exhaust aftertreatment. This is Homogeneous Charge Compression Ignition, a process in which a pre-mixed charge is compressed to a specific high temperature, such that it auto-ignites quickly but not explosively, burning to peak pressure at the usual 14-degrees ATDC. Late stock-car great Smoky Yunick played with this concept years ago, using a heated intake manifold, but today's implementations of HCCI revolve around the recirculation of just the right amount of uncooled exhaust gas. When this is done right, the result is that the added heat of compression in the cylinder is just enough to cause auto-ignition at the right point in the cycle. Because the charge lights up everywhere in "a thousand points of light," there is no flame front as there is with spark ignition. Without a flame front, there is no compression of end gas ahead of it, so there can be no detonation. That allows use of high compression ratio for efficiency—without knock.
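Why charge temperature governs HCCI timing can be sketched with a simple ideal-gas compression estimate; the gamma value, compression ratio, and autoignition threshold below are simplifying assumptions for illustration, not a real ignition model.

```python
# Toy estimate of end-of-compression temperature for an HCCI-style charge.
# Ideal-gas isentropic compression, T2 = T1 * CR**(gamma - 1); the threshold
# temperature and gamma are simplifying assumptions, not a real ignition model.

GAMMA = 1.35                    # assumed effective ratio of specific heats
AUTOIGNITION_K = 1000.0         # assumed rough autoignition threshold for the premixed charge
COMPRESSION_RATIO = 14.0

def end_of_compression_temp(intake_temp_k: float, cr: float = COMPRESSION_RATIO) -> float:
    return intake_temp_k * cr ** (GAMMA - 1.0)

# Hot retained exhaust (uncooled EGR) raises the effective charge temperature:
for label, t_intake in [("cool intake, little hot EGR", 330.0),
                        ("charge warmed by retained exhaust", 420.0)]:
    t2 = end_of_compression_temp(t_intake)
    verdict = "reaches" if t2 >= AUTOIGNITION_K else "falls short of"
    print(f"{label}: ~{t2:.0f} K at TDC, {verdict} the assumed {AUTOIGNITION_K:.0f} K threshold")
```

In this toy model only the warmed charge crosses the assumed ignition threshold, which is the sense in which metering hot residual gas becomes the "spark timing" of an HCCI engine.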

Unfortunately, HCCI presently cannot work at either very low or high loads. At idle there is too little heat to achieve ignition, and at high load there is too much fresh charge in the cylinder to be heated to eventual ignition by EGR. Therefore the current developments center on employing HCCI in constant-load devices such as stationary generators, or using HCCI to achieve high economy in vehicles that spend most of their time at highway cruising speed. The concept is too valuable to ignore because it is able to operate so lean that economy is very high, and that with very low emissions. There will surely be a place for it—perhaps in a "multi-combustion-mode" system—in meeting the 2020 standards. We humans want and need to know everything, but nature resists strongly. Her most potent weapon is that with every new thing we learn, we also uncover new ignorances. These are Don Rumsfeld's famous "unknown unknowns," and there are enough to last the lifetime of our species. In the meantime, we are going to learn a lot more about the tiniest details of Diesel combustion. That may get us down the road just ahead of the EPA. Turbo Diesel Register Issue 60

104


Legal Force and a Crazy Question Before we know it, 2020 will be here, and with it will come legal force behind Congress's new 35-mile-per-gallon Corporate Average Fuel Economy (CAFE) standard. The old standard had been 27.5mpg. This law essentially requires that the fuel economy of an automaker's model year, averaged over all vehicles produced and measured by a specified driving cycle, must equal or exceed the mandated number. Manufacturers are fined in proportion to deviations from this rule. Naturally, there has been a lot of fiddling, for organizations as large as the auto industry can afford a lot of lawyers. One of the most notable is the "FlexFuel" provision, which grants amazing relief to those who produce vehicles able to switch automatically between gasoline and E85 (85% alcohol, 15% gasoline) fuels. Outwardly, this sounds like a fine thing—encouraging makers to produce vehicles which can be fueled in this mostly renewable-energy manner. Now for the fine print. The measured fuel economy of a FlexFuel vehicle may be multiplied by five for its inclusion in the CAFE average. Therefore the industry includes this technology mainly aboard its most fuel-hungry and profitable-to-sell machines—large SUVs. Instead of computing with their actual fuel economy (let's say it's something like 17mpg), they become part of the maker's CAFE as five times that—in our example, 85mpg. Lookin' good! Poof—a painful financial fine to the automaker disappears! The influence of such work-arounds pales when compared with today's $4.25/gallon Diesel fuel and $3.80 gasoline.* No legal sleight-of-hand can transform a $130 tank of fuel into one that costs yesterday's $50. The extra we are now paying for vehicle fuel is large enough that, unless we are very comfortably well-off, we are having to give up something, somewhere, to be able to afford to continue driving as we must. That hurts. Before petroleum busted through the $100-a-barrel level, we thought of 30mpg cars as economical. Now that it costs $40 to fill their dinky gas tanks, we are suddenly interested in the

details of fuel economy and what is, or could be, done to increase it.
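As a toy illustration of the FlexFuel accounting described above, the small Python sketch below averages a made-up three-model fleet with and without the credit. The fleet figures are invented for illustration; the real CAFE calculation is a sales-weighted harmonic mean, and the actual statutory FlexFuel credit formula is more involved than a flat five-times multiplier, but the sketch shows why the provision is so attractive to a manufacturer.

    fleet = [
        # (model, units sold, tested mpg, FlexFuel capable?) -- invented numbers
        ("large SUV",   400_000, 17.0, True),
        ("pickup",      300_000, 19.0, False),
        ("midsize car", 300_000, 28.0, False),
    ]

    def cafe_mpg(vehicles, flexfuel_multiplier=1.0):
        # Sales-weighted harmonic mean: total vehicles / total gallons-per-mile.
        total_units = sum(units for _, units, _, _ in vehicles)
        gallons_per_mile = sum(
            units / (mpg * (flexfuel_multiplier if flex else 1.0))
            for _, units, mpg, flex in vehicles
        )
        return total_units / gallons_per_mile

    print(f"fleet CAFE using actual mpg:          {cafe_mpg(fleet):.1f} mpg")
    print(f"fleet CAFE with a 5x FlexFuel credit: {cafe_mpg(fleet, 5.0):.1f} mpg")

With these invented numbers the paper average jumps from about 20mpg to about 32mpg without a single extra mile per gallon being delivered at the pump.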

Should we not demand more from our elected officials?

We’ve all heard about checking tire pressure and wheel alignment, avoiding “jackrabbit starts” (lovely PR phrase from the 1950s), driving 10mph slower than usual, and not carrying wintertime’s traction sandbags around all summer. What will all that get us? One or two miles per gallon—maybe. And yes, we’ve heard all the blather about “If all Americans would turn down their water heaters, dry their clothes outdoors, and wash with a damp rag instead of taking showers, we could save 80 zillion gallons of this or that.” Just try getting all Americans to do anything.

Excuse me while I ponder the national fuel tax holiday idea.

Another red herring is overblown talk of "alternative energy." Get on the Internet and do a little research. You soon find that the energy categories are oil, gas, coal, nuclear, and "other." Other is a very small number, and includes wind, tidal, geothermal, composted carpet fluff, and fry oil begged from greasy-spoon restaurants. Other increases very slowly. Politicians talk loudly of hydrogen and crowds applaud just as loudly. But do we see televised ceremonies, showing giant coal-fired electricity generating plants being shut down because so much wind, geothermal, and other has come on-line that their "dirty power" is no longer needed? No, and you won't see such a thing for a long time, if ever. That's because it takes a lot of wind farms, carpet fluff, and fry oil to equal the energy in two trainloads of coal—the 24,000 tons of black rocks that a big coal-electric plant burns in one day. Energy use in any industrial nation with a high standard of living is huge. Oil is not an "addiction"—it is an absolute necessity for such nations. If they cut down on oil use, they just have to burn more coal, find more gas, or build more of the ever-popular and easy-to-manage nuclear plants which are perfectly safe. Editor's note: Kevin's hit upon an idea that I believe makes sense to the majority of TDR readers. But, we've pounded this drum before—where is the US's national energy plan?

105

What will/can the vehicle industry do in response to the new CAFE standard? Naturally, they get their lawyers, lobbyists, and other types of good buddies on the case to see what kind of interpretive hair-splitting will buy them how much time before the full force of the law collides with their absolute need to keep right on selling their most profitable vehicles—namely, large SUVs and pickup trucks. Some arm-twisting is likely to work here for at least a while, as experienced folk in government know that the last thing the current economy needs is to hear that BMW is buying the Cadillac division of GM, or that Chrysler has closed or been sold to a consortium of Chinese buyers. Therefore the deal will probably not be as clean as the 35mpg number makes it sound. Every system of laws—national, religious, sporting—generates an active system of hair-splitting that softens its effects. Every engine now in production has been through specific fuel consumption testing. The result of such testing is summed up in a performance map, relating specific fuel consumption (called brake specific fuel consumption [BSFC], it is measured in pounds burned per horsepower developed, per hour of operation) to engine rpm and load (throttle opening or averaged combustion pressure). The result of such mapping is a series of curves of constant BSFC, and roughly at their center is the "island" of minimum BSFC. This is the engine's most efficient point of operation, and it can for a variety of reasons be as much as 2 ½ times more fuel efficient at this point than it is at its least efficient points. Why should this be so? Isn't compression ratio the principal determinant of fuel efficiency? Here's how it works. Friction rises rapidly with rpm because inertia loads on pistons and bearings increase as the square of speed. That means


operation at higher rpm sacrifices ever-rising power to friction. We also have to keep away from very low rpm—when heavily-loaded parts such as cam lobes and valve tappets operate at the bottom of their speed range, there is time for most of the oil film between them to be squeezed out. Why doesn't this ruin the parts immediately? Modern oil additives take up where the oil film marginally leaves off—but friction rises because the additives are solids, not liquids. This is why automakers are turning to roller tappets—to avoid some of the friction rise that occurs at low rpm in this way. An engine's friction curve is sometimes described as a "bucket"—low in the middle, higher at the sides. That leaves us somewhere in the middle. How about throttle opening, or "load," as the engineers term it? Well, again the news is that the middle is better than the extremes. At very high load, heat loss increases. At very low load, much of the moderate combustion pressure being generated is used up in overcoming piston-ring and other friction. Worse yet, in gasoline engines, intake throttling causes so-called "pumping loss" to increase—the power consumed in pulling a partial vacuum in the throttled cylinders. Incidentally, you can see from this paragraph why it is more efficient to turbocharge a smaller engine than it is to either rev the engine higher or make it bigger as a means of getting more power from it. The higher the engine revs, the more power it loses to friction, and a bigger engine has bigger and heavier everything, which also increases friction. Turbocharging is morally good! The result of all these pushing and pulling variables is that there is this island of minimum BSFC somewhere in the roughly left-middle of the performance map. Wouldn't it be lovely if we could somehow operate the engine only at that speed and load? The obvious problem with this is that the vehicle's transmission has a finite number of speeds, so the engine's rpm must rise and fall—often across

a rather wide range—in the process of accelerating the vehicle from rest to cruising speed. Economy cars are geared to operate their rather small engines right on their island of minimum BSFC at highway speed. This is part of the reason why such a car gets its best fuel economy on the open road, not on local two-lane roads where average speeds are lower. A radical solution would be to connect the engine to a generator, operate it only at its point of best BSFC, and use the resulting electric power to both drive the vehicle and charge an on-board battery. When the battery was fully-charged, the combustion engine would simply shut down, starting up again only when needed. This is a rough description of so-called "series hybrids" such as Chevy's awaited Volt. They attempt to operate their combustion engine only at or very near to its most efficient operating point. Another approach is to use an on-board battery-electric drive to power the vehicle mainly in conditions in which the combustion engine's efficiency is especially bad. This is roughly how "parallel hybrids" such as Toyota's Prius operate. Either power source can drive the vehicle separately, or both may do so together. The combustion engine's fuel efficiency is poor at low part-throttle, so the electric drive operates at such low loads, with the combustion engine taking over as the load required moves into its more efficient realms. In both cases, the combustion engine is downsized because reduced peak power can be at least partly made up by using both the combustion engine and battery power during times of peak load, such as acceleration up highway on-ramps. This downsizing helps to avoid the worst feature of traditional V8-powered American cars and light trucks—namely, that of their typical 300 horsepower, only 10% was used at highway speeds. This pushed their operating point down to an inefficient zone. Think of it as dragging around the friction of a 300hp engine to make only 30hp of actual power.
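The 300hp-versus-30hp picture can be put into a crude model. Assume, as a round illustrative number, that friction and pumping soak up a fixed 30hp at a given cruise rpm regardless of load; mechanical efficiency is then brake power divided by the power the combustion actually develops. Real friction varies with both speed and load, so this is only a sketch of the trend.

    FRICTION_HP = 30.0  # assumed friction + pumping power at a fixed cruise rpm

    def mechanical_efficiency(brake_hp, friction_hp=FRICTION_HP):
        # Fraction of the power developed in the cylinders that reaches the road.
        indicated_hp = brake_hp + friction_hp
        return brake_hp / indicated_hp

    for road_hp in (30, 60, 150, 270):
        print(f"{road_hp:>3} hp delivered -> mechanical efficiency "
              f"{mechanical_efficiency(road_hp):.0%}")

At 30hp of road load half the fuel's work goes to spinning the engine against its own friction; at 270hp the loss shrinks to about ten percent, which is the downsizing argument in miniature.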

106

Much is made of the hybrid's ability to recover some energy by so-called regenerative braking. Rather than always relying on normal dry friction brakes for deceleration, such vehicles can employ their drive motors as generators, converting vehicle kinetic energy back into chemical energy stored in their battery. The less-shiny truth is that only about 30% energy recovery is currently achieved in this way. Another truth is that your use of the brakes is minimal as you practice conservative driving techniques. The most notorious problem of hybrids is that the buyer must purchase two engines with the vehicle, not one as formerly. The cost penalty for this is currently estimated at $6000 for a small automobile. If hybridization boosts the average fuel economy of a small car from 25mpg to 40, with gasoline at $3.80 a gallon that amounts to a savings of $.057 per mile, or on a 15,000 mile driving year, $855. Sounds pretty good! But when we divide that $855 into the hybrid's $6000 cost premium, we find we must keep the car seven years to break even (because $6000 divided by $855 is about 7). We fret about the tricky politics of oil-producing nations (Saudi Arabia, Iran, Iraq, United Arab Emirates, Kuwait, Venezuela, Russia, just for starters), but the batteries presently used in hybrids (nickel metal hydride) contain fair amounts of cobalt and lanthanum. There is some cobalt in the US but larger amounts are found in Russia and the Congo. A principal source for lanthanum is China. Why does everything have to be so complicated? Good luck to us all. The automakers don't like the options they have for increasing fuel economy. Hybrids cost more because they have two engines instead of the traditional one, and Diesels cost more because their turbochargers are made of trick metals, their structure and moving parts have to be extra-beefy and made of premium materials, and their fuel injection system operates at huge pressures and so costs


more to make than does gasoline fuel injection equipment. Oh, and don't forget all the emissions problems—Diesels need particulate exhaust filters and some means of trapping the nitrogen oxides they naturally generate in abundance, or reacting them to harmlessness. Only a few European Diesel cars and the planned Honda Diesel meet current US emissions requirements, but high fuel prices will encourage others to believe they can afford to sell compliant Diesels into this market. Meanwhile considerable research and development is being expended on a combustion scheme that might deliver near-Diesel fuel economy but at lower equipment and emissions cost. The current direction in gasoline engine development is toward very lean operation, but because uniformly lean mixtures are so difficult to ignite, the easy way to success has been with mixtures that are non-homogeneous—rich enough to ignite in the vicinity of the spark plug, but lean overall. The trouble with this is that the hotter combustion where the mixture is richer always generates some hard-to-remove nitrogen oxides. Isn't there a way to ignite uniformly very lean mixtures? Until the early 1990s, the official answer was no, and professional engineering societies were tempted to reject for publication papers that suggested otherwise (ignorant foreign crackpots!). When the dam broke, people realized that if a very lean mixture of gasoline and air were properly heated before compression, it would reliably and rapidly ignite at a certain point in the stroke. (One of the people who worked on this idea was the late Smoky Yunick.) This is currently called HCCI, which stands for Homogeneous Charge Compression Ignition, a process very different from Diesel combustion. In Diesel combustion, pure air is compressed until it is hot enough that fuel injected into it as a spray will ignite. In HCCI, it is a heated and uniform lean mixture of air and fuel that is compressed.

Because Diesel combustion is a process of droplet evaporation, with fuel molecules diffusing away from the droplet until they encounter enough oxygen to begin burning, all fuel mixtures from pure fuel (in the droplet) to pure air (far from the droplet) are present. That means that at least some very hot, chemically-correct combustion will take place, which is what generates the troublesome nitrogen oxides in Diesel exhaust. Because fuel and air are uniformly premixed in HCCI combustion, combustion is very lean everywhere—lean and therefore cool. This can essentially stop nitrogen oxide production, saving the engine developers from having to tack on expensive exhaust post-treatment devices to remove it. There's always a catch. HCCI is made to work by adding measured amounts of hot exhaust gas to the fresh charge, so that it ignites at the proper point near the end of the compression stroke. But at idle or low load the tiny amount of fresh charge can become so diluted that it doesn't ignite. And near full throttle there is so much charge in the cylinder that it cools the added hot exhaust gas, again causing ignition to cease. So it appears—at least for the moment—that HCCI may have to be used in a dual-mode engine that switches to spark ignition and normal mixture at the bottom and top of its load range. After a while, the constantly increasing complexity of what must be done to meet emissions begins to seem self-defeating. To clean up Diesels or spark-ignition engines, or to make HCCI work, it seems we must create whole new industries and technologies, whose factories, product shipping, and commuting workers consume power and materials while generating waste. Can we say for sure whether this results in a net gain in quality of human life, and in benefit to the environment? Maybe it's just a crazy question—like "Is it hotter in the city, or in the summer?" and I should just shut up, get with the mainstream, and

107

help to make this magazine, too, just an anthology of cheerleading press kits. Turbo Diesel Register Issue 61


Shooting Up For Diesels? While we await the results of independent testing to confirm or deny the value of water-methanol injection for Diesel power and economy (see Doug Leno’s “Water Methanol for Fuel Economy,” page 126), let’s have a look at the long history of such injection. First of all, the use of alcohol as fuel for internal combustion engines goes back a long way—to special ethanol races held in France around 1908, as a possible means of boosting the profitability of agriculture. (Sound familiar?) Something more substantive was done by engine pioneer Harry Ricardo around 1920. He was struggling with the cooling and combustion problems of spark-ignition engines, and his associate Frank Halford commissioned a special top end, to be fitted to Halford’s single-cylinder Triumph 500-cc racing motorcycle. At this time almost all motorcycles had air-cooled iron heads and cylinders, and ran pistons cast of iron or steel. The low heat conductivity of iron made these engines run very hot, and that, combined with the low knock resistance (we now call this Octane Number) of available gasolines, dictated that compression ratio stay at low numbers like 4 or 5-to-one. Ricardo attacked the problem from all points of the compass. He dealt with engine temperature by making piston and cylinder of aluminum and the head of aluminum-bronze (a 48% improvement in heat conductivity over iron). The aluminum cylinder had an iron liner. He dealt with octane number by brewing up an alcohol-rich fuel that strongly resisted detonation. The resulting “Triumph-Ricardo” race engine was able to run 8-to-one compression from which it derived high torque and good fuel economy, beating machines twice as large in both short and long races. Naturally everyone wanted some of that, so Ricardo craftily arranged two sources for his patent fuel—both making an identical blend. In a manner reminiscent of the arguments of party politics, racers endlessly debated which of these two fuels was the better—little knowing that they were the same.

If we now pull out our "Handbook of Chemistry and Physics" and compare the energy content of gasoline with that of alcohols, we will be puzzled. The alcohols contain only about 2/3 of the energy of the gasoline hydrocarbons, by volume. Alcohols are structurally hydrocarbons with substitution of an -OH group for one of the hydrogens. The lower energy of alcohols as compared with their corresponding hydrocarbons arises from the presence of this oxygen—in effect, an alcohol is a partially burned hydrocarbon. Some of its original energy has been used up by combining with oxygen. Now we're even more puzzled. If alcohols contain only 2/3 as much energy as gasoline, how did the Triumph-Riccy win all those races? And why do dragsters in alcohol-fueled classes make more power than their gasoline-fired equivalents? And alcohol-class racers report that their engines run much cooler. How can alcohol yield less energy, yet make more power, while resulting in lower engine temperature? The answer lies in a special property of alcohols: their high latent heat of evaporation. It takes some heat to evaporate gasoline—that's why, if you get gasoline on your fingers, they feel cool. But much more heat is required to evaporate alcohols. Now it gets a bit more complicated. Because alcohol contains less chemical energy than gasoline, we have to use a lot more of it to burn up all the air our engine is pumping. This need to use a lot of alcohol adds even more to this heat-of-evaporation affair. Indeed, the complete evaporation of the alcohol necessary to make a chemically-correct fuel-air mixture refrigerates the air by more than 400-degrees F. The extra power from the use of alcohol fuel comes from this refrigeration effect, which shrinks the fuel-air charge so that much more of it can be fed into the engine's cylinders. The cool running of alcohol-fueled engines comes from their lower flame temperature—also the result of the severe refrigeration of the initial fuel-air charge.
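A back-of-envelope version of that refrigeration effect can be run with round, assumed property values: a latent heat of vaporization and a chemically-correct air-fuel ratio for each fuel, with all of the evaporation heat charged to the incoming air. At chemically-correct mixture the sketch lands near 300 degrees F for methanol; the richer mixtures racers actually run push the figure further toward the several-hundred-degree number quoted above.

    CP_AIR = 1.005  # kJ/(kg*K), specific heat of air at constant pressure

    fuels = {
        # fuel: (latent heat of vaporization kJ/kg, stoichiometric air-fuel ratio)
        # assumed round, textbook-style values
        "gasoline": (350.0, 14.7),
        "ethanol":  (900.0, 9.0),
        "methanol": (1100.0, 6.45),
    }

    for name, (latent_heat, afr) in fuels.items():
        # Charge all the heat of evaporation to the air the fuel is mixed with.
        dT_C = latent_heat / (afr * CP_AIR)
        print(f"{name:>8}: roughly {dT_C:4.0f} C ({dT_C * 9 / 5:4.0f} F) of charge cooling")

The gasoline line comes out at a few tens of degrees, which is why the effect is barely noticed there, while the methanol line is roughly seven times larger.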

108

Now let's jump to the 1930s. Aircraft engine makers are struggling with the problems of heat and detonation more than anyone else because their engines must give maximum power for a full five minutes during take-off, then give something like 85% power during climb to altitude. If an engine is ever going to overheat and detonate itself to pieces, this is when! This was made all the worse by the coming of supercharging. Those were desperate times—so much so that many engineers thought the Diesel engine—its torque not limited by detonation-dictated low compression ratio—might be the only way forward. Then new technologies came to the rescue. Thomas Midgley, over at Delco, came up with the anti-knock agent tetraethyl lead, and S.D. Heron at McCook Field came up with the sodium-cooled exhaust valve. Those advances eased the detonation situation a lot. As supercharging raised engine temperature and pushed conditions toward detonation, engineers hit on the idea of enriching the fuel mixture during take-off. The extra fuel couldn't burn—there wasn't enough oxygen in the cylinders for that. But its evaporation would reduce the temperature of the charge reaching the cylinders—and that allowed more supercharger boost to be used without pushing the engine into detonation. There were limits to this. The biggest of them was that mixtures of gasoline and air richer than about 10-to-one cannot be ignited by a normal spark. That meant that you could enrich the mixture by about 40%, but make it any richer than that and misfiring would begin. The Handbook of Chemistry and Physics has been on a lot of shelves for many years, so it wasn't long before engineers looked into it for something they could inject into engines running on takeoff power that would have an even greater cooling, anti-detonant effect than ordinary mixture enrichment. Water was a leading candidate because it requires a whacking great amount of energy to


evaporate (that is, to boil). For each gram of water evaporated, we must supply 540 calories of heat. Ah, but airplanes may fly up high where the air temperature is very low. It wouldn't work too well to have the water/anti-detonant system freeze up, or even burst its tankage and plumbing. An anti-freeze was needed, and the convenient one was methyl alcohol—in a 50/50 mixture. Now as pilots eased their throttles forward to take-off power, the control diaphragm of a water-injection regulator sensed the high manifold pressure and began to flow water. Despite the supercharger stuffing the cylinders with extra mixture and heating the charge air by compression, no detonation occurred because evaporation of water-methanol was pulling down the temperature of the charge air so much. Once take-off and initial acceleration were complete, the pilot made the first power reduction and, at the new conditions, detonation became less likely, so the water regulator shut off the water-methanol injection. Even more recently we have the case of the Reno air racers, with their 4000-hp P-51s and 4500-hp Bearcats. To make more power, they perform miracles of machine-shop improvisation to adapt a three-gear supercharger drive to an 18-cylinder radial, enabling most of 1000-hp to be sent to the supercharger, to compress and cram mixture into the cylinders. Now, how do they keep this overstuffed engine from detonating? All that compression has heated the intake charge a lot. To avoid detonation it must be cooled, and by cooling the charge, it is also "shrunk" in volume and made easier to push into the cylinders. How shall we cool it? One way is to inject ever-more water-methanol. The other is to directly cool the charge with an air-to-air intercooler. Inject too much water and you become the fire department—every gram of water takes away 540 calories that could have been making power. (A rough back-of-envelope comparison of these numbers appears at the end of this column.) But

scooping air through an intercooler can very easily cost hundreds of horsepower in aerodynamic drag. Damned if you do, damned if you don't. Okay, now we'll leave those guys sweating that problem out in the desert, and we'll turn to our own Turbo Diesel trucks. Why would you inject water-methanol into a Turbo Diesel engine? Diesels don't detonate, so we're not doing it to suppress detonation. What other reason might we have? Well, Diesels designed to meet emissions regulations burn their fuel in the presence of about 20% excess air. At Bonneville and other racing venues, where there are no clean-air teams from Washington DC and Diesel racers adjust their fuel systems to deliver extra fuel, some power is made with that 20% extra air. You can tell when they're doing it because, as the song says, their "exhaust is blowin' black as coal." The year I went to Bonneville, it took most of an hour for the long black cloud to drift away after a V-16-powered streamlined truck made a 250-mph run. Instead of adding extra Diesel fuel, you could add any other fuel that wouldn't detonate, to combine with some of that excess air, release extra energy, and make a bit more power. Another reason you might have is that your charge air temperature is still high despite filling your whole under-hood space with intercoolers and hot and cold air ducts to serve them. Like the air racers, you might want to "shrink" your charge a bit by cooling it with water-methanol injection. Or, if you're a racer and have run into some special piston or exhaust valve temperature problems, you might inject the stuff as a general coolant. And, last but not least, light truck Diesels don't run on full throttle 100% of the time, and when they don't, they have even more than 20% excess air present in their cylinders. It would be tempting, with Diesel selling for $4.50 a gallon, to squirt in something that is (a) cheaper, (b) contributes useful power, and (c)

109

doesn't detonate as a result of the Diesel engine's very high compression ratio. Objections? The first would come from the predictable emissions standpoint. Everything in Diesel combustion today is most carefully monitored so that the fuel control can perform its series of 4, 5, or more separate injections per power stroke, timed and sized to result in emissions within legal limits. Water-methanol injection will change at least some of the variables (charge temperature, timing of first ignition, rate of heat release), and it would be a miracle if everything stayed the same. A lively debate at the moment concerns the EPA's skepticism about the automakers' claim that buyers of Diesel-powered vehicles with the upcoming Selective Catalytic Reduction (SCR) emissions control will always faithfully remember to fill up the urea tank as well as the main fuel tank. Some say that a warning light will do the trick. Others call for a power reduction and limp-home mode if the urea tank becomes empty. Hard-liners want the engine (and its emissions) to stop when the urea runs out. If Diesel engines were emissions-optimized to operate with water-methanol injection, why not just have them switch to a Diesel-only program when the W-M tank empties? Because that requires two expensive emissions development programs instead of one. Who's paying? I'm not saying this can't or shouldn't be done—just that there will be a few objections. Until the police take to patrolling the highways with remote emissions sensing equipment, or until mandatory on-board systems report via cellphone all the details of operation to the dealer or manufacturer, you will remain free to do as you wish with your own equipment—except at motor vehicle inspection time. Turbo Diesel Register Issue 62
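As promised above, here is a rough piece of energy bookkeeping for the fire-department tradeoff, using assumed round values for latent heats and heating values (1 calorie taken as 4.186 joules). It is only a sketch of magnitudes, not an emissions-legal recipe.

    CAL = 4.186  # joules per calorie

    LATENT_J_PER_G = {"water": 2260.0, "methanol": 1100.0}        # heat to evaporate
    HEATING_J_PER_G = {"methanol": 19_900.0, "diesel": 42_800.0}  # heat released burning

    evap = 0.5 * LATENT_J_PER_G["water"] + 0.5 * LATENT_J_PER_G["methanol"]
    burn = 0.5 * HEATING_J_PER_G["methanol"]

    print(f"evaporating a gram of 50/50 water-methanol soaks up ~{evap / CAL:.0f} calories")
    print(f"burning the methanol half returns ~{burn / CAL:.0f} calories per gram of mix")
    print(f"for scale, a gram of Diesel fuel releases ~{HEATING_J_PER_G['diesel'] / CAL:.0f} calories")

The water half is pure heat sink, close to the 540 calories per gram cited in the column, while the methanol half both cools on the way in and gives energy back in combustion, which is part of why it rides along as more than just anti-freeze.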


Hoping Back in the old days we could make power by turbocharging an engine to the desired air density and injecting fuel up to just short of the smoke threshold. Then we discovered the effects of particulates and the smog-forming power of nitrogen oxides. The first response was to use after-treatment devices to filter out, burn up, or chemically convert to harmlessness the unwanted emissions. But as it is more efficient not to make the unwanted stuff in the first place, major effort has gone into finer, higher-velocity fuel injection sprays, multiple spray events, and accurate process control by means of electronics. This was good, but there was more to come. Now they tell us that future engines must operate at much higher pressures. This is because (a) engines will be made smaller while remaining just as powerful as a means of cutting friction's share of the action, and (b) lots more cooled exhaust gas will be added to the cylinder charge as a means of lowering flame temperature in order to cut nitrogen oxide generation. Lots of gas in a smaller engine adds up to increased pressure. I hope that the necessary research and development (R&D) work will be performed here in the US by American engineers. VW ran a $200 million program to develop the necessary rugged but low friction bottom end required for this kind of operation. Do American companies have what it takes to push through such programs? Doctors now routinely e-mail their X-rays to India, where they are interpreted at lower cost by Indian doctors. Are American industries looking at proposals from Indian or Chinese universities or consulting firms to handle US engineering R&D? Here we are split against our own rhetoric. On the one hand, 100% red-blooded Americans are required to believe that the free market uniquely delivers the goods at the lowest possible cost. That may mean giving the R&D contract to well-trained and hard-working (not to mention deserving) persons at Shenyang Engineering Institute, and telling our US engineers their price is too high.

On the other hand, common sense tells us that if too many American incomes take the down elevator in this way, there are fewer people able to buy what is manufactured. What is worse, if valuable knowledge-centered employment is offshored as much of our manufacturing has been, our nation will become less and less technically capable as time passes. When young people seek career planning advice today, they are no longer told to study engineering and the sciences. They are told the hot areas are nursing, law enforcement, and business computer services. We had it good for 25 years after WW II because while other industrial nations had been smashed by war, our productive capacity was never higher. The US remained the producer of lowest cost—selling easily all over the world— until around 1970. That is approximately how long it took European and Asian producers to pick themselves up, rebuild at first by hand (cheap labor!), and then use what they’d earned and saved to install the very most efficient automated production systems and R&D equipment available. Then in one area after another, foreign nations replaced the US as the producers of lowest cost. The result was reduced sales and income for many US producers, and bankers responded by red-lining whole areas of manufacturing. Hmm, maybe we could return to profitability by replacing our outdated 1950s equipment? “Nope, sorry, we can’t lend you money for that sort of thing—the returns can’t even beat inflation.” Just as engineers work to solve the immediate problems stopping their progress, investors responded by shifting their money into new activities offering higher returns. Mergers and acquisitions heated up. Buy companies, use their cash to pay the note, sell what can be sold and close down the rest—a few losses provide useful red ink at tax time. But be careful—bankers watch your stock price.

110

When puzzled citizens asked why so many familiar industries were closing down, they were told we've entered a sensible new era of "globalization." "Y'see, if the South Koreans have the best price for steel, they become steelmakers to the world. If Japan or Germany has a genius for auto making, they build all the cars. Nations with less efficient steel or auto industries will naturally close theirs down. Here in America, we've moved past your granddad's smokestack, rust-belt industries to a fresh, sunny upland—the 'service economy.' Sure, there'll be momentary hardship for some of us, but in the long run it'll all come out just right. Wait and see." We've been waiting. One piece of advice we've heard is that we must "all become entrepreneurs." Haven't I heard something like this before, reverberating through history? Why yes—it was former president Herbert Hoover, implying that Americans thrown out of work by the Great Depression of 1929 might "sell apples on street corners." After the mergers and acquisitions came our present day, when so much money is to be made simply by speculating in money—derivatives, currency speculation, and the mysterious "CDOs," or collateralized debt obligations. They tell us that international flows of capital exceed world trade by about 50-to-one. And history suggests to us that when so much money-making is disconnected from useful products and services, little market oscillations or shifts in confidence can quickly develop into big golly-wobbles. I want very much to see the coming developments in Diesel engines, and I am hoping that this fascinating work will be published in a language I can read. I am proud to see that Cadillac gathered itself up, shook off its past of 6000-pound road-rollers with 8 mile-per-gallon 500-inch cast-iron pushrod V8s, and developed entirely new engines and systems that earned public trust and recovered a decent market share. If—as appears likely—the US needs fleets of


moderately-priced small, highly-efficient autos powered by low-emissions turbo-Diesel engines, I hope to see them developed and manufactured here in the US, by American engineers and factory workers. While there are still enough of them with the education and experience to get the job under way. Turbo Diesel Register Issue 63

111


Diesel Alternatives – Making the Choice Over the past few years a number of competing combustion systems have entered the arena of choice. Originally the choice was only between spark-ignition and Diesel, but now we read about gasoline direct injection (GDI), homogeneous charge compression ignition (HCCI), stratified-charge, and lean-burn. At first, research seemed to be seeking ultimates—the lowest possible fuel consumption, the minimum of emissions. Today, with cost looming ever-larger as the controlling factor, and under the continuing pressure of such initiatives as the corporate average fuel economy (CAFE), ultimates mean less than finding the least expensive way to power whole fleets of vehicles across a range of weights and applications.

Diesel engines are highly fuel-efficient and generate mighty torque, but they are also fairly heavy for their power, require an extremely high-pressure fuel injection system, and need complex and expensive technologies with tongue-twister names to clean up the nitrogen oxides that their hot combustion produces, and to remove particulates.

HCCI promises Diesel-like economy from a lighter, cheaper engine—but when? The basic concept is to mix just enough still-hot exhaust gas with fresh charge, then let compression auto-ignite it almost uniformly. Because ignition takes place throughout the charge volume and not at a single point, there is no flame front and therefore there can be no detonation. Because of its exhaust-diluted charge, HCCI combustion is cool, generating little NOx and so requiring only moderate emissions technology. But progress in extending the range of its ability to fire in this way down to idle and up to full load is slow. Such engines may always be dual-mode, with spark assist at full load. Development continues—which means producible engines are at least a couple of years away.

GDI is attractive because it can be adapted to existing engines, but it does require a fine-particle-size injector that is also fast enough to form the mixture inside the combustion chamber—not upstream in the intake flow, as is the case with ordinary electronic fuel injection. GDI increases volumetric efficiency by taking in only air during the intake stroke—no fuel. At one time it was believed that adding fuel to the intake air well upstream from the cylinder refrigerated the air by the cooling effect of fuel evaporation, allowing more air mass to fit into the cylinder. But in fact it turns out that the presence of liquid fuel in the intake stream efficiently gathers heat from manifold interior surfaces, heating the charge and reducing, not increasing, its density. Air—an excellent insulator—picks up less heat by itself. The resulting cooler charge is more tolerant of higher compression ratios, making possible increased torque.

Cadillac, as part of its program to gain a younger market with higher-tech engines and vehicles, developed its CTS V6 engine around GDI. The higher air-taking ability of this engine gave it the power of a 20% larger non-GDI powerplant. Now Ford is combining GDI with turbocharging in what they are calling "EcoBoost." Making an engine smaller reduces friction by cutting the number and size of friction-generating components such as pistons, piston rings, bearings, and valve gear—but it also reduces power and torque. Adding a turbocharger allows the lost power and torque to be recovered, so there can be a net savings of friction. The result is a more fuel-efficient, but just as powerful engine. Combining such a downsized, but turbocharged, engine with GDI recovers even more power. The result, Ford claims, is V6s with the power of V8s, but with "up to 20%" better fuel economy. Because such engines are still fairly economical to build, the overall result can be valuable, affordable gains in CAFE for the manufacturer and attractively lower fuel consumption for the end user.
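A first-order feel for the downsizing-plus-boost arithmetic: to match a larger naturally-aspirated engine's airflow at the same rpm, the smaller engine needs its intake charge density raised roughly in proportion to the displacement ratio. The displacements below are assumed examples, not any particular production engine, and the sketch ignores intercooling, exhaust backpressure, and volumetric-efficiency differences.

    ATMOSPHERE_PSI = 14.7

    def gauge_boost_to_match(large_displacement_l, small_displacement_l):
        # Crude estimate: treat charge density as proportional to absolute
        # manifold pressure, so density must rise by the displacement ratio.
        density_ratio = large_displacement_l / small_displacement_l
        return density_ratio * ATMOSPHERE_PSI - ATMOSPHERE_PSI

    boost = gauge_boost_to_match(5.0, 3.5)
    print(f"a 3.5L engine needs roughly {boost:.1f} psi of boost to breathe like a 5.0L")

With these assumed displacements the answer is a modest six or so psi, which is why the smaller engine can match the bigger one's peak output while carrying around much less friction the rest of the time.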

The stratified-charge engine achieves economy gains by making it practical to operate at very lean air-fuel ratios. Sad to say, as combustion is made more and more intense—for example, by supercharging a spark-ignition gasoline engine—we don’t gain power in direct proportion to the mass of charge burned.

112

What happens is that the conversion of combustion heat into cylinder pressure becomes less efficient the more intense that combustion becomes. We normally think of temperature as the average energy of the zillion colliding, zooming gas molecules in the hot combustion gas—but this is only an idealization. This zooming motion of the molecules is what translates into the pressure that drives pistons—the sum of all the molecular collisions with the piston crown. But in fact, as the combustion gas is made hotter, more and more thermal energy goes into rotation of the molecules, and into their internal vibrations. Because these motions translate less well into pressure on pistons, hotter combustion is less efficient than cooler combustion. (A small numeric sketch at the end of this column puts rough figures on the effect.) And the more we dilute our fuel with air (provided we can still make it burn) the cooler will be the resulting combustion, and the more efficiently this cooler combustion heat will translate into piston push. Then trouble begins. Because a lean-burn engine adds less fuel to each cylinder-ful of air, it needs to be bigger to equal the power of a conventional engine. That means extra weight. And a lean mixture is hard to ignite, requiring special technology. The answer was to stratify the charge—to make it quite lean overall, but locally rich enough to be ignited by a conventional spark plug. That leads to more potential trouble, because any time you burn a normal, chemically-correct mixture you burn hot—and generate NOx that is troublesome to clean up. Round and round it all goes. The game is to find the technology or combination of technologies that requires the least overall cost—cost of research, cost of materials, cost of production, cost of fuel consumed. Today, the scarcity of money makes basic research less attractive than somehow improvising ways to adapt what is already in production to perform in new ways with a minimum of technology. The probable result is that basic research will continue, but at a reduced pace, while companies


concentrate on staying afloat and solving their technological problems in the cheapest ways and in the shortest time. Later, when (if?) economies revive, basic research can return to a faster pace. Turbo Diesel Register Issue 64
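The rotation-and-vibration effect described in this column shows up in the textbooks as a lower ratio of specific heats, gamma, for hot combustion gas. Holding compression ratio fixed and swapping in two illustrative gamma values, a minimal sketch (assumed figures, ideal-cycle only) puts rough numbers on the lean-burn advantage:

    COMPRESSION_RATIO = 10.0  # assumed, held fixed to isolate the gamma effect

    def ideal_efficiency(gamma, r=COMPRESSION_RATIO):
        # Air-standard Otto efficiency: 1 - r**(1 - gamma).
        return 1.0 - r ** (1.0 - gamma)

    for label, gamma in (("hot, chemically-correct combustion gas", 1.25),
                         ("cool, heavily air-diluted lean charge", 1.35)):
        print(f"{label}: gamma {gamma} -> ideal efficiency {ideal_efficiency(gamma):.1%}")

The cooler, leaner charge comes out more than ten percentage points ahead at the same compression ratio, which is the prize all of these combustion systems are circling.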

113


Cummins, Chrysler, Fiat According to a recent press release, Fiat hopes to triple its auto production, which is now 2.2 million per year. The company already has branch or joint venture production operating in South America, India, China, Russia, and Eastern Europe. The obvious goal is to emerge from the present turbulent times as one of a reduced number of very large surviving world manufacturers. Their pending plan to take a 35% share of failing Chrysler would give them expanded access to the US market, and would bring to Chrysler Fiat's know-how in small car construction. Fiat withdrew from the US market in 1983. Fiat's truck division, Iveco (Industrial Vehicle Corporation), operates in 19 countries worldwide and has 31,000 employees. Annual production is 200,000 commercial vehicles and close to 500,000 Diesel engines. In 1996 Fiat, Case New Holland and Cummins entered into a joint venture. That joint venture ended in 2008 with Fiat's engine development operation, Fiat Powertrain, taking complete control of the European Engine Alliance (EEA), of which Cummins formerly had a 1/3 share. The goal of the alliance was, according to a 2008 piece by Mike Brezonick in Diesel Progress, to jointly develop "a new generation of 4, 5 and 6-liter diesel engines." Brezonick's piece continues "…It became increasingly clear that changes in Fiat's direction, particularly its growing focus on its own engine sales, would affect the long-term viability of the alliance. "The first writing on the wall may have come as early as 1999, when New Holland N.V., owned by Fiat, purchased Case Corp. as part of its foundation of CNH (Case New Holland) Global. Then in mid-2005, Iveco's engine and powertrain activities were merged into Fiat Powertrain Technologies (FPT), which more aggressively began pursuing growth in engine sales. That put Cummins and FPT in competition for much of the same business—hardly an ideal situation for alliance partners."

Quoting from a late-January piece by Alan Bunting in AutomotiveWorld.com: "By all accounts, relations between the EEA partners were far from smooth. Cummins allegedly accused the Italians of reneging on an agreement that Fiat would not tout for outside, on-highway business. Although not publicly acknowledged at the time, cooperation was strained and the EEA was quietly dissolved about five years ago. And when Cummins embarked on an ISB update program to meet Euro 4 emission laws—which involved, fundamentally, an increase in bore and stroke—the company bent over backwards to prevent Fiat engineers getting access to the finer details, including those cylinder dimensions." As of this writing (5/6/09), Chrysler owed Cummins $44 million and Cummins engine sales to Chrysler are down 45% as compared with last year, according to a report on Indystar.com's business page. A later report suggests a Federal program will pay Cummins what Chrysler owes them, but sales of Cummins midrange engines do appear under threat from Chrysler's difficulties. The current turbulence in the world economy is only accelerating a process that has been materializing for years—the consolidation of large excess production capacity in the world auto industry. To grow or to shrink appears to be the choice. Scratch-and-scrabble will be the method. Fiat is one of the great original auto companies, dating back to July 1899 when former cavalry officer Giovanni Agnelli, two counts, and a banker joined forces for the purpose. In 1900 the Corso Dante factory opened, employing 150 people and building 24 cars. In 1903 their 12-horsepower four-cylinder car sold well in France, England, and America. There would follow classic years in auto racing before World War One, the construction of many types of aircraft engines, and of cars and trucks. Fiat expanded steadily to become and remain one of Europe's largest vehicle builders. Fiat currently owns Ferrari,

114

but they are known best for building small, economical automobiles on a large scale. US auto makers have been unable to resist the money to be made in large vehicles like SUVs and pickups, while knowing that ultimately, the world must turn to smaller cars. When European and Asian economy autos first appeared in the US, industry heavies were amused. "The sooner these beginners learn that small cars equal small profits, the more likely they are to survive." Companies such as VW, Toyota, and Honda did learn how to earn a profit on small cars, but each time high fuel prices drove Detroit to consider doing so themselves, crude oil would drop and the quick profits were seen to be once more in large vehicles. This became a cyclic process, operating rather like a ratchet. When gasoline jumped in price in 1974, Honda responded with the Accord model, which was a small but feature-laden car. Many Americans saw the Accord as the small car with big-car luxury—and they bought it. When fuel was again plentiful, many of those who bought Accord remained Honda buyers, while others returned to buying large American cars. Japanese and European cars had by now earned the reputation of being reliable and of good quality. When fuel prices jumped again in 1979, a new group of Americans bought smaller, more fuel-efficient cars, and when prices moderated, some of them remained smaller-car buyers. When US makers decided they must after all offer smaller cars, their initial offerings of the 1980s were hastily-designed and many of them performed poorly and were indifferently constructed. Valuable brand identities forged over many years were sacrificed in a mad scramble to produce generic "Detroit Toyotas." Some of the least-popular of these were the Chrysler K-cars and GM's "Chevrolac." While the imports offered higher-performing engines


and chassis with modern handling and "feel," many of the new US offerings were powered by stodgy two-valve fours or V6s that were conceptually adaptations of antique iron V8s. In the 1990s Detroit began to mend fences by developing more modern technologies, but in the meantime the damage had been done. Americans had come to regard import cars as of generally higher quality, and American cars as second best. The up-and-down ratchet effect of repeated fuel-price crises alternately forced Detroit to tool up for smaller cars, then tempted them to drop small-car plans in favor of a return to quick profits from big vehicles. In the course of these cycles Detroit lost about half of its market. Meanwhile, the import brands refined their ability to earn a profit on smaller cars. Fiat's long survival in the European marketplace indicates that they have that ability as well. They appear to be betting that the current period of moderate fuel prices won't last forever. If prices jump again, whoever is best prepared to offer attractive small cars at affordable prices has a chance to earn money. Could that be Fiat, building what they know best, in Chrysler factories? Turbo Diesel Register Issue 65

115


GTL Revisited I have discussed gas-to-liquids (GTL) Diesel fuel in this column before (Issue 49, August '05). Geological oil deposits contain some liquids that assume a gaseous state once released from below-ground pressures; of these, what we call natural gas consists mainly of methane. A moment's thought about what might happen to petroleum over millions of years of underground storage reveals that less-stable arrangements of carbon and hydrogen atoms are gradually broken apart by thermal motion and reform into more stable forms. Among the most stable are methane (one carbon atom, joined to four hydrogens) and the ring structures or aromatics (which make up so much of Diesel fuels). GTL is made under process conditions which favor the assembly of multiple carbon atoms into chains. Chains, having loose ends, are more easily broken apart or their hydrogens knocked off by heat, and so GTL Diesel ignites more promptly (higher cetane rating) and burns more completely than does the more stable ring-structured Diesel fuel. It also costs about 10% more. More about that cost later. Most of the combustion and emissions benefits of GTL are preserved even when it is cut 50% with conventional Diesel fuel. Gas is a problem in the oil fields. What do you do with it? For many years it was flared off—simply burned up—rather than solve the difficult problems of getting it to market. Gas forms explosive mixtures with air, so there is hazard in its presence. Compressing and compactly storing it requires expensive and specialized equipment—and LNG ships for overseas transit. But piping it short distances to GTL conversion plants is also possible. Currently, I learn, the two marketing options cost about the same. For a time, gas was regarded as the miracle fuel of the future. It burns relatively cleanly and can be used to power compact gas turbine-driven electricity generating stations. It has less carbon in relation to its heating value

than do heavier fuels, so it is attractive on the basis of reduced carbon dioxide emissions. Lots of powerplants—both thermal and gas turbine—began to burn the stuff. Oh joy. Then the price of gas went up—a bunch. Power companies depend upon stockholders who move their money to more profitable investments if the numbers start to look bad. To keep the numbers good, expensive gas-fired plants were quickly made less numerous and coal-fired plants more numerous. Stockholders may drive Priuses and help their school-age sons and daughters recycle bottlecaps on Saturdays, but they are very serious about stock prices. As one long-departed US president once said, "Gentlemen, the business of this country is business." Currently, the miracle trusted to optimize the compromise between costs and emissions is called "cap and trade." It has to do with buying and selling rights to emit carbon. I have no idea what cap and trade will do for us, but it surely will not exist for long if it fails to earn serious money for some folks. GTL is very attractive, but gas is expensive, and in many cases it is far away, across oceans. And there are other capable bidders. The scale of energy use in this country is enormous. Please keep this in mind the next time you read about how wind energy use has doubled in the last two years. Bravo, but wind energy was supplying one-tenth of one percent of this nation's electricity. Now it is supplying two-tenths of one percent. Coal now supplies 50 percent, and one large coal-fired station burns two 10,000-ton trainloads of coal every day. That is one ton of coal every four seconds, burned in each such big plant. And there are hundreds, so coal mining is fast and furious. Hand-wavers tell us the solution is to get out of our cars and onto bicycles, to shut off our air conditioners, and to put on warm cashmere sweaters while keeping our houses at 55-degrees in winter. You

116

say you have a twenty-mile commute? Move closer to work. Live efficiently in emissions-optimized urban dorms. Get serious. The cities and industries of north and south alike are made possible by air conditioning and space heating. Twenty percent of our nation's yearly electricity consumption goes for air conditioning. Many of this audience are old enough to remember whole office buildings-full of people sent home in July because people were fainting in their offices. Old or infirm people in un-airconditioned apartments died or required hospitalization. Only a few can remember trying to get their manufacturing jobs done in sun-roasted plants, or heating only the kitchen of the farmhouse in winter. Only a small number of well-to-do super-environmentalists can afford such retro changes. Sure, we can save some fuel by judicious economy, but we can't just shut off life as we know it and switch on some new, better, and more responsible life overnight. Who pays the large capital costs? Who would make all the brand-new details mesh? Who or what—short of a draconian World Government—would make us all do this? A vague sense of responsibility? That sense of responsibility hasn't put an end to drunk driving or cheating on taxes. Why would it change our oil consumption overnight? Well, then, isn't it true that vast energy resources await us as oil shale, deep under the Four Corners region of the American Southwest? And aren't recovery operations in progress in Alberta, Canada on their vast tar sands? Yes, in both cases, but guess what? Price rules. What does that mean? It means that oil companies will keep pumping the easiest oil first for as long as it is cheaper than the alternatives. (Remember those Prius-driving stockholders, all reading the Wall Street Journal.) The oil shale is hard to get at and requires lots of water (an availability problem in the Southwest) and large physical plant for its conversion to liquid fuels. The tar sands have to be heated to make the


glop run out, and once they have the stuff, it requires quite different refinery methods and equipment from traditional petroleum. The land is wrecked by the extraction and its waste products. All this tacks on extra costs. Stockholders— even as they public-spiritedly switch from paper towels to washable linen napkins—hate that. So the search for pumpable oil is where it is at—for the foreseeable. GTL is just a minor, if interesting, detail in that larger picture. And little though we may like it, the US is just one player among players in today’s international oil quest. Whoever offers the best deal gets the goods. Regrettably, the game is made harder by history. Just after WW II the US won the day in Saudi Arabia over British oil interests because the Arab world had long negative experience with British colonialism. (After WW I Britain parceled out the remnants of the Muslim Ottoman Empire, keeping the best bits for herself.) Today, the US is looked upon by many foreign nations as an interfering uncle, just as Britain was then. We earnestly try to be BFFs (that’s kidspeak for best friends forever—groovy, eh?) simultaneously with Israel and the Arab world. Good luck. We have to live with all this. Meanwhile, China finds fair success playing the “good cop” role, building infrastructure in oil-rich nations that agree to supply her swelling industry. It’s really annoying to discover that all nations have clever, resourceful business minds who negotiate just as hard as ours do. Chinese businessmen are not weak-minded Communist ideologues, brandishing Little Red Books. What they want is money, which salutes no flags. Just forget the idea that one of these Presidents will finally deliver on his promise to end US dependence on imported oil. They can’t. The stockholders won’t let them. Can’t let them. This is business. Also forget the crackpot idea that liberal love for spotted owls or baby seals is all that keeps us from “unlimited oil” from

vast, off-limits offshore or Alaska fields. Oil is serious business and the strongest player wins. Hundreds of thousands have died to prove this. Hitler insisted that his Army Group South keep pushing toward the Soviets’ Baku oil fields—despite three Soviet armies assembling to cut them off. Hitler finally relented, but not in time to save 300,000 Germans encircled at Stalingrad. Because Germany was unable to seize Baku oil, it became necessary to synthesize liquid fuels from coal for German tanks, submarines, and aircraft. It was not the cheapest solution. It was the only one. Many a US B-24 or B-17 crewman met his fate attempting to destroy the resulting German synthetic fuel plants. Hitler complained constantly that his generals “had no understanding of economics.” In the Pacific, Japan went to war against the US, Britain, and Holland when the US—supplier of 80% of Japan’s oil—cut off the flow to show extreme displeasure at the Japanese occupation of Vietnam in the spring of 1941 (French Indochina at the time). As Japan had cut herself bigger and bigger slices of China through the 1930s, the US sent her disapproving notes, but could take no action because it was the depths of the Great Depression. History books don’t call this “appeasement,” but it had the same effect. If you read the history that Japanese school children read today, it says that the US action forced Japan to choose between (1) accepting the status of a poor third-rate nation or (2) taking military action to secure a reliable petroleum supply. Japan had seen the industrial powers cut up a weak China (while taking plenty herself—Manchuria, rich in coal and iron). Japan had no desire to become anyone’s colony. This had been the driving force behind rapid Japanese industrialization. In their view, oil was power, and they meant to have it. The Japanese therefore planned to seize the rich Dutch oil fields in what is today Indonesia, but that would instantly trigger strong and probably


unmanageable military responses from the western powers. To delay or soften those responses, the Japanese sent a large task force of six aircraft carriers to knock out the US naval base at Pearl Harbor, and sent Japanese Army units to seize the British naval base at Singapore and US bases in the Philippine Islands. The point? Nations are deadly serious about oil. Spotted owls? Read the papers. In less-jolly parts of the world journalists meet with mysterious accidents when they uncover embarrassing information about the high bid, who made it, and what the numbers were. We get to see the ripples on the surface, read about the various maneuverings over Niger, Somalia, Chechnya, the Spratly Islands—all the places either producing oil now, or with promising seismic reports on file at Schlumberger, Halliburton, or other respected petroleum survey firms. Most of all, we see the petroleum and stock prices. You’ll know oil is nearing its real end when Shell, ExxonMobil, Lukoil, and the others shift major investment money—many, many billions—to other energy schemes. Until then, we can confidently assume there’s plenty to come. We can hope as a detail that Diesel power finds a place in current US transportation planning. (There’s a lot of talk about electric cars, but try to imagine what an electric truck would look like. What could it carry, besides its own batteries?) And somewhere within all that planning may be room for some refreshing, crystal-clear, clean-burning GTL Diesel fuel. Turbo Diesel Register Issue 66


Smoke I wonder if there is any coherent “Diesel lobby” in this country. The domestic automakers’ interest in this is proportional to the amount of their business that is Diesel-powered—which is quite small. The truck business has little reason to push its position because it’s not as though there is any competing powerplant for them. (Try to imagine Yellow Freight’s payload fraction after switching to electric.) So the result is that Diesel power is pretty quiet in the US. Meanwhile Honda and the German automakers are moving forward with US-compliant Diesel autos, but they are swimming against a tide of public opinion that thinks “Diesels are dirty,” or “Diesel fuel is an inherently polluting hydrocarbon.” I want that kind of public opinion to consider that all the late-model Diesels operating in the US meet the laws of the land with respect to noise and emissions. Diesel-powered vehicles sold in the US are 100% okay with public policy and are equal partners with gasoline-powered counterparts in cutting emissions. Indeed, the results produced by 2007-compliant Diesels are said to be much better than planned. But there is no Diesel lobby bringing this information to the attention of the whole public. Yes, the greater the average molecular weight of a hydrocarbon fuel, the more difficult it is to burn completely, which is why Diesel engines have traditionally had the problem of exhaust particulates. Current technologies work on this problem from both ends. Ultra-high-pressure, multi-strike fuel injection works from the combustion end to drive fuel droplets through compressed charge air at close to the speed of sound, causing rapid droplet breakup and evaporation. These are keys to improved and more complete combustion, for the closer the injected fuel comes to the vapor state at the time of combustion, the likelier it is that each and every hydrogen and carbon atom will be married off to oxygen. That means less left over in the form of the feared polycyclic aromatic hydrocarbons (PAHs), riding on the clumps of uncombined carbon atoms

that we know as particulates. And what there is of those leftovers must now pass through particulate filters where the legally-mandated fraction is trapped, held, and then burned. That means no more black exhaust, and for our lungs it means greatly reduced numbers of airborne PAH molecules, with their potential as carcinogens. (Think of PAHs as a kind of Tinkertoy, made up of hexagonal 6-carbon rings, linked together by shared sides. Carbon rings are not “dirty” and they are not “evil,” as they are employed by plants as structural elements of cell walls. Inconveniently, some structures are carcinogenic. Curare, a deadly poison, is “all-natural” and it is demonstrably “organic”—but it’s poison nevertheless.) I had a look at the faculty parking lot at the local community college. No pickup trucks—zero. But only two Honda Insight hybrids. What I did see was lots and lots of small economy sedans—the kind that the Europeans power with small turbo-Diesels, enabling them to use 30-40% less energy than gas-powered equivalents. Maybe this just means community college faculty aren’t paid enough to afford many Lincoln Navigators. Currently there is much discussion of “80 in ’50,” which means reducing carbon dioxide emissions by 80% by 2050. Setting aside the question of global warming itself, what does this imply? My present car—a Chevy Cobalt—averages 27mpg in mixed driving. If I traded it for a two-seat Insight and drove only on four-lane highways, using the “pulse-and-glide” ultra-mileage technique of full-fanatic econo-drivers, I might get 95mpg. Sorry, not good enough—I’d have to get 135mpg to hit that 2050 goal. I live in the Snow Belt and have to heat my house October to April. How will my descendants cut their heating bill by 80%? I have six inches of fiberglass insulation on the south side and ten inches on the north. Will I have to increase that times four? Two feet of insulation on the south and three-and-a-half on the north? And fill the attic right to the roof with layers of batts? Impractical. My great-granddad’s generation had a simpler way—they lived only in the kitchen in the winter, and wore woolen union suits to bed. Is this the future American Way of Life?
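For readers who want to see where that 135mpg figure comes from, here is a minimal sketch. The only assumption is the naive one the column itself makes: an 80% cut in carbon dioxide from driving means burning one-fifth of the fuel, so mileage has to rise by a factor of five.

```python
# "80 in '50" arithmetic: an 80% cut means one-fifth of today's fuel burn,
# so the required mileage is five times today's figure.
cut = 0.80
factor = 1 / (1 - cut)          # = 5.0

cobalt_mpg = 27                 # the column's mixed-driving average
required = cobalt_mpg * factor
print(f"Required mileage: {required:.0f} mpg")   # 135 mpg, as stated in the text
```

By the same one-fifth logic, and ignoring everything in a wall except the fiberglass, the insulation would have to grow by roughly a factor of five, which is why the "times four" guess above still comes up a little short on paper.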


Now how about the cities? Everything city-dwellers eat or otherwise consume comes in by truck or train, and is distributed by Diesel trucks. Can we cut their fuel use by 80%? At this point the informed environmentalist chimes in, “Of course we don’t mean starving the cities by cutting truck fuel 80%. We mean converting to clean, zero-emissions electric power for these uses. By that time, electric power generation will have switched to nuclear or to gas, clean coal, and carbon sequestration. Why, even as we speak, a giant coal plant in West Virginia is running an experimental program to remove carbon dioxide from stack gas, cool and compress it to liquid form, and pump it miles deep into the earth where it can never escape to the atmosphere.” Let’s take the items one by one. Electric trains are fine, and have existed for 2-3 generations, allowing goods to enter cities in smokeless fashion. But electric trucks? Unless they are only going a few miles on each trip, it’s fair to ask what they could carry in addition to their own batteries. A BMW Mini, converted to battery power and with approximately 100 miles of range, gives up two of its four seats to batteries, and its range is cut significantly if the batteries have to supply cabin heat in winter. In-city delivery might be an electric application, but long-distance trucking certainly is not. Aircraft are not. Ocean shipping is not. Arm-waving about magically bringing back the railroads flies in the face of the thousands of miles of roadbed that have been torn up to make “rail-trails” for snowmobilers, four-wheelers, and mountain-bikers. Clean, zero-emissions electric power? Yes, at the point of use there is only


a faint smell of hot insulation. But at the point of power generation at this present moment coal supplies 48.5% of the nation’s electricity, with nuclear and gas vying for second at 19 or 20% each. Gas is attractive, especially for cities because a bunch of compact gas-turbine-powered alternators can be brought in on railcars, plugged-in, and started up without the usual twenty years of wrangling over siting, permits, environmental impact reports and long series of wonderfully boring public meetings. But gas is expensive. Only coal is cheap, which is why it supplies essentially half of our electricity. At this point the super-environmentalist cuts in and says, “Coal is green.” Our collective, hydrocarbon-burning jaws drop. How? He explains that “Hydrocarbon fuels have hidden environmental costs, such as offshore politics, refining, disposal of wastes, and transportation half-way around the globe. Coal is here and requires almost no processing. Coal cuts carbon emissions. Coal is green.” Before we get sucked into this one, take a deep breath. Most of what passes for environmental debate is slogans which neither side understands, but just repeat, as loudly as their promotional budgets allow. Like presidential elections, this is an arm-wrestle of television time, clever publicists, and slogans. Gosh, folks, last I checked, strip-mining of the kind that extracts all that coal in Wyoming was widely considered environmentally nasty. Now it’s okay? How about removing the whole tops of mountains in West Virginia, and how about men with black faces and lungs descending into the earth in conditions of ponderable risk, for wages that can only be called “extremely moderate?” This is green, just because someone needs to make it look that way—the ends justifying the means? And what is “clean coal”? It is a suite of technologies which could be used to extract undesirable matter from the stack gas of coal-burning electric plants—analogous to Diesel exhaust

aftertreatment. It has been discussed, but is not currently in use because it costs money. Oh, no! Another large contingent of would-be problem-solvers speaks up at this point. These are the “free-market-will-solve-everything” people. Just make electricity a buck-fifty a kWh and gasoline $20 a gallon and the world gets squeaky-clean overnight. That’s okay for those who can comfortably afford it, but have a look at what’s happening to many of our cities—endless blocks of empty buildings, windows gone, covered with graffiti. Make everything super-expensive and lots more people drop off the bottom of the food-chain. I could be one of them under those conditions, because I was too shiftless and lazy to become a vice-president of Enron or a Madoff partner. It’s socially risky to have too many angry people on the bottom—managing the balance between pleasant living in gated communities and gritty urban realities is one of the trickiest tasks of government. The so-called “free-market solution,” with its $20 gasoline (and $24 Diesel, let’s not forget!) would make it infinitely trickier. But it could happen anyway. How about carbon sequestration? Turns out that pilot plant in West Virginia is processing 2% of one plant’s stack gas—a modest but significant experiment. They may learn whether the high estimate—that sequestration will add 30% to the cost of electricity—or the low estimate of 5-15%, is true. Will we be told the results? I’m not sure, because someone in a responsible position in business or government may decide that an “adjusted” message—even though it’s a fib—represents a higher moral good than truth. Sad to say, it has happened before now. Makes me think of those long-ago debates over nuclear power, in which teams of bright, well-paid men and women, educated at the same prestigious universities and trained in the same arcane specialties, argued opposite sides of every point. What does this suggest to the lay person? Nothing good, except maybe that opinions are for hire and that truth may be irrelevant


to the outcome. How do we get to a future world in which we use less energy, where the least among us aren’t tempted to become desperate revolutionaries, and the others are not crowded into small kitchens just because it’s cold outside? By golly, that’s a good question. Here, help me tear the glassine address windows out of these used billing envelopes so I can put them in the recycling with a clear conscience. Now we are good children. I’d like to hear lots more from a Diesel lobby, because Diesel power is definitely going to be an important part of future energy-saving in this country. “Electric” has a big head start in building propaganda power, but Diesel power is practical now. Turbo Diesel Register Issue 67


By Golly, You Don’t Say! Just recently I was at the New England Air Museum and came upon a group of young people looking at a big 1940s aircraft piston engine. One of them was saying: “And you know what ELSE is JUST AMAZING? They designed these things with SLIDE RULES.” Is that so, sonny? In fact, most of the world’s great bridges and tall buildings were designed in the same way, as were the atomic bomb, the transistor, and radar. The slide rule was just a device for making quick numerical estimates. The real work of design took place through understanding the physics involved in the problem, and using that as a guide to new solutions. Understanding is the key, not raw computation. It is important to know just how much complex instrumentation went into acquiring that understanding. When I was barely out of college, I visited MIT’s Sloan Automotive lab, where the MIT balanced-diaphragm engine indicator was still in use. This was a device which enabled the actual compression, suction, and combustion pressures in an engine cylinder to be measured while firing. One side of the diaphragm was connected to the engine’s combustion chamber and the other to a variable source of pressure. As the engine ran on a dynamometer, pressure on one side of the diaphragm was slowly varied, and the point of crank rotation at which the diaphragm changed sides in its housing was noted. Gradually, point-by-point, a complete pressure-volume curve of the engine cycle could be recorded. With this P-V curve it was then possible to calculate the power the engine would make if there were no bearing or piston friction. It was also possible to derive the speed of flame propagation, and to see the beginnings of detonation. Today, an engine indicator takes the form of an $8000 water-cooled pressure transducer that screws into the head, sending its signal to a pre-amp that then feeds data to a computer. Same idea, faster measurement, new equipment.
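What one does with such a pressure-volume record has not changed: integrate pressure over volume around the cycle to get indicated work, and divide by displacement to get indicated mean effective pressure. The sketch below does this for an invented, idealized loop; the geometry and pressures are placeholders, not measurements from any real engine.

```python
import numpy as np

# Toy pressure-volume loop of the kind an engine indicator traces out.
# All numbers are invented for illustration.
Vc, Vd = 50e-6, 450e-6                    # clearance and displaced volume, m^3
V = np.linspace(Vc, Vc + Vd, 500)         # cylinder volume from TDC to BDC

P_comp = 1.0e5 * ((Vc + Vd) / V) ** 1.35  # polytropic compression from 1 bar at BDC
P_exp = 3.0 * P_comp                      # expansion after a toy constant-volume heat release

# Indicated work is the area enclosed by the loop: the integral of P dV,
# done here by the trapezoidal rule over the uniform volume steps.
dP = P_exp - P_comp
dV = V[1] - V[0]
W = np.sum((dP[:-1] + dP[1:]) / 2.0) * dV   # joules per cycle
imep = W / Vd                               # indicated mean effective pressure, Pa
print(f"Indicated work: {W:.0f} J per cycle, IMEP: {imep / 1e5:.1f} bar")
```

With friction measured separately (by motoring the engine, a technique described a little further on), indicated work minus friction work gives the brake output.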

How much stress do you suppose is acting on this connecting-rod during operation? Today computers are used to run rapid analysis of this based upon conceptually breaking the rod up into an assembly of many small regions (finite elements), then computing the forces generated by, and acting upon, each such region. Millions of such computations produce a predicted stress pattern in the rod at each point in the engine cycle. Back in the 1980s a friend described seeing such a finite element analysis (FEA) in progress at an engine manufacturer. Computers were slower then, so the false-color “stress picture” of the connecting rod was taking minutes to fill up the monitor’s screen, line-by-line, starting at the bottom. Meanwhile the engineer went to the coffee room to refresh himself with a hot drink and the morning’s gossip. This new breed of engineer is sometimes called a “screen jockey” and he depends more on machine computation and modeling than on understanding. That is how he was educated. This becomes ever-more true as older engineers—those with long experience—are offered early retirement and are replaced by much younger (and lesser-paid) persons seated at rows of high-end computer workstations. Why think through a problem when the machine will crank out a solution? How did engineers of the past know whether a cylinder block was strong enough? Or too strong? Today a dynamic FEA would be used. But long ago, E.S. Taylor came up with a simple technique that gave direct answers. This was “brittle lacquer”—to coat the part in question with a brittle paint, then subject it to stress (such as pressure in cylinders). Wherever the strain exceeded a certain value, the lacquer would flake. A few cycles of this technique, combined with changes to the casting, resulted in a strong part. Metal propellers driven by aircraft piston engines were vulnerable to fatigue


failures caused by blade flexure driven by the engine’s firing impulses. One approach to a solution was to design crank counterweights as pendulums, whose motion would absorb energy from the crank as a cylinder fired, then swing back and give that energy back to the crank as firing pressure died away. Such pendulous counterweights smoothed out the engine’s torque variation without actually consuming power. But how much stress remained in the various parts of the propeller blades? Blades were instrumented with strain gauges adhered to their surfaces, with the wires from the gauges led to a slip-ring assembly. Then the engine was run or the whole airplane flown to record the stress levels at various rpm and load, determining whether or not a given engine-prop combination was safe to fly. Moving parts of engines have been similarly instrumented. Piston rings have been insulated from their piston in order to study the extent of their contact with the cylinder wall by electrical conductivity. Connecting rods have been covered with strain gauges, their leads fed through ribbon wire or a feed arm. Temperatures of critical parts can be measured electrically with thermocouples, thermally with plugs of metal alloys that melt at various temperatures, or by means of paints which change color over a scale of temperatures. Long ago, in about 1903, the auto maker Napier in England developed its first inline six. The greater length and torsional springiness of this crank allowed the engine’s firing impulses to excite it into torsional vibration, making its timing gear rattle loudly. This problem was “handled” by the firm’s PR man, S.F. Edge, who cheerfully called the noise “power rattle.” But when Rolls produced its own six in 1905, they tackled the problem directly, placing a spring drive between the crank and gearbox, and placing a Lanchester frictional damper on the free end of the crank. Clearly, someone thought about what must be going on and came up


with a series of experimental solutions quickly leading to a practical fix. The same company was able to pacify torsional vibration in the crankshafts of its Merlin and Griffon aircraft engines as their power was developed from 900hp in 1939 to 2250hp at the end of WWII. All without computers.

In the 1930s this same problem was tackled more directly by use of an instrument called the “torsiograph.” This employed an inertial flywheel, free to turn on its own bearings, but restrained by springs. While the crank rotated in its series of jerks and vibrations, the flywheel turned steadily. The positional difference between the crank and the torsiograph’s flywheel changed the value of a capacitor, which in turn controlled the frequency of an electronic oscillator, whose output was recorded. By this means the extent of the crank’s torsional vibration could be measured quite accurately.

Diesel engines for submarines had to be of much lighter construction than were Diesels for industrial applications, and they had to be slender enough to fit within the hulls. Torsional vibration was immediately a very serious problem, as lighter cranks lacked torsional stiffness. As a result, operation of such engines at certain speeds led rapidly to crank breakage. At first, the solution was to provide tables of vibration amplitude, measured in engine trials, and to forbid operation at those speeds which especially excited crank torsionals. Conducting a night surface attack is nerve-wracking enough without worrying about forbidden crankshaft speeds, so this was not a practical solution. What did work was to couple the Diesels to generators, operate them at a constant safe speed, and vary propeller rpm electrically.

Engineers from the early years of the 20th century “motored” engines—turned them with electric motors—as a means of measuring friction and air pumping loss. They motored engines in various states of assembly—for example, without valve gear—as a means of learning just what share of friction each class of engine parts contributes.

Engines have been built with “floating” cylinder liners, by use of which piston ring friction forces were measured at all points in the engine cycle. This enabled validation of mathematical lubrication models, which could then be used to predict future performance. Such models are still in use, but the required computations are performed much more quickly today by computers. Yet the original understanding of such phenomena was generated by using experimental rigs to ask nature the relevant questions. This required actual thought—not just loading data into a $40,000 software package and then hitting “run.”

Details of airflow into the cylinders of two-stroke Diesels were studied by using “rakes” of impact probes, moved in small steps through headless cylinders as scavenge air was blown through their ports. Through such experiments the flow patterns necessary to make best torque were worked out.

From at least the 1930s, engines with transparent cylinder walls made of quartz have been used to study combustion and flame propagation. Rows of ionization gauges have been screwed into cylinder heads to record the velocity and direction of flame propagation. Even before that, Harry Ricardo used an in-cylinder pinwheel device to measure the rate of air swirl in combustion chambers. This was done to determine just how much turbulence was required to complete combustion in a given time, or as a means of avoiding combustion so rapid as to result in roughness. Even today, Ducati in Italy uses such a device to assist its engineers in finding correct angles for intake ports and shapes for combustion chambers. There are hopes that computational fluid dynamics will soon take over such analytical tasks, but the complexity of turbulent combustion requires either extreme computation speed or unrealistic simplifications to the math model that is used. Its day is coming soon.

All this was done without computers. Indeed, math analysts created procedures by which any crank could be reduced to an “assembly of springs and masses,” whose motions could be analyzed by computations performed either with slide rule or mechanical desk calculators. This work was laborious but produced useful results.
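The "assembly of springs and masses" procedure is easy to show in modern dress. The sketch below lumps a crankshaft into point inertias joined by torsional springs and solves for the natural frequencies. Every number in it is an invented placeholder, and a real analysis would use measured inertias and stiffnesses, but the method is the one the slide-rule men ground through by hand.

```python
import numpy as np

# Lumped torsional model: six crank throws plus a flywheel as point inertias
# (kg*m^2) joined by torsional springs (N*m/rad). All values are invented.
J = np.diag([0.12, 0.12, 0.12, 0.12, 0.12, 0.12, 1.5])
k = [9.0e5, 9.0e5, 9.0e5, 9.0e5, 9.0e5, 4.0e5]

# Assemble the tridiagonal stiffness matrix for the free-free shaft.
n = len(k) + 1
K = np.zeros((n, n))
for i, ki in enumerate(k):
    K[i, i] += ki
    K[i + 1, i + 1] += ki
    K[i, i + 1] -= ki
    K[i + 1, i] -= ki

# Natural frequencies come from the eigenvalues of J^-1 K (each eigenvalue is w^2).
w2 = np.sort(np.real(np.linalg.eigvals(np.linalg.solve(J, K))))
freqs_hz = np.sqrt(np.clip(w2, 0.0, None)) / (2.0 * np.pi)
print("Torsional natural frequencies (Hz):", np.round(freqs_hz[1:], 1))  # drop the rigid-body mode
```

Divide a natural frequency by the order of the firing impulses and you get the critical engine speeds, the same forbidden speeds the submarine crews were warned away from.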


Even with computers there remains the argument between those who just want answers, and those who also want understanding. I attended a symposium on flight in ground effect and listened to a Russian engineer describe a mathematical method known as “matched asymptotic expansions.” During the question period a listener raised his hand and said in a bored voice, “Why go to all this trouble? We have fast computers here in the US. Why not just run Navier-Stokes code and let the machine grind out the answers?” The Russian replied, “Because matched asymptotic expansions not only give you answers, they also give you insight into what the flow is doing. And with that insight you can move directly toward improvement.” Computers are wonderfully fast computational aids, and using models of real phenomena they can predict some things that were previously beyond our understanding. Yet the models themselves cannot be perfect or infallible, as we are seeing in the recent controversy over whether data used to model global warming was correctly used. Human thought is still required! Or, as one crusty old engineer once put it to me, “Garbage in, garbage out. Always do a manual back-of-the-envelope calculation as a check on computer results.” Turbo Diesel Register Issue 68


On Hold Bad things don’t go away just because we take care not to think about them. Right now we’re in a global economic depression that squeezed the retirement funds of those who have them to about half their previous size. There are signs of some recovery here, but in Europe it is still at its worst. US automakers have had a terrible shock and it’s far from clear where their “recovery” is headed. But we whistle past the graveyard. A similar situation is the weaponry of the Cold War. It still exists—at least 50,000 warheads—but because the former Soviet Union has new office stationery, we have stopped thinking about such things. In fact, their latest ICBM, called “Topol-M,” has maneuvering warheads intended to dodge anti-missiles as they crash through the atmosphere. YouTube offers a video with Russian narration. A giant multi-axle (and Diesel-powered, you can be sure!) transporter-erector splashes through a stream, its driver’s elbow jauntily out the open window of the cab. It climbs along a wooded pathway and takes up its firing position. There is a loud “BONK” as the cap pops off the storage tube. Then it takes about 30 seconds to bring the tube vertical, followed by a rush of gas as the tube is pressurized. The missile is blown up out of the tube with a thump, its solid-fueled engine ignites with a roar, and the horrible bright thing rises out of sight into the evening sky. But it’s not real because since 1991 we’ve stopped thinking about such things. Likewise we are not thinking about how little money US automakers have to play with, and how impressed they are with the fragility of markets. By golly, folks, what if we wake up tomorrow to hear the news-readers intoning, “The Fed shocked the world today by announcing US currency will split three-for-one?” What if terrible inflation wipes out all values in response to the take-no-prisoners rate at which the mints are printing money? What if China, Japan, and Europe, each in its own suicidal desperation, decide to dump big amounts of the US paper they are holding? Business planners may

not take such drastic outcomes 100% seriously (let’s call it a “risk-weighted threat analysis”) but they see stock prices dithering and hear the pundits predicting another dive to come. No one knows. Everyone fears. Would you, in response to the above, plan expensive research and development projects that depended for their success on gambling that Americans will change the way they drive? You would not. You would hunker down and reserve all your assets to “attending to your core business.” So it is with the ambitious Diesel auto engine projects for the US from Honda, Ford, Nissan, and Chevrolet. With many Americans traveling to Europe and Asia, it was thought, there was wide exposure of influential buyers to the powerful, quiet, and highly economical new Diesel cars in those markets. I have had such experiences myself, and know many others in the same boat—people who would, in better times, be happy to buy one of the new-technology Diesel autos or light trucks. Multi-strike injection has quieted “Diesel knock” and selective catalytic reduction has taken care of the Diesel’s most difficult emission problem—nitrogen oxides. And the bête noire of 1980s air science—Diesel particulates—has yielded to Diesel Particulate Filtration systems. With the full technology package, Diesel engines are clean and consumer-friendly. Trouble is, those emissions systems are expensive. Many sophisticated world travelers have a lot less disposable income than two years ago, and the price of Diesel fuel gives no comfort. In Europe, Diesel use is encouraged by tax policy, but not so in the US. In Europe, over 60% of new cars sold are Diesel-powered, but here the majority of drivers continue to think of the Diesel as an “industrial engine” or some kind of third-world economy scheme. Look around and see that Hummers and other large and fuel-thirsty vehicles are back, encouraged by gasoline’s drop from $4.35 to $2.86. When most Americans—not the world-traveling

122

kind—think of Diesel cars, they think of funny-hat friends in the 1980s, driving quaint, faded VW Golf Diesels and resolutely refueling in the smelly part of the service plaza where the pavement is slippery and big trucks driven by men in mysterious checked shirts are growling through. As I see it, a US trend toward at least some Diesel autos had begun, based partly on fuel economy (which has two components—one is the “green” ideology and the other is money) and partly on gradually increasing familiarity. The current depression has for the moment squashed that, putting the automakers’ US Diesel projects on hold. That’s not the end—it’s just a hold, because we know that both Japan and Europe are furiously developing small economy power plants for the emerging markets that they see as a hedge against the shrinkage of the previously reliable and large US market. The big debate here is over the likely shape of the depression. One set of opinions predicts it will be V-shaped—a decline, a bottoming, followed by a climb back to the previous level of economic activity. All better now! Let us hope this is correct, or many of us will have to plan on being retired only half as long, or at half the planned economic level. The other opinion set models the world economy as a step downward—for the foreseeable future. This discouraging view suggests that investors have retreated from risk like slugs from salt, and that it will take a long time to persuade them that it is again safe to plan on buying cheap and selling dear. Plan on long-term unemployment. But, in either case, the basic problems of the world—limited resources, pollution, regional strife—carry on regardless. That means that in the long run, whether we are comfortably prosperous or bumping along just above the poverty line, if we need transportation it will have to be increasingly economical. Nothing is more economical than the Diesel engine. Let the ideologues of electric-everything


continue the strip-mining of Wyoming coal and the sending of men into deep and dusty coal-holes in West Virginia and Tennessee. In the absence of a national energy policy this will take the form of an economic struggle—coal versus petroleum—and may the cheaper, more convenient fuel win. Bear in mind that we have trillions invested in a workable distribution system for petroleum fuels. Arm-wavers, in and out of government, seem to imagine that an equally pervasive system for quickly recharging the batteries of millions of electric vehicles can be quickly and cheaply summoned into existence. Think of the many additional coal-fired powerplants that would be necessary to provide that electricity. Think of the present difficulty of fast-charging of batteries (three hours seems to be what it takes right now) with the battery electrode systems in production. Think of the predictions of when the energy density of batteries will reach three times what it is now—typically five or more years. California did its famous King Canute act, the equivalent of sitting in the surf and ordering the tide to go back. Technology doesn’t obey our plans for “scheduled breakthroughs,” so the electric technology California sought was not forthcoming. Lots more work may do the job, or it may not.

On the one hand we have petroleum, with its associated refinery, transportation, and political costs, being burned in Diesel engines that are 35% or even 40% (in some conditions) efficient. On the other we have coal (now generating almost 50% of national electricity), moving out from the mines in innumerable 10,000-ton trainloads, being burned for conversion to electricity at an average 35% efficiency, and then passing through transformer, line, more transformers, battery charge/discharge efficiency, and then motor efficiency to drive a vehicle. When you multiply all of these together, you get an overall system efficiency of 17-22%.
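The multiplication is worth seeing on paper. In the sketch below the engine and power-plant figures are the ones quoted above; the transmission, battery, and motor figures are my own assumed round numbers, chosen only to show how quickly the product falls into the 17-22% neighborhood.

```python
# Rough well-to-wheel comparison. Stage efficiencies marked "assumed" are
# illustrative round numbers, not measured data.
def chain(*stages):
    product = 1.0
    for s in stages:
        product *= s
    return product

diesel = chain(0.40)       # Diesel engine at its best, as cited in the text
coal_ev = chain(
    0.35,                  # coal-fired plant, as cited in the text
    0.90,                  # transformers and transmission lines (assumed)
    0.80,                  # battery charge/discharge round trip (assumed)
    0.85,                  # motor and drive electronics (assumed)
)

print(f"Diesel, fuel tank to crankshaft: {diesel:.0%}")
print(f"Coal-fired electricity, mine to wheels: {coal_ev:.0%}")  # about 21%
```

With the plant fixed at 35%, no amount of generosity in the downstream stages gets the chain past the Diesel's best case.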

Petroleum does have political costs—keeping watch in the Middle East and Central Asia, bidding against fuel-hungry China and Japan, arm-wrestling cranky nations while carefully avoiding any resemblance to an old-time colonial power—the usual high-wire act. Costs will rule despite ideology. Yes, we can imagine a world motivated by green thinking instead of by profit, but in its battle with costs, which will win? High-tech Diesels will be back because there is no viable alternative to their efficiency. But, in the US, temporarily at least, they are on economic hold. Turbo Diesel Register Issue 69



More Than One Way Diesel engines have been designed and manufactured in a great variety of forms. Many of these can be found in A.W. Judge’s “High Speed Diesel Engines,” a book which also covers older fuel injection equipment. Two favorites of mine from this volume are the Sulzer opposed-piston four-stroke, and Napier’s “Nomad,” a two-stroke, turbo-compound aircraft engine. The Sulzer has its crankshaft located below its cylinders, with connecting rods oriented to right and left. Each con-rod drives a rocker-arm, and the upper arm of each rocker operates a piston. Each cylinder has two pistons which are driven toward each other by the rockers to compress air, after which fuel is injected in the normal manner. The advantage of this design is that the heat loss normally associated with cylinder heads is completely eliminated. This is also a feature of the “EcoMotor” currently under development. The Napier Nomad was a flat-12 with piston-ported liquid-cooled cylinders. Exhaust gas drove a turbine which could send power to both the crankshaft and to the scavenge blower that supplied charge air to the cylinders. A Beier variable-ratio drive was employed to control this flow of power. The Nomad was by no means the first Diesel aircraft engine. During the 1920s there was considerable pessimism over whether the spark-ignition engine even had a future in aviation. One reason for this was detonation, which could become uncontrollable with the poor fuels of the time, especially when engines were supercharged. Diesels were essentially immune to detonation, so a number of aircraft Diesel projects were undertaken. They might have been the way forward, but then Thomas Midgley of Delco Labs discovered the powerful anti-knock, tetraethyl lead, and spark ignition received a new lease on life. Certain German aircraft were powered by Junkers two-stroke Diesels. These engines had two crankshafts—an upper and a lower—with six open-

ended cylinders between them. In each cylinder two pistons compressed air between them. Exhaust ports in the cylinder wall were opened by one piston, which moved about 15 crank degrees in advance of the other. Fresh air ports were opened by the other piston in the same cylinder. Fresh air entered the cylinder at one end in a spiral pattern, chasing the exhaust gas to the other end where it exited through exhaust ports. Locomotive and submarine Diesels made by Fairbanks-Morse operated under the same principle. Napier, in their “Deltic” engine, made each of three crankshafts do double duty by placing them at the apexes of a triangle. The three sides of this triangle consisted of open-ended cylinders with two pistons in each, as above. Large marine Diesels, made at one time by Doxford, implemented the opposed-piston concept differently. One crankshaft drove the lower piston in each vertical cylinder conventionally, and moved the upper piston by means of a sliding frame driven up and down by a pair of secondary connecting-rods. In all of these two-strokes the scavenge air was supplied by a separate scavenge blower of some kind, just as it is in two-stroke truck engines made by the Detroit Diesel company. A few simplified two-stroke Diesels have been built using crankcase pumping just as found in the simple spark-ignition two-strokes that power chainsaws. Fascinating detail on Diesel design and performance—with many illustrations—is to be found in “Diesel Engine Reference Book,” edited by Lilly and published by Butterworths. The complex story of the development of Diesel engines for submarines can be found in Lyle Cummins’s encyclopedic “Diesels for the First Stealth Weapon: Submarine Power 1902-1945.” It is fascinating to follow the development as engines light enough to be useful proved too light to survive. Just as with

124

aircraft engines, detail design had to be refined and conditions of operation smoothed before engines capable of reliable operation on an 11,000-mile patrol came into being. Today nobody likes to design power gearing if he can possibly avoid it—the consequences of failure are too great. Gearing is also heavy, and every pound of weight added to ship, truck, or airplane is a pound less payload. For those reasons large marine Diesels are now directly coupled to their propellers and rotate at propeller speed—60 to 80rpm. To make the necessary power at such low revs requires that the engine make the largest possible number of power strokes per revolution. Today, that means two-stroke engines. Years ago, ambitious designers hoped to fire their engines even more frequently—by compressing air and burning fuel on both faces of each piston. Such a double-acting engine was very common in steam piston practice, but the higher temperatures of combustion as compared with steam made piston cooling and piston-rod lubrication just too difficult. Double-acting Diesels were built, but failed to reach a desirable level of reliability. The tiny model engines that have been sold as “Diesels” are actually closer in operating principle to the “running-on” of gasoline engines after their ignition is turned off. Run-on was common in the early days of emissions controls. In the true Diesel cycle, air is compressed until its temperature rises high enough that injected Diesel fuel ignites spontaneously (after a short delay while droplets evaporate and the resulting vapor gets hot enough) upon contact with it. In the model “Diesels” it is actually the heat of residual exhaust gas, mixing with fresh fuel-air charge, that causes ignition. Engines operating on this principle are a very trendy branch of research now, under the name HCCI, for Homogeneous Charge Compression Ignition. It is hoped that engines of this type may one day combine much of the economy of a Diesel with the low NOx of lean-burn spark-ignition engines.
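The reason compression alone can light the fuel is plain gas physics. Here is a rough sketch, treating the compression as adiabatic and ignoring heat loss to the cylinder walls, which in a real engine gives some of this temperature back.

```python
# Adiabatic compression temperature: T2 = T1 * r^(gamma - 1) for an ideal gas.
gamma = 1.4          # ratio of specific heats for air
T1 = 300.0           # intake air temperature, kelvin (about 27 C)

for r in (8, 12, 17, 20):          # gasoline-like through Diesel-like compression ratios
    T2 = T1 * r ** (gamma - 1)
    print(f"compression ratio {r:2d}:1 -> {T2:4.0f} K ({T2 - 273:4.0f} C)")

# Diesel fuel self-ignites at roughly 480-530 K, so the Diesel-range ratios
# put the charge air comfortably past ignition temperature before injection.
```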


Why did Napier design a Diesel aircraft engine just after WWII, when jets were the hot new technology? At the time, it was believed that decades would pass before jet engines lost their extravagant thirst for fuel. In the meantime, they used fuel too fast to fly the Atlantic directly. The planned route for early jets such as the De Havilland Comet therefore included a refueling stop at Gander, Newfoundland. The Napier Nomad-powered airplane, being highly efficient, but no faster than any other propeller aircraft, could fly London-to-New York directly. Because there was no need for refueling, it would arrive in New York first. History took a different direction, as the industrial nations of the earth poured resources into jet engine development, each vying with the other in the great game. Soon, Boeing 707s with more efficient engines had slammed the window of opportunity for Diesel engines in commercial aircraft. But not so fast. Today, with aviation gasoline down to a trickle, concern over lead has removed much of its detonation resistance. That has put the emphasis back on the Diesel as a way forward—at least for future light aircraft power and for some stealthy cruise missiles or drones that need the lowest possible exhaust temperature (which turbo-Diesels can deliver). As I was driving past work on Boston’s new tunnels some years ago, I heard a pile driver banging away, and its exhaust revealed it to be a Diesel. As the snarled traffic crept on, I tried to imagine how this worked. As it turns out, like a Sten or many aircraft guns, it “fires from an open bolt.” To start the machine, the heavy piston is raised, drawing in fresh air through cylinder wall ports, and is then dropped, compressing the air below it and striking the pile cap, which has a small combustion chamber in its center, just as many Diesel engines have their combustion chamber in their pistons. The motion of the falling piston has “cocked”

a fuel injector, which now delivers fuel into the chamber where it ignites in the usual way. The initial impact of the piston against the pile cap has started the pile moving, and the added strong push of Diesel combustion simultaneously continues the pile’s downward motion and throws the piston upward. As it rises, the piston first uncovers exhaust ports (you guessed it—this is a two-stroke) and then fresh charge ports, allowing the cycle to repeat. Presumably today these must have DPF, SCR, and all the other acronyms of pollution abatement. The Diesel engine was a German invention and the Germans were quick to see the advantages of this engine for ship and submarine propulsion. An energy-rich fuel and high efficiency combined to usefully reduce the volume once devoted to coal bunkers. Germany’s infamous “pocket battleships” of the between-the-wars period were therefore remarkably powerful for their tonnage, which was limited by the Versailles Treaty. This makes it strange that in WWII German tanks were powered by gasoline engines, not by that ideal-for-the-job high torque powerplant, the Diesel. It was the Russians who saw the advantages of Diesel tank engines, giving their outstanding T-34 a four-stroke V-12 Diesel that increased range, delivered wide torque, and at least moderated the hazard of fire. Everybody else seems to have rushed into production with whatever happened to already be tooled for some other purpose. Germany’s famous Tiger tank was powered by a large Maybach gasoline engine whose original purpose was surely Zeppelin power. Its heavy thirst led to a leading Allied strategy for defeating this thick-armored behemoth, armed as it was with the feared 88-mm gun—wait for it to run out of gas. The Americans, then at the peak of their slap-leather productivity and “can do” attitude, were surprisingly no better off than the Germans. American tanks were also largely gasoline-powered, some by converted air-cooled airplane radials (actually not a bad engine for the North

125

African desert) and some by a strange cluster of five Dodge flathead sixes. The M4 Sherman was not called “the Ronson” by its crews out of affection. Turbocharging transformed the automotive Diesel. Without the turbo, a Diesel is an overweight and underpowered engine, attractive only for its low fuel consumption. With the turbo it becomes essentially whatever you want it to be—a Le Mans winner, a luxury car engine, a super economy-car engine, an engine for trucks of any size. All that currently holds Diesels back in the US market is economics and attitude. It is economics that has stopped overseas makers from paying what it costs to certify more Diesel autos here, and it is an outworn attitude that regards Diesels as “industrial engines.” Give it time. Turbo Diesel Register Issue 70


Purely Academic The US apparently lacks an energy policy, but it needs one. What is needed is some common sense about what changes are possible, and realism with regard to what can be done now to conserve energy. Instead, what we hear are unrealistic calls to switch to romantic, but impractical, non-solutions such as electric or fuel cell vehicles for which there is no nationwide refueling scheme. Proponents of electric vehicles will reply that they can be plugged in anywhere, but only yesterday I read that Chevy’s new Volt gasoline/electric hybrid (in average commuter service) will need to recharge for ten hours on 110V or four hours on 220V—not exactly a welcome interruption in the Christmas road trip to the grandparents’ house. Hydrogen as a fuel available for combustion or fuel-cell use does not exist. There is no free, uncombined hydrogen. Hydrogen must either be broken off of petroleum hydrocarbons (discarding the 30% of the energy present in the carbon) or electrolyzed from water in a process that puts in more energy than can later be recovered. What this means is that there is no practical alternative to combustion-powered vehicles for all-around use. The market for electrics and plug-in hybrids will be only as “third cars” for very well-off families who can use the pick-up for towing the horse trailer, drive the internal combustion-engined sedan for over-the-road travel, and still have cash left over to add a bit of “green” by parking a $40,000 Volt next to the outdoor plug for the hedge trimmer. Let’s think about what could be behind this push for electrics. We know that 48.5% of US electricity comes from coal-fired plants, and we know that the coal is either strip-mined in Wyoming or tunneled out of the mountains in West Virginia and Tennessee. Does this mean that the push for electric vehicles is really intended to switch some energy use

from imported petroleum to domestic coal? My green friends tell me that “coal is green” because petroleum-based fuels have hidden costs that coal does not. And they are serious. Petroleum must be refined, which involves application of heat and catalysts in expensive and specialized plants. It must be transported in huge ships on voyages taking up to two weeks. The single two-stroke Diesel engine that powers many such ships makes more or less 100,000 horsepower and burns just over 30,000 pounds of heavy residual fuel per hour. Why does that figure sound familiar? Why, it’s the amount of fuel burned per hour by a Boeing 747 in cruising flight. The 747 weighs close to 400 tons at take-off, but the oil ship weighs a quarter-of-a-million tons. Hmm. Petroleum has political costs. To get it, we may have to station military forces here and there around the world. We may have to out-bid the Chinese to buy it. We may have to keep aircraft carriers cruising nearby to discourage any “negative thinkers” who may have other ideas. Okay, okay, I get it, but don’t we already pay those costs on April 15th of every year? Yesterday I listened to a debate about all this, and fast-talking heads were telling me what oil ought to cost. At the pump, I learned, there ought to be a long list of hidden costs—refining, transportation, political, health (by some form of math they had computed that coal-fired electricity plants kill 13,000 Americans annually, and that Diesel engine exhaust annually kills some other terrible number), education, etc. This academic stuff is great fun, but I have practical needs in my life that can’t wait 20 years while everything is ideally restructured. All these future transportation schemes sound to me like the GI’s lament in WW II—“If we had some ham, we could have ham and

126

eggs, if we had some eggs.” My wife and I have to get to the grocery store. My youngest son has to drive 20 miles to college classes (it is $11,000 more if he lives in the dorm). I have to pick up the middle son at an airport 67 miles away, home on leave for Christmas. We all have to live—and not in some ideal, theoretical, academic tomorrow way, but in a do-it-today practical way. Therefore if I make bread, I do it in the electric bread machine (thinking of the 13,000 people I’m condemning to death by using electricity from coal-fired plants!), rather than going out into the bright, sunny front yard with a zero-emissions solar cooker to do my baking in the environmentally-perfect Vandana Shiva way. (She is an Indian idealist who proposes that we can all live perfectly well by subsistence agriculture, using one bullock per family as our power source.) I don’t have a bullock, and it’s 18° in my front yard. Therefore I am interested in solutions that work and are actually available to me this minute. If our politicians in Washington DC thought along these lines instead of foolishly believing in unattainable academic nonsense, they would do as the Europeans do and encourage wider use of highly efficient Diesel vehicles. Yesterday I looked at lists of the ages of US coal-fired electricity plants. Some were built in the 1920s and are still operating. The lists told me that 76% of the plants were built before 1980. And rather than build new plants, the existing ones are being operated at higher percentages of capacity. Why is this? It is because new energy plants of any kind are very expensive, the policy future regarding such plants is uncertain, and nobody wants to bet the company on the proposition that this administration’s environmental policies will be smoothly continued by the next administration. Arm-wavers speak of a bright new future of “clean, 100% safe nuclear powerplants.” No one wants to build nuclear plants because the process for approving their construction takes


25 years. Plus, many people remember being told about “clean, 100% safe” before, and not having things quite turn out that way. Auto makers tell us that 40% of their research and development budgets are spent on what they call “contingency engineering.” Brain-stormers sit in Detroit conference rooms, dreaming up the kinds of safety and emissions features the EPA, NHTSA, and other agencies may in future insist upon. The more likely of these are then funded for limited development in hope that such head starts will save money if the concepts become law in future. Something similar must happen in the power industry. What if zero sulfur emissions become law for all combustion power plants? What if carbon recovery is mandated? (What I read yesterday said this will add 40% to the cost of electricity, but there are lower figures bandied about—believe them if it pleases you—no one really knows.) What if power demand increases by 25% because of vehicle electrification? What if tough new environmental laws are passed by well-meaning green coalitions in Wyoming or West Virginia legislatures, pushing up the price of coal? Power companies have no idea what lies ahead, so they sit tight, keep costs under control, and cross their fingers. If my electric bill is just under $200 a month I think I’m doing well. Sorry about the 13,000 people. It is precisely because we can’t know what lies ahead that our best strategy is conservation, and not some idealistic total change-over to electric or fuel-cell vehicles. An element in conservation is use of the best available technology for vehicle fuel economy—the Diesel engine. Some folks just don’t get it... Turbo Diesel Register Issue 71



Diesels in the USA I thought it time to drop you a line. “Exhaust Note” is always the first article I read when my TDR comes in the mail. The reason is that I know that the Cummins under the hood is the heart and soul of my 2007 Ram. Every time I drive that truck, I am amazed and impressed by the power and efficiency of it. A gas engine with the same displacement wouldn’t get near the mileage nor have the power to haul my trailer with horses. After reading “Purely Academic” in Issue 71, I recalled my first experience with diesel power. It was 1982 when the price of gas shot up to a whopping 65 cents a gallon. The truck I was driving at the time was getting about 10mpg. I needed to know what my options were, for there was no way I was going to pay that much for fuel. One day at work a friend and fellow employee told me about a new truck he had just purchased which was getting 40mpg. My response was, “No way.” That evening in the parking lot after work he showed me his new ride. It was a new Chevy LUV (light utility vehicle) truck, purchased at a nearby dealership. A few days later I decided to check them out at the dealer and drive one. I soon realized it was an Isuzu, only with a little difference to the grill and other minor details. The engine was 2.2-liter, naturally aspirated, four-cylinder with a six-speed manual transmission. It took me little time to decide this was the answer and to make the purchase, and yes, it did get 40mpg. I was happy as a clam shifting through all those gears knowing I was saving money on fuel. The power was lacking a bit, but I could run circles around the diesel rabbit. Did I mention, it got 40mpg? At the time I didn’t know I would have that little truck for ten years with need for a larger one only on a few occasions. I did regular maintenance on it and had to change the glow plugs in it twice. It got 40mpg. When the odometer read 175K miles I thought it might be time to get rid of it. I put a “for sale” sign on it and another friend at work bought it. He drove it another 100K miles and sold it having had no trouble with it. It still got 40mpg.

Since I got rid of that little truck, I’ve seen a few still on the road. One day I followed one into a mini-mart to ask the owner if he would sell it to me. The man laughed and said he had been approached before with the same question. I knew no auto manufacturers sold anything close to this little wonder and often wondered why. The Isuzu engine has been around forever and has other applications, so I thought the reason they were no longer sold was that they couldn’t meet the EPA’s pollution requirements. It got 40mpg. So here is the deal: why hasn’t Cummins, or any other manufacturer, designed a 2.5-liter, or thereabouts, in-line, four-cylinder, aspirated 16-valve diesel engine for a mid-sized truck (Dakota)? With a six-speed manual transmission the combination would sell like hotcakes. Mileage would be fantastic and the power would be all you could want. For daily use and commuting it would perform most of the tasks required of a truck. The Japanese knew this 30 years ago, and even lacking the technology that we have today, still sold huge numbers of small diesel trucks. Would you pass this on to your friends at Cummins and Dodge to get the ball rolling before the foreign market does? We need to buy “American made” now more than ever. I’ll be the first in line to buy one, not to replace my Ram but in addition to it. And, oh, did I mention that little truck got 40mpg? Doug Tourville Doug’s letter expresses a disappointment that, as a pro-Diesel (as we are writing in the “Exhaust Note” column, please note the capital D) audience we all feel. If you want Kevin Cameron’s chapter and verse about the political aspects of Diesel power in America, I will refer you to the TDR’s web site and the left control panel. Click on the “Cameron Collection” and focus your attention on numbers 35, 39, 40, 60, 67, 69 and, most recently, 70. Doug also mentions that the Isuzu engine was “around forever,” but the gains made in Diesel engines since then have come largely from turbochargers; see “Cameron Collection” numbers 42, 47, 50 and 70. While this audience can agree on and appreciate the marvels of the Chevy LUV truck with the little Isuzu engine, the manufacturers’ number-crunchers at

128

GM, Ford, Chrysler, Toyota, Fiat, Nissan, Mercedes, BMW, et al, would not be able to argue the profitability of a small Diesel light utility vehicle. Yet, to address your desire for a LUV-type product, the answer could be Mahindra. There is already an enthusiast web site for the truck, www.mahindratruckforum.com. However, as we reported in Issue 57, page 70 (four years ago), and again in this issue on page ___, this truck is mired in government and distribution red tape. There is progress. As G.R. Whale noted, the recently released miles-per-gallon rating from the EPA on this truck is 19/21. But what a disappointment. This does not match the 40mpg of the Chevy LUV. Whale has suggested that it is likely the actual mpg numbers would be better than the EPA estimates, bringing them in line with a gasoline-powered Toyota Tacoma at 19/25. Nevertheless, what a disappointment. And, since I have alluded to the smallish Toyota Tacoma, let us reflect on vehicle sales and conjecture about implications in making a case for a small truck. The numbers: [unit-sales table for ’09, ’10, and ’11: Toyota Tacoma, Ford Ranger, Dodge Dakota, Chevy Colorado.] Notice a trend? If we consider the current price premium for diesel fuel eating away at the cost advantage of a Diesel power plant, and the decline in sales of light utility vehicles, as a businessman I’m not standing in line for a Mahindra dealership. Maybe that’s why there has been more talk than action about Mahindra for four years? There is my logic. I forwarded Doug’s letter to Kevin Cameron and Kevin gives the following response: What happened is that at one time, Diesel was the darling of the EPA, and during that time some fairly economical vehicles were offered for sale in the US. Then, in the later 1980s it was discovered that terrible carcinogens were attaching themselves to carbon clusters in Diesel particulates. Eek! Overnight, diesels became anathema,


and a long list of emissions limits was “added to their bill.” Because these requirements are not technically easy to meet, and because the solutions are not cheap, few companies want to bet that a significant number of Americans can overcome their prejudices (Don’t even walk near the Diesel pumps at service plazas—the whole place is slippery! Just touch the pump and your hand stinks all day. Diesels rattle and smoke! Diesels are environmental disasters! Diesels are low-class—only for working-class stuff like ships and trains) to buy enough to pay the R&D bill. So, no Diesels for the mass market. The perception is that they’re just for frickin’ intellectuals. Those who buy diesels are likely to be people who can’t just pull the lever on voting day and let responsible persons tell them what’s best for them. It’s a propaganda problem. Greens think electricity is in the wall, and is clean and pure, not generated half from coal, one-fifth from natural gas, and one-fifth from nuclear. The people pushing electric cars are delighted that this is so. Hey! Want everlasting life? Sure ya do! Well, forget Baptist, Catholic, and the rest of that. And buy electric! Electric is salvation. Kevin Cameron TDR Writer Ouch, I think we know how Kevin feels. We must have hit a raw nerve. For further commentary about the plight of the Diesel engine, I refer you to excerpts from a column in AutoWeek, 9/13/10 by writer Denise McCluggage. “Scene: A clutch of motoring press gathered before an intoning car-company guy. Guy singing the praises of its model electric. Ah, so. All car songs these days seem to be in the key of E. Electric smelectric. I grow weary. But the ears of carmakers are turned to consultants and focus groups who persistently whisper therein that American car buyers want to be plugged in.

"I ask Guy what is his company's excuse for not at least offering a diesel engine. So a continuing large number of American car buyers still believe that diesels are noisy, dirty and trucklike—doesn't it matter that they are wrong?

"No, these focus-group folks simply collect any misinformed mutterings as if they were gems and glitter them on to their clients. Their professional conclusion: Americans won't buy diesel. Thus, more electrics are announced and more diesels are canceled.

"'Well,' Guy answers me in a scripted tone, 'our research shows that with the additional cost of a diesel engine and the uncertainty in diesel fuel prices, it would take seven years to achieve payback.' Ergo: Americans won't buy diesels.

"Later, one-on-one with the Guy, I say, 'You're wrong about seven years to payback.' His antennae bristle. 'With a diesel, payback is immediate—it's called torque and range.' (To his credit, Guy acknowledges that I have a point.)

"About that 'payback' business, does anyone ever talk about the payback time on, say, leather seats? And what about high-end sound systems? Or on a sunroof? With the Diesel you experience real-time payback. Like when your right foot prompts a diesel engine to swell quickly into action and then roll on all day without ever whining to be fed.

"Not to mention the good a diesel engine does for your resale value. Payback, indeed."

But let's be practical and consider the bottom line: If Mr. Big, the auto executive, were to ask you to approve the Diesel project for a new incarnation of the LUV, and the success of this LUV would determine your continued tenure at the auto company, would you sign on the dotted line? I think I'll let Mahindra go first.

Robert Patton
TDR Staff

Turbo Diesel Register Issue 72
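Guy's "seven years to achieve payback" and the editor's point about the diesel fuel price premium both come down to the same simple arithmetic: divide the extra cost of the diesel option by the yearly fuel savings. Here is a minimal sketch of that calculation; every input (engine premium, annual mileage, fuel prices, fuel economies) is an assumption chosen only to show how the numbers interact, not data taken from either column above.

```python
# Payback arithmetic behind the "seven years" claim.
# All inputs are assumptions for illustration; neither column above supplies them.

def payback_years(engine_premium, miles_per_year,
                  gas_mpg, gas_price, diesel_mpg, diesel_price):
    """Years for fuel savings to cover the extra cost of the diesel engine."""
    gas_cost_per_year = miles_per_year / gas_mpg * gas_price
    diesel_cost_per_year = miles_per_year / diesel_mpg * diesel_price
    annual_savings = gas_cost_per_year - diesel_cost_per_year
    if annual_savings <= 0:
        return float("inf")  # the diesel never pays for itself on fuel alone
    return engine_premium / annual_savings

years = payback_years(engine_premium=2500.0,       # assumed cost of the diesel option
                      miles_per_year=12000.0,      # assumed
                      gas_mpg=21.0, gas_price=3.60,        # assumed
                      diesel_mpg=28.0, diesel_price=3.90)  # assumed
print(f"Payback in about {years:.1f} years under these assumptions")
```

Nudge any one of those inputs a little and the answer moves by a year or more, which is why the payback argument is so easy to have and so hard to settle.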

129


Conflicting Interests

This morning I was reading that three groups with environmental interests have written to President Obama urging him to adopt the strictest of four proposed rates of improvement for fuel consumption in light vehicles. Such vehicles are already having to step up from the old 27.5mpg to the new 2016 standard of 35.5mpg. (These mileage numbers are CAFE, or Corporate Average Fuel Economy, measured in laboratory driving cycles.) Such improvements are sensible national self-defense in a world of rising petroleum price and uncontrolled deficit spending.

For the period 2017-2025, light vehicle fuel economy may be required to improve at a yearly rate between 3 and 6%—the number to be chosen in upcoming deliberations. The highest number would eventually raise the required fuel economy to 62mpg. Automakers are saying (with their voices suitably amplified by the best available publicists and lobbyists) that the new technology required to meet these standards could push car ownership out of reach of many Americans. Like the President, business has its legitimate interests which it must defend. So have consumers.

What would that new technology be? To a large extent, that might be determined by the (bankrupt) State of California, whose influential Air Resources Board (CARB) will make relevant decisions this coming November. Back a couple of years ago it looked as though CARB would follow its usual path—seeking above all to reduce emissions without much concern over technology cost or fuel consumption. Remember that California's legitimate interest over the past 45 years has been to deal with green and yellow air over its auto-clogged cities. However, there have been some rays of a different kind of sunlight, as when CARB, formerly dead-set against any kind of two-stroke engine, actually read the specs on Bombardier's direct-injection watercraft two-strokes and decided they looked pretty okay.
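To put rough numbers on the 3 to 6 percent improvement rates just mentioned, here is a minimal compounding sketch. The 35.5mpg baseline comes from the paragraph above; treating 2017 through 2025 as nine equal yearly steps is my assumption, and the official 62mpg ceiling was derived from a slightly different baseline, so the output only approximates the rulemaking math.

```python
# Rough compounding check on the proposed CAFE improvement rates.
# Assumptions (mine, not the rulemaking's): a 2016 baseline of 35.5 mpg
# and nine yearly steps covering model years 2017 through 2025.

baseline_mpg = 35.5
yearly_steps = 9  # model years 2017 through 2025

for annual_gain in (0.03, 0.06):  # the low and high ends of the proposed range
    target = baseline_mpg * (1.0 + annual_gain) ** yearly_steps
    print(f"{annual_gain:.0%} per year compounds to about {target:.1f} mpg by 2025")

# The 6% case lands near 60 mpg, in the neighborhood of the widely quoted
# 62 mpg figure; the exact number depends on the baseline and year count
# the regulators actually used.
```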

CARB's decision on their 2016 Low-Emission Vehicle III standard was expected to hold Diesels to such difficult emissions standards as to amount to forbidding them. Industry observers regarded this as part of a "push" to make electric vehicles (as if, after Enron, California has electricity to spare) more attractive. Now some are saying that CARB may be considering that highly efficient Diesel engines do have a legitimate place in the nation's mix of prime movers. CARB's decision will carry a lot of influence, for EPA often follows CARB's lead. A softer position, balancing the need for improved fuel consumption against endless improvements in air quality, would be welcome news for Diesel users.

If you sit down at your computer and Google "Diesel combustion lift-off," you will be presented with nice four-color illustrations of the combustion of Diesel fuel sprays. "Lift-off" is the point in the spray beyond which light is being generated by the beginnings of combustion. The spray billows out after that point as heat release causes rapid gas expansion. What comes out of the injector is a narrow cone of tiny droplets—most sized between .0002 and .001-inch diameter—moving at between 650 and 1650 feet per second (which is why the spray from an injector will cut your skin). This jet of droplets entrains air at its surface, dragging it along, and some droplets evaporate into that entrained air to form a mixture of air and fuel vapor. This premixed air and vapor is like the premixed charge in a spark-ignition engine.

The usual explanation of Diesel combustion is that pure air is drawn into an engine cylinder and is then compressed (in truck engines a typical compression ratio is 16-18:1—much higher than is possible without knock in gasoline-fired engines) until its temperature is well above the fire point of the fuel. Fuel is then injected, and it ignites from contact with the hot air. Not quite! Heat is required to evaporate liquids, and in the case of the speeding droplets of Diesel fuel, that heat comes from the hot air into which they are injected. Evaporation of some fuel

130

cools the air, delaying the beginning of combustion. This is why the fuel cannot ignite the moment it enters the cylinder, but instead moves some distance (the "lift-off length") before contact with fresh hot air brings the temperature of the fuel-air vapor up enough to result in actual combustion. The time between the beginning of injection and the appearance of flame is very reasonably called the "ignition delay period" and is a few crank degrees—say 5 to 7.

As the surface of the spray ignites (don't even ask about the chemistry—it's wonderful stuff, with chains of events clanging away that only the strangest of experts begin to understand), the flame originates and propagates where the mixture is close to chemically correct—and is very fast. This part of Diesel combustion is the premixed combustion phase—the very short, intense time during which that part of the fuel that has previously evaporated and mixed with air flashes into flame. Energy release rate is very high and this phase may take just 5 crank degrees.

Meanwhile, elsewhere in the spray, conditions are settling in for the long-haul part of Diesel combustion—the diffusion flame. Imagine a cluster of droplets, rapidly evaporating in the hot air, and being further helped to evaporate by infrared energy coming from a nearby flame. Close to the cluster there is only fuel vapor, which cannot burn without atmospheric oxygen. Far from the droplet, there is only air, which cannot burn without fuel. Fuel vapor diffuses away from the droplet, its molecules driven this way and that by the constant buzz of collisions in the high-temperature gas. Oxygen molecules diffuse toward the fuel droplet. Where they meet and form a combustible mixture, flame occurs. That flame cannot form a moving flame front as it does in the premixed fuel-air charge of a spark-ignition (gasoline) engine. If the flame were to move toward the fuel droplet, it would encounter a mixture too rich to burn, and it would go out. If it were to move away from
the droplet, it would be moving into leaner conditions, and again, it would be extinguished. So the flame sits still, fed by the steady outward diffusion of fuel vapor and the inward diffusion of atmospheric oxygen. This "diffusion flame" consumes its reactants at the rate they diffuse. For this reason, this part of Diesel combustion takes much longer than the premixed combustion phase. It goes on and on at a very moderate rate of energy release, as the crank rotates through perhaps 40 degrees centered around TDC.

If you think about this model, you can imagine different scenarios. One is the single-droplet case, another would be a cluster of droplets surrounded by a diffusion flame, and yet another might be that the entire volume of the droplet spray is so fuel-rich that flame occurs only on its outer surface.

In combustion, hydrocarbon fuel molecules must be broken up by thermal energy—that is, their hydrogens are knocked loose by the billiard-ball-like collisions with other agitated and fast-moving molecules. The same happens to the two-atom ("diatomic") molecules of oxygen. Once set loose, these fragments can combine to make dozens of possible "relationships," some of which liberate a lot of energy. Hydrogen and oxygen combine to form water fairly easily, but the carbon chain backbones of the fuel molecules aren't so fast. The longer the carbons bake inside of droplet clusters or within the cooler fuel spray, the greater the chance that instead of finding true love by combining with oxygen atoms (forming carbon monoxide and carbon dioxide), they will find only each other and form carbon-atom clusters. This is the origin of "Diesel particulates" or soot.

Carbon is very attractive stuff, which is why it is used to extract bad-tasting active compounds from whiskey or cigarette smoke. Rich, hot regions in the fuel spray contain carbon rings, one or more of whose hydrogens have been knocked off. When rings bond to each other to form "polycyclics," some of the resulting compounds turn out to be nasty carcinogens.
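To get a feel for how brief these events are in clock time, here is a minimal sketch converting the crank-angle figures above into milliseconds. The 5 to 7 degree ignition delay, roughly 5 degree premixed burn, and roughly 40 degree diffusion burn come from the text; the engine speeds are my own assumptions, chosen only for illustration.

```python
# Convert the crank-angle durations quoted above into milliseconds.
# Event durations (in crank degrees) come from the text; the engine
# speeds are assumed purely for illustration.

def degrees_to_ms(crank_degrees: float, rpm: float) -> float:
    """Time in milliseconds for the crankshaft to sweep the given angle."""
    degrees_per_second = rpm / 60.0 * 360.0
    return crank_degrees / degrees_per_second * 1000.0

events = {
    "ignition delay (~6 deg)": 6.0,
    "premixed burn (~5 deg)": 5.0,
    "diffusion burn (~40 deg)": 40.0,
}

for rpm in (800, 1800, 3000):  # idle, cruise, near redline (assumed speeds)
    print(f"\nAt {rpm} rpm:")
    for name, degrees in events.items():
        print(f"  {name}: {degrees_to_ms(degrees, rpm):.2f} ms")
```

Even at a lazy 800 rpm the whole diffusion burn is over in less than ten milliseconds, which is the point of the exercise.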

Where do the carbon rings come from? A large percentage of the molecular structures in Diesel fuel are based on such rings—probably because (a) ring structures are highly stable, and have persisted in underground petroleum reservoirs for millennia, and (b) because plants—the precursors of petroleum—employ ring structures in their cell walls. Polycyclics are attracted to the cruising carbon balls, and unless combustion turbulence carries those balls to someplace hot and oxygen-rich enough to burn them up, they will sail right on out through the exhaust valve when it opens, and become part of the exhaust stream.

You can sort of see from this general picture why Diesel manufacturers are raising injection pressure, providing injectors with greater numbers of ever-tinier holes, and injecting in short, multiple events rather than in one long spray. The last thing they want is long-lasting hot, rich zones in which soot forms easily. Multiple spray events distribute fuel into fresh regions of air, discouraging the formation of long-lasting rich zones. High injection velocity penetrates compressed air made even more dense by both turbocharging and by stuffing in cooled EGR. That dense air is like an array of heavy-duty football linemen, and it's going to take a lot of energy to break through their line.

Then there's the little matter of nitrogen oxides—NOx in the official language. NOx is a step in a complex smog-formation process that results in the creation of ozone (molecules consisting of three rather than the usual two oxygen atoms). Breathing becomes difficult and urban air assumes that greenish-yellow cast that I saw for the first time in Southern California in 1971. Normally, nitrogen is highly stable and stays that way. Fortunately for the whole world, it takes about twice as much energy to separate the two atoms of a nitrogen molecule as it does to separate two oxygens. If it were easier to knock nitrogen apart, the next lightning bolt could set the air on fire. (Air is about 78% nitrogen and 21% oxygen.) This

131

is something that worried some of the atomic scientists in 1945 as they made ready to test their first A-bomb. Nitrogen remains in the form of two-atom molecules until heated to something like 2800°, above which point the rate of thermal molecule-busting increases rapidly. What this means for combustion in engines is that some loose, single nitrogen and oxygen atoms will be set free in the hottest parts of the flame, and may combine with each other to form nitrogen oxides. Three pathways, called the "Zel'dovich mechanism," all lead to NOx formation.

As we know, one scheme for preventing its formation is to cool things off by delaying injection (which normally begins ~20 degrees BTDC) until TDC, but that naturally reduces power and increases fuel consumption. Another is to dilute the air charge in the cylinder with cooled exhaust gas recirculated from a previous cycle. This "cooled EGR" is mostly carbon dioxide and water, and so cannot burn or contribute to power. Its presence reduces flame temperature, thereby stopping a lot of NOx at its source. Stuffing this extra inert gas into the cylinders requires more work from our busy turbocharger, but the less NOx we produce in the cylinders, the less expensive technology we will have to tack on downstream to react that smog-forming stuff into harmless, normal diatomic nitrogen and oxygen.

Conflicting interests are not confined to politics. The unpalatable truth is that whatever we do to suppress soot formation fosters NOx production, and vice versa. The more vigorously we stir the fuel droplets into the air and heat up combustion in an attempt to burn up carbon before it forms soot, the more fuel burns at high temperature and the more NOx is formed. When we cool things off to suppress NOx formation, more soot forms because carbon burns best in hotter flame. So far, no way has been found to improve everything at once—so we are stuck with particulate filters to collect and periodically burn the soot, and with either SCR (Selective Catalytic Reduction—
the famous ammonia-from-urea process for rendering NOx into harmless form) or with NOx trap-and-burn. None of these technologies is cheap, especially as lower and lower levels of soot and NOx are legislated. Good performance in the lab isn't enough—the makers also have to prove to CARB and EPA that systems will manage themselves (lots of computers and sensors!) and continue to function reliably for years and years (gold-plated connectors, fancy water-exclusion seals, durable catalysts!). Those regulatory bodies are also skeptical that people will remember to fill their urea tanks.

Therefore we hope that (a) the clever folks in the manufacturers' emissions labs will learn wonderful new and cheaper ways to clean up emissions, and (b) the folks at CARB will show some restraint in November and allow the powerful fuel-saving ability of Diesel engines to find wider application in the US transportation mix.

Turbo Diesel Register Issue 73
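As a footnote to the soot-versus-NOx tradeoff above, here is a minimal sketch of why cooling the flame with cooled EGR or retarded injection cuts NOx so sharply. Thermal NO formation is roughly Arrhenius in character; the activation temperature of about 38,000 K used below is a textbook-order value for the rate-limiting nitrogen-splitting step, and the two flame temperatures are assumptions, so treat the printed factor as an illustration of the exponential sensitivity rather than a measurement.

```python
# Illustrative only: thermal NO formation rate is roughly proportional to
# exp(-Ta / T), where Ta is an activation temperature on the order of
# 38,000 K for breaking up N2 (a commonly quoted textbook figure) and T
# is the local flame temperature in kelvin. Flame temperatures are assumed.

import math

ACTIVATION_TEMP_K = 38_000.0  # assumed order-of-magnitude textbook value

def relative_no_rate(flame_temp_k: float) -> float:
    """Relative (unnormalized) thermal NO formation rate at a given flame temperature."""
    return math.exp(-ACTIVATION_TEMP_K / flame_temp_k)

hot = relative_no_rate(2800.0)     # undiluted, hot diffusion flame (assumed)
cooled = relative_no_rate(2400.0)  # flame diluted with cooled EGR (assumed)

print(f"Cooling the flame from 2800 K to 2400 K cuts thermal NO formation "
      f"by a factor of roughly {hot / cooled:.0f}.")
```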

132


Simplicity and Something to Think About

A basic appeal of the Diesel engine has been its rugged simplicity. I was reminded of this recently when watching a video of illegal Amazon-basin hydraulic miners starting a big truck engine coupled to a water pump. A big rope was wrapped around the engine's clutch drum several times, and then a group of lusty lads just walked off with it, spinning the engine fast enough to start it. No battery, no starter, no microprocessors, no sensors. If our world comes apart as financier Mr. George Soros is lately predicting, the few jerk-pump Diesels still in existence can be started in the same manner, and will go on carrying the freight as long as fuel can be found.

Hydraulic mining uses a jet of water from a big pump to blast away vegetation, topsoil—everything in its path—to find a bit of gold. It is not the favored method of environmentalists, but I reckon those fellows pulling that big starting-rope are more concerned with their own day-to-day survival than they are with Mr. Gore's movie or the mathematical modeling of climate.

We all know that Diesel combustion is difficult to make complete. As a result, clusters of unburned carbon atoms—Diesel particulates—are blown out of the cylinder, each perhaps carrying some molecules of PAH—polycyclic aromatic hydrocarbons—sticking to it (the fancy term is 'adsorbed', meaning stuck onto). These are compounds made up of two or more (hence the 'poly') joined carbon rings (hence the 'cyclic'). Some of these, by mimicking the chemistry of compounds used in our bodies' metabolism, can become incorporated within us to act as carcinogens. The word 'aromatic' means made up of six carbons in a ring, and it so happens that the structure of plant cell walls is made of such compounds. So it's not surprising that we find them in petroleum—then in Diesel fuel, and finally, in Diesel exhaust.

When crude petroleum is refined to produce products such as gasoline,
Diesel, etc., it is run through a distillation tower. The basic idea is the same as in the operation of the moonshiner's still. When you heat a mixture of compounds differing in molecular weight, the lighter compounds, consisting of the fewest atoms, boil away first. Then, at higher temperatures, come the heavier fractions. The moonshiner's still separates ethyl alcohol from the water-and-alcohol mixture in the mash.

In the distillation towers of the petroleum industry, condensation pans are set at various heights in the tower. Crude oil is heated at the bottom of the tower, and vapor rises from it. The lightest fractions—gases such as methane—go right out the top of the tower. (I used to see such gases being "flared" from New Jersey refineries when I was a little boy riding in my parents' car on trips.) Compounds in the gasoline range, consisting of chains or rings of 5 to 8 carbons, condense near the top of the tower, and are piped away for further processing. No. 1 fuel oil has a range of chain lengths of 9-16 carbons, No. 2 is 10-20, and so on down to No. 6 residual fuel, with 20-70 carbon chains. Condensing in the lowest pan is asphalt—really heavy stuff that we make roads out of.

Thanks to the insights of chemistry, any of the heavier distillates can be "cracked"—that is, broken down into lighter fragments—through a combination of heat and catalyst. Think of the catalyst as the mugger who grabs you from behind, allowing his helper to feel for your wallet. The catalyst momentarily grabs onto and changes the shape of the molecule, causing a strain that makes the desired chemical change a lot more likely. Once the target molecule is broken in this way, the pieces pop off of the catalyst molecule, which is then ready to repeat.

The problem with these higher-molecular-weight hydrocarbons is combustion. If you were assigned the task of taking apart simple Tinkertoy structures in limited time, it would be easy. But if we supply Tinkertoy

133

structures made up of more and more knobs and sticks, you would do a less and less complete job of disassembling them in that limited time. Combustion is just the same. Before the carbon and hydrogen atoms in any hydrocarbon molecule can unite with oxygen, they must first be knocked apart to make them available. The more complex the structure, the longer that disassembly takes. As we go to longer chain-length fuels, the result is more unburned particulates in the exhaust, and less-complete combustion.

If we want to run our engine near other people, the law has now decided, we have to filter out those particulates. The equipment that can do this costs money, and we middle-class folk have only a limited amount of that. Automakers in the US have generally decided that the cost of Diesel emissions compliance is just too high for the automobile mass market.

Out on the oceans, giant marine Diesels do an efficient job of moving the world's goods over vast distances. China imports iron ore from South America, Japan imports oil from the Middle East and Alaska, and manufactured goods go out in all directions. So efficient is this process that VLCCs (very large crude carriers) use a volume of heavy residual fuel that is less than 1% of the volume of crude oil they are carrying.

However, burning big fuel molecules that are 20-70 carbon atoms in length presents special problems. The State of California wants Diesel motorships operating within 200 miles of its coast to switch to a lower-molecular-weight fuel so that ship exhaust will contain fewer big, black, gooey chunks. Understandable, and we know by reading industry publications that makers of such marine Diesels are hard at work on technologies to clean up their combustion. Everyone knows that when it comes to emissions regulations, "As goes California, so goes the world."

Another thing we notice is that Diesel fuel price goes up and down, and up
again. Even consoling ourselves with the fact that there is more energy in a gallon of Diesel than in a gallon of gas, the price is never easy to pay. The salad days, when the oil majors estimated the production cost of a barrel of Arabian light crude at a nickel, are over and they aren't coming back. Today we have to pay these companies to operate in unfriendly places like a mile down in the sea or (dare I say it?) jolly old Iraq, places in which you never know what little disaster tomorrow may bring. A giant blowout with incalculable loss of corporate reputation? Angry folks with automatic weapons, little disposed to compromise? As the old phrase has it, "They pass along the savings to us."

All of the above makes it especially interesting to read in the August 8, 2011 issue of Autoweek magazine that the US Department of Energy's Argonne National Lab has a little project going to investigate the operation of Diesel engines on gasoline. If the inevitable problem of gasoline's very poor cetane rating (its ability to autoignite) could be solved, the low molecular weight of gasoline could instantly erase much of the Diesel's problem with particulates and PAHs. The engine would continue to display the Diesel's low fuel consumption. Could this be the best of both worlds?
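To put rough numbers on that "low molecular weight" point, here is a minimal sketch built on the carbon ranges given earlier in this column. It assumes simple straight-chain alkanes (CnH2n+2), which real fuels are not, so the molecular weights are only a crude indication of how much bigger the Tinkertoy structures get as we move from the gasoline range toward residual fuel.

```python
# Crude comparison of molecule size across the fuel fractions named above.
# Assumes straight-chain alkanes, CnH(2n+2); real fuels are messier, so
# these weights only show the trend, not actual fuel chemistry.

FRACTIONS = {
    "gasoline range": (5, 8),
    "No. 1 fuel oil": (9, 16),
    "No. 2 fuel oil": (10, 20),
    "No. 6 residual": (20, 70),
}

def alkane_weight(carbons: int) -> float:
    """Approximate molecular weight of CnH(2n+2) in g/mol (C = 12.011, H = 1.008)."""
    return carbons * 12.011 + (2 * carbons + 2) * 1.008

for name, (low, high) in FRACTIONS.items():
    print(f"{name}: roughly {alkane_weight(low):.0f} to {alkane_weight(high):.0f} g/mol")
```

A gasoline-range molecule tops out around 114 g/mol on this crude estimate, while the residual-fuel range runs toward 1,000 g/mol, which is the disassembly problem the Tinkertoy analogy describes.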

This might very easily come to nothing—lots of research ends because the grant that backs it is not renewed, or because zealous government cost-cutters close the research institute and sell its equipment to the Chinese, or because problems uncovered resisted solution. The investigator at Argonne is Steve Ciatti, and his testing began with a 1.9-liter GM automotive Diesel. Results are said to have been encouraging. Basic changes to the engine were unnecessary—the trickery is in the details of fuel injection. The short article notes that success in operating Diesels on gasoline (or a gasoline-like fuel) might lead to substantial reduction in the cost of necessary emissions-reduction equipment.

Something to think about.

Turbo Diesel Register Issue 75

134

