
Table of Contents

Mastering Elasticsearch Second Edition

Credits

About the Author

Acknowledgments

About the Reviewers

www.PacktPub.com

Support files, eBooks, discount offers, and more

Why subscribe?

Free access for Packt account holders

Preface

What this book covers

What you need for this book

Who this book is for

Conventions

Reader feedback

Customer support

Downloading the example code

Errata

Piracy

Questions

1. Introduction to Elasticsearch

Introducing Apache Lucene

Getting familiar with Lucene

Overall architecture

Getting deeper into Lucene index

Norms

Term vectors

Posting formats

Doc values

Analyzing your data

Indexing and querying

Lucene query language

Understanding the basics

Querying fields

Term modifiers

Handling special characters

Introducing Elasticsearch

Basic concepts

Index

Document

Type

Mapping

Node

Cluster

Shard

Replica

Key concepts behind Elasticsearch architecture

Workings of Elasticsearch

The startup process

Failure detection

Communicating with Elasticsearch

Indexing data

Querying data

The story

Summary

2. Power User Query DSL

Default Apache Lucene scoring explained

When a document is matched

TF/IDF scoring formula

Lucene conceptual scoring formula

Lucene practical scoring formula

Elasticsearch point of view

An example

Query rewrite explained

Prefix query as an example

Getting back to Apache Lucene

Query rewrite properties

Query templates

Introducing query templates

Templates as strings

The Mustache template engine

Conditional expressions

Loops

Default values

Storing templates in files

Handling filters and why it matters

Filters and query relevance

How filters work

Bool or and/or/not filters

Performance considerations

Post filtering and filtered query

Choosing the right filtering method

Choosing the right query for the job

Query categorization

Basic queries

Compound queries

Not analyzed queries

Full text search queries

Pattern queries

Similarity supporting queries

Score altering queries

Position aware queries

Structure aware queries

The use cases

Example data

Basic queries use cases

Searching for values in range

Simplified query for multiple terms

Compound queries use cases

Boosting some of the matched documents

Ignoring lower scoring partial queries

Not analyzed queries use cases

Limiting results to given tags

Efficient query time stopwords handling

Full text search queries use cases

Using Lucene query syntax in queries

Handling user queries without errors

Pattern queries use cases

Autocomplete using prefixes

Pattern matching

Similarity supporting queries use cases

Finding terms similar to a given one

Finding documents with similar field values

Score altering queries use cases

Favoring newer books

Decreasing importance of books with certain value

Position aware queries use cases

Matching phrases

Spans, spans everywhere

Structure aware queries use cases

Returning parent documents having a certain nested document

Affecting parent document score with the score of nested documents

Summary

3. Not Only Full Text Search

Query rescoring

What is query rescoring?

An example query

Structure of the rescore query

Rescore parameters

Choosing the scoring mode

To sum up

Controlling multimatching

Multimatch types

Best fields matching

Cross fields matching

Most fields matching

Phrase matching

Phrase with prefixes matching

Significant terms aggregation

An example

Choosing significant terms

Multiple values analysis

Significant terms aggregation and full text search fields

Additional configuration options

Controlling the number of returned buckets

Background set filtering

Minimum document count

Execution hint

More options

There are limits

Memory consumption

Shouldn’t be used as top-level aggregation

Counts are approximated

Floating point fields are not allowed

Documents grouping

Top hits aggregation

An example

Additional parameters

Relations between documents

The object type

The nested documents

Parent–child relationship

Parent–child relationship in the cluster

A few words about alternatives

Scripting changes between Elasticsearch versions

Scripting changes

Security issues

Groovy – the new default scripting language

Removal of MVEL language

Short Groovy introduction

Using Groovy as your scripting language

Variable definition in scripts

Conditionals

Loops

An example

There is more

Scripting in full text context

Field-related information

Shard level information

Term level information

More advanced term information

Lucene expressions explained

The basics

An example

There is more

Summary

4. Improving the User Search Experience

Correcting user spelling mistakes

Testing data

Getting into technical details

Suggesters

Using the suggest REST endpoint

Understanding the REST endpoint suggester response

Including suggestion requests in query

The term suggester

Configuration

Common term suggester options

Additional term suggester options

The phrase suggester

Usage example

Configuration

Basic configuration

Configuring smoothing models

Configuring candidate generators

Configuring direct generators

The completion suggester

The logic behind the completion suggester

Using the completion suggester

Indexing data

Querying data

Custom weights

Additional parameters

Improving the query relevance

Data

The quest for relevance improvement

The standard query

The multi match query

Phrases comes into play

Let’s throw the garbage away

Now, we boost

Performing a misspelling-proof search

Drill downs with faceting

Summary

5. The Index Distribution Architecture

Choosing the right amount of shards and replicas

Sharding and overallocation

A positive example of overallocation

Multiple shards versus multiple indices

Replicas

Routing explained

Shards and data

Let’s test routing

Indexing with routing

Routing in practice

Querying

Aliases

Multiple routing values

Altering the default shard allocation behavior

Allocation awareness

Forcing allocation awareness

Filtering

What include, exclude, and require mean

Runtime allocation updating

Index level updates

Cluster level updates

Defining total shards allowed per node

Defining total shards allowed per physical server

Inclusion

Requirement

Exclusion

Disk-based allocation

Query execution preference

Introducing the preference parameter

Summary

6. Low-level Index Control

Altering Apache Lucene scoring

Available similarity models

Setting a per-field similarity

Similarity model configuration

Choosing the default similarity model

Configuring the chosen similarity model

Configuring the TF/IDF similarity

Configuring the Okapi BM25 similarity

Configuring the DFR similarity

Configuring the IB similarity

Configuring the LM Dirichlet similarity

Configuring the LM Jelinek Mercer similarity

Choosing the right directory implementation – the store module

The store type

The simple filesystem store

The new I/O filesystem store

The MMap filesystem store

The hybrid filesystem store

The memory store

Additional properties

The default store type

The default store type for Elasticsearch 1.3.0 and higher

The default store type for Elasticsearch versions older than 1.3.0

NRT, flush, refresh, and transaction log

Updating the index and committing changes

Changing the default refresh time

The transaction log

The transaction log configuration

Near real-time GET

Segment merging under control

Choosing the right merge policy

The tiered merge policy

The log byte size merge policy

The log doc merge policy

Merge policies’ configuration

The tiered merge policy

The log byte size merge policy

The log doc merge policy

Scheduling

The concurrent merge scheduler

The serial merge scheduler

Setting the desired merge scheduler

When it is too much for I/O – throttling explained

Controlling I/O throttling

Configuration

The throttling type

Maximum throughput per second

Node throttling defaults

Performance considerations

The configuration example

Understanding Elasticsearch caching

The filter cache

Filter cache types

Node-level filter cache configuration

Index-level filter cache configuration

The field data cache

Field data or doc values

Node-level field data cache configuration

Index-level field data cache configuration

The field data cache filtering

Adding field data filtering information

Filtering by term frequency

Filtering by regex

Filtering by regex and term frequency

The filtering example

Field data formats

String-based fields

Numeric fields

Geographical-based fields

Field data loading

The shard query cache

Setting up the shard query cache

Using circuit breakers

The field data circuit breaker

The request circuit breaker

The total circuit breaker

Clearing the caches

Index, indices, and all caches clearing

Clearing specific caches

Summary

7. Elasticsearch Administration

Discovery and recovery modules

Discovery configuration

Zen discovery

Multicast Zen discovery configuration

The unicast Zen discovery configuration

Master node

Configuring master and data nodes

Configuring data-only nodes

Configuring master-only nodes

Configuring the query processing-only nodes

The master election configuration

Zen discovery fault detection and configuration

The Amazon EC2 discovery

The EC2 plugin installation

The EC2 plugin’s generic configuration

Optional EC2 discovery configuration options

The EC2 nodes scanning configuration

Other discovery implementations

The gateway and recovery configuration

The gateway recovery process

Configuration properties

Expectations on nodes

The local gateway

Low-level recovery configuration

Cluster-level recovery configuration

Index-level recovery settings

The indices recovery API

The human-friendly status API – using the Cat API

The basics

Using the Cat API

Common arguments

The examples

Getting information about the master node

Getting information about the nodes

Backing up

Saving backups in the cloud

The S3 repository

The HDFS repository

The Azure repository

Federated search

The test clusters

Creating the tribe node

Using the unicast discovery for tribes

Reading data with the tribe node

Master-level read operations

Writing data with the tribe node

Master-level write operations

Handling indices conflicts

Blocking write operations

Summary

8. Improving Performance

Using doc values to optimize your queries

The problem with field data cache

The example of doc values usage

Knowing about garbage collector

Java memory

The life cycle of Java objects and garbage collections

Dealing with garbage collection problems

Turning on logging of garbage collection work

Using JStat

Creating memory dumps

More information on the garbage collector work

Adjusting the garbage collector work in Elasticsearch

Using a standard start up script

Service wrapper

Avoid swapping on Unix-like systems

Benchmarking queries

Preparing your cluster configuration for benchmarking

Running benchmarks

Controlling currently run benchmarks

Very hot threads

Usage clarification for the Hot Threads API

The Hot Threads API response

Scaling Elasticsearch

Vertical scaling

Horizontal scaling

Automatically creating replicas

Redundancy and high availability

Cost and performance flexibility

Continuous upgrades

Multiple Elasticsearch instances on a single physical machine

Preventing the shard and its replicas from being on the same node

Designated nodes’ roles for larger clusters

Query aggregator nodes

Data nodes

Master eligible nodes

Using Elasticsearch for high load scenarios

General Elasticsearch-tuning advice

Choosing the right store

The index refresh rate

Thread pools tuning

Adjusting the merge process

Data distribution

Advice for high query rate scenarios

Filter caches and shard query caches

Think about the queries

Using routing

Parallelize your queries

Field data cache and breaking the circuit

Keeping size and shard size under control

High indexing throughput scenarios and Elasticsearch

Bulk indexing

Doc values versus indexing speed

Keep your document fields under control

The index architecture and replication

Tuning write-ahead log

Think about storage

RAM buffer for indexing

Summary

9. Developing Elasticsearch Plugins

Creating the Apache Maven project structure

Understanding the basics

The structure of the Maven Java project

The idea of POM

Running the build process

Introducing the assembly Maven plugin

Creating custom REST action

The assumptions

Implementation details

Using the REST action class

The constructor

Handling requests

Writing response

The plugin class

Informing Elasticsearch about our REST action

Time for testing

Building the REST action plugin

Installing the REST action plugin

Checking whether the REST action plugin works

Creating the custom analysis plugin

Implementation details

Implementing TokenFilter

Implementing the TokenFilter factory

Implementing the class custom analyzer

Implementing the analyzer provider

Implementing the analysis binder

Implementing the analyzer indices component

Implementing the analyzer module

Implementing the analyzer plugin

Informing Elasticsearch about our custom analyzer

Testing our custom analysis plugin

Building our custom analysis plugin

Installing the custom analysis plugin

Checking whether our analysis plugin works

Summary

Index

Mastering Elasticsearch Second Edition

Copyright © 2015 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews. Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the authors, nor Packt Publishing, and its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

First published: October 2013

Second edition: February 2015

Production reference: 1230215

Published by Packt Publishing Ltd.

Livery Place

35 Livery Street

Birmingham B3 2PB, UK.

ISBN 978-1-78355-379-2

www.packtpub.com

Credits

Authors

Rafał Kuć

Marek Rogoziński

Reviewers

Hüseyin Akdoğan

Julien Duponchelle

Marcelo Ochoa

Commissioning Editor

Akram Hussain

Acquisition Editor

Rebecca Youé

Content Development Editors

Madhuja Chaudhari

Anand Singh

Technical Editors

Saurabh Malhotra

Narsimha Pai

Copy Editors

Stuti Srivastava

Sameen Siddiqui

Project Coordinator

Akash Poojary

Proofreaders

Paul Hindle

Joanna McMahon

Indexer

Hemangini Bari

Graphics

Sheetal Aute

Valentina D’silva

Production Coordinator

Alwin Roy

Cover Work

Alwin Roy

About the Author

Rafał Kuć is a born team leader and software developer. Currently, he is working as a consultant and software engineer at Sematext Group, Inc., where he concentrates on open source technologies such as Apache Lucene, Solr, Elasticsearch, and the Hadoop stack. He has more than 13 years of experience in various branches of software, from banking software to e-commerce products. He is mainly focused on Java but is open to every tool and programming language that will make achieving his goals easier and faster. Rafał is also one of the founders of the solr.pl website, where he tries to share his knowledge and help people with their problems related to Solr and Lucene. He is also a speaker at various conferences around the world, such as Lucene Eurocon, Berlin Buzzwords, ApacheCon, Lucene Revolution, and DevOps Days.

He began his journey with Lucene in 2002, but it wasn’t love at first sight. When he came back to Lucene in late 2003, he revised his thoughts about the framework and saw the potential in search technologies. Then came Solr, and that was it. He started working with Elasticsearch in the middle of 2010. Currently, Lucene, Solr, Elasticsearch, and information retrieval are his main points of interest.

Rafał is the author of Solr 3.1 Cookbook, its update, Solr 4.0 Cookbook, and its third release, Solr Cookbook, Third Edition. He is also the author of Elasticsearch Server and its second edition, along with the first edition of Mastering Elasticsearch, all published by Packt Publishing.

Acknowledgments

With Marek, we had been thinking about writing an update to Mastering Elasticsearch, Packt Publishing. It was not a book for everyone, and the first edition didn’t put enough emphasis on the fact that we treated Mastering Elasticsearch as an update to Elasticsearch Server. The same goes for Mastering Elasticsearch Second Edition. The book you are holding in your hands was written as an extension of Elasticsearch Server Second Edition, Packt Publishing, and should be treated as a continuation of that book. Because of this approach, we could concentrate on topics such as choosing the right queries, scaling Elasticsearch, extensive scoring descriptions with examples, the internals of filtering, new aggregations, a comparison of the ways of handling relations between documents, and so on. Hopefully, after reading this book, you’ll be able to easily get all the details about Elasticsearch and the underlying Apache Lucene architecture; this will let you gain the desired knowledge more easily and quickly.

I would like to thank my family for the support and patience during all those days and evenings when I was sitting in front of a screen instead of being with them.

I would also like to thank all the people I’m working with at Sematext, especially Otis, who took his time and convinced me that Sematext is the right company for me.

Finally, I would like to thank all the people involved in creating, developing, and maintaining Elasticsearch and Lucene projects for their work and passion. Without them, this book wouldn’t have been written and open source search wouldn’t have been the same as it is today.

Once again, thank you.


figures, an accuracy no analog computer can match. The significant point is that the analog can never hope to compete with digital types for accuracy.

A third perhaps not as important advantage the digital machine has is its compactness. We are speaking now of later computers, and not the pioneer electromechanical giants, of course. The transistor and other small semiconductor devices supplanted the larger tubes, and magnetic cores took the place of cruder storage components. Now even more exotic devices are quietly ousting these, as magnetic films and cryotrons begin to be used in computers.

BRAINIAC, another do-it-yourself computer. This digital machine is here being programmed to solve a logic problem involving a will. (Photo: Science Materials Center)

This drastic shrinking of size by thinking small on the part of computer designers increases the capacity of the digital computer at no sacrifice in accuracy or reliability. The analog, unfortunately, cannot make use of many of these solid-state devices. Again, the bugaboo of accuracy is the reason; let’s look further into the problem.

The most accurate and reliable analog computers are mechanical in nature. We can cut gears and turn shafts and wheels to great accuracy and operate them in controlled temperature and humidity. Paradoxically, this is because mechanical components are nearer to digital presentations than are electrical switches, magnets, and electronic components. A gear can have a finite number of teeth; when we deal with electrons flowing through a wire we leave the discrete and enter the continuous world. A tiny change in voltage or current, or magnetic flux, compounded several hundred times in a complex computer, can change the final result appreciably if the errors are cumulative, that is, if they are allowed to pile up. This is what happens in the analog computer using electrical and electronic components instead of precisely machined cams and gears.

The digital device, on the other hand, is not so penalized. Though it uses electronic switches, these can be so set that even an appreciable variation in current or voltage or resistance will not affect the proper operation of the switch. We can design a transistor switch, for example, to close when the current applied exceeds a certain threshold. We do not have to concern ourselves if this excess current is large or small; the switch will be on, no more and no less. Or it will be completely off. Just as there is no such thing as being a little bit dead, there is no such thing as a partly off digital switch. So our digital computer can make use of the more advanced electronic components to become more complex, or smaller, or both. The analog must sacrifice its already marginal accuracy if it uses more electronics. The argument here is simplified, of course; there are electronic analog machines in operation. However, the problem of the “drift” of electronic devices is inherent and a limiting factor on the performance of the analog.
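
To make the thresholding idea concrete, here is a minimal simulation sketch in Python (the threshold, drift magnitude, and stage count are invented for illustration; no real circuit behaves exactly this way). An analog value accumulates a little error at every stage, while a digital stage re-decides on or off each time and so discards the drift.

```python
import random

THRESHOLD = 0.5  # illustrative switching threshold, an assumed value

def analog_chain(signal, stages, drift=0.01):
    # Each stage adds a small random error; the errors pile up.
    for _ in range(stages):
        signal += random.uniform(-drift, drift)
    return signal

def digital_chain(signal, stages, drift=0.01):
    # Each stage suffers the same drift, but the switch snaps the
    # value back to fully on (1.0) or fully off (0.0) every time.
    for _ in range(stages):
        signal += random.uniform(-drift, drift)
        signal = 1.0 if signal > THRESHOLD else 0.0
    return signal

print(analog_chain(1.0, 300))   # wanders away from 1.0
print(digital_chain(1.0, 300))  # still exactly 1.0
```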

These, then, are some of the advantages the digital computer has over its analog relative. It is more flexible in general—though there are some digital machines that are more specialized than some analog types; it is more accurate and apparently will remain so; and it is more amenable to miniaturization and further complexity because its designer can use less than perfect parts and produce a perfect result.

In the disadvantage department the digital machine’s only drawback seems to be its childish way of solving problems. About all it knows how to do is to add 1 and 1 and come up with 2. To multiply, it performs repetitive additions, and solving a difficult equation becomes a fantastically complex problem when compared with the instantaneous solution possible in the analog machine. The digital computer redeems itself by performing its multitudinous additions at fabulous speeds.
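
Roughly what that repetitive addition looks like, as a sketch in modern Python rather than in any historical machine’s order code:

```python
def multiply(a, b):
    # Reduce multiplication to the machine's one basic trick:
    # add a to a running total, b times over.
    total = 0
    for _ in range(b):
        total += a
    return total

print(multiply(7, 6))  # 42, reached by six additions
```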

Because it must be fed digits in its input, the digital machine is not economically feasible in many applications that will probably be reserved for the analog. A digital clock or thermometer for household use would be an interesting gimmick, but hardly worth the extra trouble and expense necessary to produce. Even here, though, first glances may be wrong and in some cases it may prove worth while to convert analog inputs to digital with the reverse conversion at the output end. One example of this is the airborne digital computer which has taken over many jobs earlier done by analog devices.

There is another reason for the digital machine’s ubiquitousness, a reason it does not seem proper to list as merely a relative advantage over the analog. We have described the analog computer used as an aid to psychological testing procedures, and its ability to handle a multiplicity of problems at once. This perhaps tends to obscure the fact that the digital machine by its very on-off, yes-no nature is ideally suited to the solving of problems in logic. If it achieves superiority in mathematics in spite of its seemingly moronic handling of numbers, it succeeds in logic because of this very feature.

While it might seem more appropriate that music be composed by analogy, or that a chess-playing machine would likely be an analog computer, we find the digital machine in these roles. The reason may be explained by our own brains, composed of billions of neurons, each capable only of being on or off. While many philosophers build a strong case for the yes-no-maybe approach with its large areas of gray, the discipline of formal logic admits to only two states, those that can so conveniently be represented in the digital computer’s flip-flop or magnetic cores.

The digital computer, then, is not merely a counting machine, but a decision-maker as well. It can decide whether something should be added, subtracted, or ignored. Its logical manipulations can by clever circuitry be extended from AND to OR, NOT, and NOR. It thus can solve not only arithmetic, but also the problems of logic concerning foxes, goats, and cabbages, or cannibals and missionaries that give us human beings so much trouble when we encounter them.
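
As a sketch of that claim (the representation below is our own assumption, in modern Python, not anything from a period machine): the fox, goat, and cabbage puzzle reduces to a search over yes-no states, each item simply on one bank or the other, which is exactly the kind of bookkeeping a digital machine excels at.

```python
from collections import deque

ITEMS = {"fox", "goat", "cabbage"}
UNSAFE = [{"fox", "goat"}, {"goat", "cabbage"}]  # pairs that cannot be left alone

def safe(bank):
    return not any(pair <= bank for pair in UNSAFE)

def solve():
    # A state is (set of items on the left bank, bank the farmer is on).
    start = (frozenset(ITEMS), "left")
    goal = (frozenset(), "right")
    queue, seen = deque([(start, [])]), {start}
    while queue:
        (left, farmer), path = queue.popleft()
        if (left, farmer) == goal:
            return path
        here = left if farmer == "left" else ITEMS - left
        for cargo in list(here) + [None]:  # ferry one item, or cross empty-handed
            new_left = set(left)
            if cargo and farmer == "left":
                new_left.discard(cargo)
            elif cargo:
                new_left.add(cargo)
            other = "right" if farmer == "left" else "left"
            unattended = new_left if other == "right" else ITEMS - new_left
            state = (frozenset(new_left), other)
            if safe(unattended) and state not in seen:
                seen.add(state)
                queue.append((state, path + [(cargo or "nothing", other)]))

for cargo, bank in solve():
    print("take", cargo, "to the", bank, "bank")
```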

The fact that the digital computer is just such a rigorously logical and unbending machine poses problems for it in certain of its dealings with its human masters. Language ideally should be logical in its structure. In general it probably is, but man is so perverse that he has warped and twisted his communications to the point that a computer sticking strictly to book logic will hit snags almost as soon as it starts to translate human talk into other human talk, or into a logical machine command or answer.

For instance, we have many words with multiple meanings which give rise to confusion unless we are schooled in subtleties. There are stories, some of them apocryphal but nonetheless pointing up the problem, of terms like “water goat” cropping up in an English-to-Russian translation. Investigation proved that the more meaningful term would have been “hydraulic ram.” In another interesting experiment, the expression, “the spirit is willing but the flesh is weak” was machine translated into Russian, and then that result in turn re-translated back into English, much in the manner of the party game of “Telephone” in which an original message is whispered from one person to another and finally back to the originator. In this instance, the final version was, “The vodka is strong, but the meat is rotten.”

It is a fine distinction here as to who is wrong, the computer or man and his irrational languages. Chances are that in the long run true logic will prevail, and instead of us confusing the computer it will manage instead to organize our grammar into the more efficient tool it should be. With proper programming, the computer may even be able to retain sufficient humor and nuance to make talk interesting and colorful as well as utilitarian.

We can see that the digital machine with its flexibility, accuracy, and powerful logical capability is the fair-haired one of the computer family. Starting with “a” for abacus, digital computer applications run through practically the entire alphabet. Its take-over in the banking field was practically overnight; it excels as a tool for design and engineering, including the design and engineering of other computers. Aviation relies heavily on digital computers already, from the sale of tickets to the control of air traffic.

Gaming theory is important not only to the Saturday night poker player and the Las Vegas casino operator, but to military men and industrialists as well. Manufacturing plants rely more and more on digital techniques for controls. Language translation, mentioned lightly above, is a prime need at least until we all begin speaking Esperanto, Io, or Computerese. Taxation, always with us, may at least be more smoothly handled when the computers take over. Insurance, the arrangement of music, spaceflight guidance, and education are random fields already dependent more or less on the digital computer. We will not take the time here to go thoroughly into all the jobs for which the computer has applied for work and been hired; that will be taken up in later chapters. But from even a quick glance the scope of the digital machine already should be obvious. This is why it is usually a safe assumption that the word computer today refers to the digital type.

Hybrid Computers

We have talked of the analog and the digital; there remains a further classification that should be covered. It is the result of a marriage of our two basic types, a result naturally hybrid. The analog-digital computer is third in order of importance, but important nonetheless.

Nerve center of Philadelphia Electric Company’s digital computer-directed automatic economic dispatch system is this console, from which power directors operate and supervise loading of generating units at minimum incremental cost. (Photo: Minneapolis-Honeywell)

Necessity, as always, mothered the invention of the analog-digital machine. We have talked of the relative merits of the two types; the analog is much faster on a complex problem such as solving simultaneous equations. The digital machine is far more accurate. As an example, the Psychological Matrix Rotator described earlier could solve its twelve equations practically instantaneously. A digital machine might take seconds—a terribly long time by computer standards. If we want an accurate high-speed differential analyzer, we must combine an analog with a digital computer.

Because the two are hardly of the same species, this breeding is not an easy thing. But by careful study, designers effected the desired mating. The hybrid is not actually a new type of computer, but two different types tied together and made compatible by suitable converters.

The composite consists of a high-speed general-purpose digital computer, an electronic analog computer, an analog-to-digital converter, a digital-to-analog converter, and a suitable control for these two converters. The converters are called “transducers” and have the ability to change the continuous analog signal into discrete pulses of energy, or vice versa.
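
In miniature, the two conversions do something like the following (a Python sketch under assumed values; the 8-bit resolution and 10-volt range are invented, and real transducers are of course electronic devices, not software):

```python
def adc(voltage, full_scale=10.0, bits=8):
    # Analog-to-digital: clamp the continuous voltage, then pick
    # the nearest of 2**bits discrete levels.
    levels = (2 ** bits) - 1
    clamped = max(0.0, min(voltage, full_scale))
    return round(clamped / full_scale * levels)

def dac(code, full_scale=10.0, bits=8):
    # Digital-to-analog: map the discrete code back onto the
    # continuous voltage range.
    levels = (2 ** bits) - 1
    return code / levels * full_scale

code = adc(3.14159)
print(code, dac(code))  # 80 and roughly 3.137: faithful to within one step
```

The small gap between 3.1416 and 3.137 is the quantization error; adding bits shrinks it, which is one reason digital precision is a matter of adding digits rather than fighting drift.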

Sometimes called digital differential analyzers, the hybrid computers feature the ease of programming of the analog, plus its speed, and the accuracy and much broader range of the digital machine. Bendix among others produced such machines several years ago. The National Bureau of Standards recently began development of what it calls an analog-digital differential analyzer which it expects to be from ten to a hundred times more accurate than earlier hybrid computers. The NBS analyzer will be useful in missile and aircraft design work.

Despite its apparent usefulness as a compromise and happy medium between the two types, the hybrid would seem to have as limited a future as any hybrid does. Pure digital techniques may be developed that will be more efficient than the stopgap combination, and the analog-digital will fall by the wayside along the computer trail.
