The correct bibliographic citation for this manual is as follows: Dorfman, Paul and Don Henderson. 2018. Data Management Solutions Using SAS® Hash Table Operations: A Business Intelligence Case Study. Cary, NC: SAS Institute Inc.
Data Management Solutions Using SAS® Hash Table Operations: A Business Intelligence Case Study
Copyright © 2018, SAS Institute Inc., Cary, NC, USA
ISBN 978-1-62960-143-4 (Hard copy)
ISBN 978-1-63526-059-5 (EPUB)
ISBN 978-1-63526-060-1 (MOBI)
ISBN 978-1-63526-061-8 (PDF)
All Rights Reserved. Produced in the United States of America.
For a hard copy book: No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, or otherwise, without the prior written permission of the publisher, SAS Institute Inc.
For a web download or e-book: Your use of this publication shall be governed by the terms established by the vendor at the time you acquire this publication.
The scanning, uploading, and distribution of this book via the Internet or any other means without the permission of the publisher is illegal and punishable by law. Please purchase only authorized electronic editions and do not participate in or encourage electronic piracy of copyrighted materials. Your support of others’ rights is appreciated.
U.S. Government License Rights; Restricted Rights: The Software and its documentation is commercial computer software developed at private expense and is provided with RESTRICTED RIGHTS to the United States Government. Use, duplication, or disclosure of the Software by the United States Government is subject to the license terms of this Agreement pursuant to, as applicable, FAR 12.212, DFAR 227.7202-1(a), DFAR 227.7202-3(a), and DFAR 227.7202-4, and, to the extent required under U.S. federal law, the minimum restricted rights as set out in FAR 52.227-19 (DEC 2007). If FAR 52.227-19 is applicable, this provision serves as notice under clause (c) thereof and no other notice is required to be affixed to the Software or documentation. The Government’s rights in Software and documentation shall be only those set forth in this Agreement.
SAS Institute Inc., SAS Campus Drive, Cary, NC 27513-2414
July 2018
SAS® and all other SAS Institute Inc. product or service names are registered trademarks or trademarks of SAS Institute Inc. in the USA and other countries. ® indicates USA registration.
Other brand and product names are trademarks of their respective companies.
SAS software may be provided with certain third-party software, including but not limited to open-source software, which is licensed under its applicable third-party software license agreement. For license information about third-party software distributed with SAS software, refer to http://support.sas.com/thirdpartylicenses
About This Book
What Does This Book Cover?
This book is about the How, the What, and the Why of using the SAS DATA step hash object. These three topics are interconnected and quite often SAS users focus on just a small part of what SAS software can do. This is especially true for the SAS hash object and hash tables. Far too many users immediately think of the use of hash tables as just a very powerful table lookup facility (a What), and that notion then influences their understanding of the How and the Why.
The authors have found that the SAS hash object and hash tables provide for a very robust data management and analysis facility, and we collaborated on this book to provide the insights we have discovered:
● More Whats: e.g., data management, data aggregation, ...
● More Whys: e.g., efficiency, flexibility, parameterization, ...
● More Hows: e.g., memory management, key management, ...
The focus of this book is to provide the readers with a more complete understanding and appreciation of the SAS hash object. As such, we have included a number of SAS programs that illustrate this broad range of functionality. Many of the programs use features of the SAS DATA step language that some readers may not be familiar with. This book does not attempt to describe those techniques in detail; instead, the authors will expand upon traditional SAS DATA step programming techniques that are particularly relevant to the SAS hash object in a series of blog entries. You can access the blog entries from the author page at support.sas.com/authors. Select either “Paul Dorfman” or “Don Henderson.” Then look for the cover thumbnail of this book, and select “Blog Entries.”
The book is organized around a Proof of Concept (PoC) project whose goal is to convince a group of business and IT users that the SAS hash object can be used to address many of their requirements for data management and reporting.
Is This Book for You?
This book is intended for any SAS programmer who has an interest in learning more about what can be done with the SAS hash object and specifically about how to use the hash object to assist in the analysis of data to address business intelligence problems. The hash object is more than just a technique for table lookup; the point of this book is to broaden that perspective.
How to Read This Book
The book is organized into four parts. There is no requirement to read this book in order.
Part 1 focuses on the How of the hash object and provides a deep dive into the details of how it works. It provides a high-level overview of the hash object followed by a discussion of both table-level and item-level operations. It concludes with a more advanced discussion of item-level enumeration operations. Part 1 is probably best used by first reading Chapter 1 to get a better understanding of the kinds of tasks the hash object can be used for. The remaining Part 1 chapters can be reviewed later.
The focus of Part 2 is What the hash object should be used for, along with a discussion of Why the hash object is a good fit for many business intelligence questions. It starts with a discussion of the sample data used in the book and of the business users' interest in answering business intelligence and analytical questions. It then provides an overview of common business intelligence and analytical data tasks. Part 2 also discusses the use of the SAS hash object to support the creation and updating of data warehouse or data mart tables. Following that, the discussion moves to using the hash object to support a range of data aggregation capabilities via a number of sample programs that you, the reader, can adapt to your business problems. Readers with some experience with DATA step programming might want to focus on Part 2 after reviewing the overview chapter in Part 1.
Part 3 introduces how some more advanced features of the hash object can facilitate data-driven techniques in order to offer more flexibility and dynamic programming. It also addresses techniques for memory management and data partitioning, focusing on all three of the topics of How, What, and Why. Part 3 should be reviewed in detail once the reader feels comfortable with the examples presented in Part 2.
Two short case studies are included in Part 4. The first illustrates using the hash object to research alternative metrics. The second one provides an example of using the hash object to support answering ad hoc questions. The sample programs in Part 4 leverage the example programs from Part 2. Reviewing the examples in Part 4 can be done in any order by referring back to the techniques used.
More details about each part, including suggestions for what to focus on, can be found in the short introductions to each of the 4 parts.
You can access a glossary of terms from the author page at support.sas.com/authors. Select either “Paul Dorfman” or “Don Henderson.” Then look for the cover thumbnail of this book, and select “Glossary of Terms.”
What Are the Prerequisites for This Book?
The only prerequisite for this book is familiarity with DATA step programming. Some knowledge of the macro language is desirable, but is not required.
What Should You Know about the Examples?
This book includes examples for you to follow to gain hands-on experience with the SAS hash object.
Software Used to Develop the Book's Content
All of the examples in this book apply to SAS 9.3 and SAS 9.4. Where differences exist, we have done our best to reference them. Many of the examples also work in earlier releases of SAS, but they have not been tested against those earlier releases.
Example Code and Data
The sample data for this book is for a fictitious game called Bizarro Ball. Bizarro Ball is conceptually similar to baseball, with a few wrinkles.
We have been engaged by the business users who are responsible for marketing Bizarro Ball about their interest in reporting on their game data. They currently have no mechanism to capture their data and so we have agreed to write programs to generate data that can be used in a Proof of Concept. The programs, most of which use the hash object, generate our sample data and are discussed in a series of blog entries. You can access the blog entries from the author page at support.sas.com/authors. Select either “Paul Dorfman” or “Don Henderson.” Then look for the cover thumbnail of this book, and select “Blog Entries.”
Selected example programs do make use of DATA step programming features which, while known by many, are not widely used. The authors plan to write blog entries (as mentioned above) about some of those techniques, and readers are encouraged to suggest programming techniques used in the book for which they would like to see a more detailed discussion.
You can access the example code and data from the author page at support.sas.com/authors. Select either “Paul Dorfman” or “Don Henderson.” Then look for the cover thumbnail of this book, and select “Example Code and Data.”
An Overview of Bizarro Ball
The key features of Bizarro Ball that we agreed to implement in our programs to generate the sample data include:
● Creating data for 32 teams, split between 2 leagues with 16 teams in each league.
● Each team plays the other 15 teams in their league.
● Each team plays each other team a total of 12 times; 6 as the home team and 6 as the away team. In other words, they play a balanced schedule.
● Games are played in a series consisting of 3 games each.
● Each week has 2 series for each team. Games are played on Tuesday, Wednesday, and Thursday; the second series is played on Friday, Saturday, and Sunday. Monday is an agreed-upon off day for each team. This off day is used when it is necessary to reschedule a game that was canceled (e.g., due to weather). It was agreed that, to simplify the programs that generate our sample data, we would assume that no such makeup games are needed.
● Since each team plays each other team in their league 12 times, this results in a regular season of 180 games. Since each team plays 6 games a week, the Bizarro Ball regular season is 30 weeks long.
● Another simplifying assumption that was agreed to was that we could generate a schedule without regard to constraints related to travel time or rules about consecutive home or away series.
● Each game is 9 innings long, and games can end in a tie.
● If the home team (which always bats in the bottom half of an inning) is ahead going into the bottom half of the 9th inning, it still plays that half-inning. The reason is that the tie-breakers for determining the league champion include criteria that could adversely impact a good team if it is often ahead at the start of the bottom half of the 9th inning.
● Each team has 25 players and has complete control over the distribution of the positions a player can play.
● Each team would set its lineup for each game using whatever criteria it felt appropriate. We informed the business users that implementing a rules-based approach to do this would not add value to the PoC and would take significant extra time. So it was agreed that we could randomize the generation of the lineup for each game.
There are a number of key differences between Bizarro Ball and baseball. Therefore, in the interests of time and of focusing on how the hash object can be used to address business problems, we agreed to a number of simplifying assumptions with our business users. Those assumptions are discussed in the blog posts mentioned above.
SAS University Edition
This book is compatible with SAS University Edition. If you are using SAS University Edition, then begin here: https://support.sas.com/ue-data .
The only requirement is to make sure to extract the ZIP file of sample data and programs to a location accessible to SAS University Edition. Example code and data can be found on the author pages: support.sas.com/dorfman and support.sas.com/henderson.
We Want to Hear from You
SAS Press books are written by SAS Users for SAS Users. We welcome your participation in their development and your feedback on SAS Press books that you are using. Please visit sas.com/books to do the following:
● Sign up to review a book
● Recommend a topic
● Request information on how to become a SAS Press author
● Provide feedback on a book
Do you have questions about a SAS Press book that you are reading? Contact the author through saspress@sas.com or https://support.sas.com/author_feedback.
SAS has many resources to help you find answers and expand your knowledge. If you need additional help, see our list of resources: sas.com/books.
Chapter 1: Hash Object Essentials
The goal of this chapter is to discuss the organization and data structure of the SAS hash object, in particular:
● Hash object and table structure and components.
● Hash table properties.
● Hash table lookup organization.
● Hash operations and tools classification.
● Basics of the behind-the-scenes hash table structure and search algorithm.
On the whole, the chapter should provide a conceptual background related to the hash object and hash tables and serve as a stepping stone to understanding hash table operations.
Since we have two distinct sets of users in this Proof of Concept, this chapter will likely be of much more interest to the IT users, as they are more likely than the business users to understand the details and nuances discussed here. We did suggest that it would be worthwhile for the business users to skim this chapter, as it should give them a good overview of the power and flexibility of the SAS hash object and hash table.
1.2 Hash Object in a Nutshell
The first production release of the hash object appeared in SAS 9.1. Perhaps the original motive for its development was to offer the DATA step programmer a table look-up facility either much faster or more convenient - or both - than the numerous other methods already available in the SAS arsenal. That goal was certainly achieved right off the bat. What is more, the capabilities built into the newfangled hash object were much more scalable and functionally flexible than those of a mere lookup table. In fact, it became the first in-memory data structure accessible from the DATA step that could emerge, disappear, grow, shrink, and get updated dynamically at run time. The scalability of the hash object has made it possible to vastly expand its original functionality in subsequent versions and releases, and its functional flexibility has enabled SAS programmers to invent and implement new uses for it, perhaps even unforeseen by its developers.
So, what is the hash object? In a nutshell, it is a dynamic data structure controlled during execution time from the DATA step (or the DS2 procedure) environment. It consists of the following principal elements:
● A hash table for data storage and retrieval specifically organized to perform table operations based on searching the table quickly and efficiently via its key.
● An underlying, behind-the-scenes hashing algorithm which, in tandem with the specific table organization, facilitates the search.
● A set of tools to control the very existence of the table - that is, to create and delete it.
● A set of tools to activate the table operations and thus enable information exchange between the DATA step environment and the table.
● Optional: a hash iterator object instance linked to the hash table with the purpose of accessing the table entries sequentially.
The terms "hash object" and "hash table" are most likely derived from the hashing algorithm underlying their functionality. Let us now discuss the hash table and its specific features and usage prerequisites.
1.3 Hash Table
From the standpoint of a user, the hash object’s table is a table with rows and columns - just like any other table, such as a SAS data file. Picture the image of a SAS data set, and you have pretty much pictured what a hash table may look like. For example, let us suppose that it contains a small subset of data from data set Bizarro.Player_candidates:
Table 1.1 Hash Object Table Layout
Hash Table Variables
  Key Portion:  Team_SK, Player_ID
  Data Portion: First_name, Last_name, Position_code

Team_SK   Player_ID   First_name   Last_name   Position_code
115       23391       Ryan         Coleman     C
158       38259       Ronald       Wright      CF
189       24603       Alan         Torres      CIF
189       59690       Gregory      Roberts     COF
193       11628       Henry        Rodriguez   MIF
259       30598       Eugene       Thompson    SP
Reminds us of an indexed SAS data set, does it not? Indeed, it looks like a relational table with rows and columns. Furthermore, we have a composite key (Team_SK, Player_ID) and the rest of the variables associated with the key, also termed the satellite data. The analogy between an indexed SAS data set and a hash table is actually pretty deep, especially in terms of the common table operations both can perform. However, there are a number of significant distinctions dictated by the intrinsic hash table properties. Let us examine them and make notes of the specific hash table nomenclature along the way.
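Before examining those distinctions, here is a brief preview sketch of our own showing how a table like Table 1.1 might be created, loaded, and searched (it assumes the Bizarro sample data library from the book's example data has been assigned):

   data _null_;
      /* Parameter type matching: the compiler creates the PDV host variables */
      /* from the data set descriptor; this SET statement never executes.     */
      if 0 then set bizarro.player_candidates
         (keep = Team_SK Player_ID First_name Last_name Position_code);
      /* Create the hash table and load it from the data set */
      declare hash players (dataset: "bizarro.player_candidates");
      players.defineKey  ("Team_SK", "Player_ID");                      /* key portion  */
      players.defineData ("First_name", "Last_name", "Position_code");  /* data portion */
      players.defineDone ();
      /* Search on the full compound key and retrieve the satellite data */
      Team_SK = 193;  Player_ID = 11628;
      if players.find() = 0 then put First_name= Last_name= Position_code=;
      stop;
   run;

If the sample data contains the row shown in Table 1.1, the step writes Henry Rodriguez's data portion values to the SAS log.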
1.4 Hash Table Properties
To make the hash table’s properties stand out more clearly, it may be useful to compare them with the properties of the indexed SAS data set from a number of perspectives.
1.4.1 Residence and Volatility
● The hash table resides completely in memory. This is one of the factors that makes its operations very fast. On the flip side, it also limits the total amount of data it can contain, which consists of the actual data and some underlying overhead needed to make the hash table operations work.
● The hash table is temporary. Even if the table is not deleted explicitly, it exists only for the duration of the DATA step. Therefore, the hash table cannot persist across SAS program steps. However, its content can be saved in a SAS data set (or its logical equivalent, such as an RDBMS table) before the DATA step completes execution and then reloaded into a hash table in DATA steps that follow.
1.4.2 Hash Variables Role Enforcement
● The hash variables are specifically defined as belonging to two distinct domains: the key portion and the data portion. Their combination in a row forms what is termed a hash table entry.
● Both the key and the data portions are strictly mandatory. That is, at least one hash variable must be defined for the key portion and at least one for the data portion. (Note that this is different from an indexed SAS table used for pure look-up where no data portion is necessary.)
● The two portions communicate with the DATA step program data vector (PDV) differently. Namely, only the values of the data portion variables can be used to update their PDV host variables.
● Likewise, only the data portion content can be “dumped” into a SAS data file.
● In the same vein, in the opposite data traffic direction, only the data portion hash variables can be updated from the DATA step PDV variables or other expressions.
● However, if a hash variable is defined in the key portion, a hash variable with the same name can also be defined in the data portion. Note that because the data portion variable can be updated and the key portion variable with the same name cannot, their values can be different within one and the same hash item.
1.4.3 Key Variables
● Together, the key portion variables form the hash table key used to access the table.
● The table key is simple if the key portion contains one variable, or compound if there is more than one. For example, in the sample table above, we have a two-term compound key consisting of variables (Team_SK, Player_ID).
● A compound key is processed as a whole, i.e., as if its components were concatenated.
● Hence, unlike an indexed SAS table, the hash table can be searched based on the entire key only, rather than also on a number of its consecutive leading components.
1.4.4 Program Data Vector (PDV) Host Variables
Defining the hash table with at least one key and one data variable is not the only requirement to make it operable. In addition, in order to communicate with the DATA step, the hash variables must have corresponding variables predefined in the PDV before the table can become usable. In other words, at the time when the hash object tools are invoked to define hash variables, variables with the same exact names must already exist in the PDV. Let us make several salient points about them:
● In this book, from now on, we call the PDV variables corresponding to the variables in the hash table the PDV host variables, because they are the PDV locations from which the hash data variables get their values and into which they are retrieved.
● When a hash variable is defined in a hash table, it is from the existing host variable with the same name that it inherits all attributes, i.e., the data type, length, format, informat, and label.
● Therefore, if, as mentioned above, the key portion and the data portion each contain a hash variable with the same name, it has the same exact attributes in both portions, inherited from one, and only one, PDV host variable with that name.
● The job of creating the PDV host variables, as with any other PDV variables, belongs to the DATA step compiler. It is complete when the entire DATA step has been scanned by the compiler, i.e., before any hash object action - invoked at run time - can occur.
● Providing the compiler with the ability to create the PDV host variables is sometimes called parameter type matching. We will see later that it can be done in a variety of ways, different from the standpoint of automation, robustness, and error-proneness.
In order to use the hash object properly, you must understand the concept of the PDV host variables and their interaction with the hash variables. This is as important to understand as the rules of Bizarro Ball if you want to play the game.
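To illustrate, here is a minimal sketch of two common ways to give the compiler what it needs. The variable lengths chosen here (such as $3 for Position_code) are illustrative assumptions on our part, not taken from the book's data definitions:

   data _null_;
      /* (1) A LENGTH statement makes the compiler create the host variables.  */
      /*     The lengths used here are illustrative assumptions.               */
      length Team_SK Player_ID 8 Position_code $ 3;
      /* (2) A never-executed SET against the source data set does the same    */
      /*     and also carries over formats, informats, and labels.             */
      if 0 then set bizarro.player_candidates (keep = First_name Last_name);
      call missing (of _all_);
      declare hash h ();
      h.defineKey  ("Team_SK", "Player_ID");
      /* Team_SK also appears in the data portion: same name and attributes,   */
      /* inherited from the single PDV host variable Team_SK, but its data     */
      /* portion value can later be updated while its key value cannot.        */
      h.defineData ("Team_SK", "First_name", "Last_name", "Position_code");
      h.defineDone ();
      stop;
   run;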
1.5 Hash Table Lookup Organization
● The table is internally organized to facilitate the hash search algorithm.
● Reciprocally, the algorithm is designed to make use of this internal structure.
● This tandem of the table structure and the algorithm is sufficient and necessary to facilitate an extremely fast mechanism of direct-addressing table look-up based on the table key.
● Hence, there is no need for the overhead of a separate index structure, such as the index file in the case of an indexed SAS table. (In fact, as we will see later, the hash table itself can be used as a very efficient memory-resident search index.)
For the purposes of this book, it is rather unimportant how the underlying hash table structure and the hashing algorithm work - by the same token that a car driver can operate the vehicle perfectly well while knowing next to nothing about what is going on under the hood. As far as this subtopic is concerned, hash object users need only be aware that its key-based operations work fast - in fact, faster than or on a par with other lookup techniques available in SAS. In particular:
● The hash object performs its key-based operations in constant time. A more technical way of saying it is that the run time for the key-based operations scales as O(1).
● The meaning of O(1) notation is simple: The speed of hash search does not depend on the number of items in the table. If N is the number of unique keys in the table, the time needed to either find a key in it or discover that it is not there does not depend on N. For example, the same hash table is searched equally fast for, say, N=1,000 and N=1,000,000 (a rough timing sketch illustrating this follows this list).
● It still does not hurt to know how such a feat is achieved behind the scenes. For the benefit of those who agree, a brief overview is given in the last, optional, section of this chapter, “Peek Under the Hood”.
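For readers who would like to see this behavior for themselves, here is a rough timing sketch of our own. The table sizes and the one million probes are arbitrary illustration values, the elapsed time is simple wall-clock time from DATETIME(), and the absolute numbers will differ from machine to machine; the point is only that the per-probe time stays essentially flat as N grows:

   data _null_;
      declare hash h;                     /* declare the object variable once  */
      do n = 1e3, 1e6;
         h = _new_ hash ();               /* a fresh instance for each size    */
         h.defineKey ("k");
         h.defineData ("k");
         h.defineDone ();
         do k = 1 to n;                   /* load N distinct keys              */
            rc = h.add ();
         end;
         start = datetime ();
         do probe = 1 to 1e6;             /* one million random key probes     */
            k = ceil (ranuni (1) * n);
            rc = h.find ();
         end;
         elapsed = datetime () - start;
         put n= comma12. elapsed= 8.3;
         h.delete ();                     /* free the memory before resizing   */
      end;
   run;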
1.5.1 Hash Table Versus Indexed SAS Data File
To look at the hash table properties still more systematically, it may be instructive to compile a table of the differences between a hash table and an indexed SAS file:
Table 1.2 Hash Table vs Indexed SAS File
Attribute                          Hash Table          Indexed SAS File
Table residence medium             RAM                 Disk
Time when defined/structured       Run time            Compile time
Key portion:
  Required or not                  Yes                 Yes
  PDV host variables               Yes                 Yes
  Partial key lookup               No                  Yes
Data portion:
  Required or not                  Yes                 No
  PDV host variables               Yes                 No
Key search:
  Structure                        AVL trees           Index binary tree
  Algorithm                        Hash + AVL trees    Binary search
  Index / index file               No                  Yes
  Scaling time                     O(1) = constant     O(log2(N))
1.6 Table Operations and Hash Object Tools
To recap, the hash object is a table in memory, internally organized around the hashing algorithm, together with tools to store and retrieve data efficiently. In order for any data table to be useful, the programming language used to access it must have tools to facilitate a set of fundamental standard operations. In turn, the operations can be used to solve programming or data processing tasks. Let us take a brief look at the hierarchy comprising the tasks, operations, and tools.
1.6.1 Tasks, Operations, Environment, and Tools Hierarchy
Whenever a data processing task is to be accomplished, we do not start by thinking of tools needed to achieve the result. Rather, we think about accomplishing the task in terms of the operations we use as the building blocks of the solution. Suppose that we have a file and need to replace the value of variable VAR with 1 in every record where VAR=0. At a high level, the line of thought is likely to be:
1. Read.
2. Search for records where VAR=0.
3. Update the value of VAR with 1.
4. Write.
Thus, we first think of the operations (read, search, update, write) to be performed. Once the plan of operations has been settled on, we would then search for an environment and tools capable of performing the operations. For example, we can decide whether to use the DATA step or SQL environment. Each environment has its own set of tools, and so, depending on the choice of environment, we could then decide which tools (such as statements, clauses, etc.) could be used to perform the operations. The logical hierarchy of solving the problem is sequenced as Tasks -> Operations -> Environment -> Tools.
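For instance, a minimal sketch of the same task in the two environments might look as follows (the data set names HAVE and WANT and the variable VAR are hypothetical):

   /* DATA step environment */
   data want;
      set have;                  /* Read                                     */
      if var = 0 then var = 1;   /* Search + Update                          */
   run;                          /* Write: implicit OUTPUT of every record   */

   /* SQL environment: the same operations, different tools (update in place) */
   proc sql;
      update have                /* Read + Write                             */
         set var = 1             /* Update                                   */
         where var = 0;          /* Search                                   */
   quit;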
In this book, our focus is on the SAS hash object environment. Therefore, the book is structured as follows:
● Classify and discuss the hash table operations.
● Exemplify the hash object tools needed to perform each operation.
● Demonstrate how various data processing tasks can be accomplished using the hash table operations.
1.6.2 General Data Table Operations
Let us consider a general data table - not necessarily a hash table, at this point. In order for the table to serve as a programmable data storage and retrieval medium, the software must facilitate a number of standard table operations generally known as CRUD - an abbreviation for Create, Retrieve, Update, Delete. (Since the last three operations cannot be performed without the Search operation, its availability is tacitly assumed, even though it is not listed.) For instance, an indexed SAS data set is a typical case of a data table on which all these operations are supported via the DATA step, SQL, and other procedures. A SAS array is another example of a data table (albeit in this case, the SAS tools supporting its operations are different). And of course, all these operations are available for the tables of any commercial database.
In this respect, a SAS hash table is no different: The hash object facilitates all the basic operations on it and, in addition, supports a number of useful operations dictated by its specific nature. The operations can be subdivided into two levels: one related to the table as a whole, and the other related to the individual table items. Below, the operations are classified based on these two levels (a brief sketch mapping some of them to hash object tools follows Table 1.3):
Table 1.3 Hash Table Operations Classification
Level: Table
  Create      Create a hash object instance and define its structure.
  Delete      Delete a hash object instance.
  Clear       Remove all items from the table without deleting the table.
  Output      Copy all or some data portion variable values to a file.
  Describe    Extract a table attribute.

Level: Item
  Search            Determine if a given key is present in the table.
  Insert            Insert an item with a given key and associated data.
  Delete All        Delete all items (the group of items) with a given key.
  Retrieve          Extract the data portion values from the item with a given key.
  Update All        Replace the data portion values of all items with a given key.
  Order             Permute same-key item groups into an order based on their keys.
  Enumerate by Key (Keynumerate)   Retrieve the data from the item group with a given key sequentially.
  Selective Delete  Delete specific items from the item group with a given key.
  Selective Update  Update specific items in the item group with a given key.
  Enumerate All     Retrieve the data from all or some table items sequentially, without using a key. Requires the hash iterator object.
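As a preview of the chapters that follow, the short sketch below maps a few of these operations to the hash object tools that implement them (the key K and data D are hypothetical illustration variables):

   data _null_;
      length k 8 d $ 10;
      call missing (k, d);
      declare hash h ();
      h.defineKey ("k");
      h.defineData ("d");
      h.defineDone ();
      k = 1; d = "alpha";  rc = h.add ();      /* Insert                           */
      rc = h.check ();                         /* Search: rc=0 means key found     */
      rc = h.find ();                          /* Retrieve: D is copied to the PDV */
      d = "beta";          rc = h.replace ();  /* Update All for key K=1           */
      rc = h.remove ();                        /* Delete All for key K=1           */
      n = h.num_items;                         /* Describe: a table attribute      */
      put n=;                                  /* 0 items remain                   */
   run;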
1.6.3 Hash Object Tools Classification
The hash table operations are implemented by the hash object tools. These tools, however, have their own classification, syntactic form, and nomenclature, different from other SAS tools with similar functions. Let us consider these three distinct aspects.
The hash object tools fall into a number of distinct categories listed below:
Table 1.4 Hash Object Tools Classification

Hash Tool      Purpose
Statement      Declare (and, optionally, create an instance of) a hash object.
Operator       Create an instance of a hash object.
Attribute      Retrieve hash table metadata.
Method         Manipulate a hash object and its data.
Argument tag   Specify or modify actions performed by operators and methods.
Generally speaking, there exists no one-to-one correspondence between a particular hash tool and a particular standard operation. Some operations, such as search and retrieval, can be performed by more than one tool. Yet some others, such as enumeration, require using a combination of two or more tools.
1.6.4 Hash Object Syntax
Unlike the SAS tools predating its advent, most hash tools are invoked using the object-dot syntax. Even though it may appear unusual at first to those who have not used it, it is not complicated and is easy to learn.
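For example, the short sketch below shows one instance of each tool category from Table 1.4 and the object-dot syntax used to invoke the attribute and the methods (the key K and data D are, again, hypothetical illustration variables):

   data _null_;
      length k 8 d $ 10;
      call missing (k, d);
      declare hash h;                     /* Statement: declares the object variable   */
      h = _new_ hash (ordered: "a");      /* Operator: _NEW_ creates an instance;      */
                                          /* Argument tag: ORDERED modifies the action */
      h.defineKey ("k");                  /* Methods, invoked with object-dot syntax   */
      h.defineData ("d");
      h.defineDone ();
      k = 1; d = "alpha"; rc = h.add ();
      n = h.num_items;                    /* Attribute: retrieves table metadata       */
      put n=;
   run;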