
CS 101

AN INTRODUCTION TO COMPUTATIONAL THINKING

WHY SHOULD I READ THIS BOOK?

There’s no question that computers play an integral role in almost every aspect of society. Everyday tasks, such as deciding where to eat and figuring out how to get there, are more efficient with the use of computers. Problems that were once deemed impossible, ranging from sequencing the human genome to remotely diagnosing rare diseases, are now feasible thanks to advances in computational speed and intelligence. Almost everyone uses some sort of computer on a regular basis, but very few actually understand what it is they’re using. We are made to believe that only those with a specialized degree or years of complex math under their belt can truly understand how computers work. Even those in the process of securing these prerequisites blindly memorize seemingly archaic concepts, unsure of their practical applicability and dazed by their complexity.

You do not need a degree of any type to understand computer science, nor do you need any mathematical skills greater than multiplying and dividing numbers. Not only are you smart enough to understand the fundamentals of computing, you have the ability to invent the basic structure of computer science. That is the goal of this book: not to aimlessly hurl definition after definition in hopes of you understanding the content, but for you to be able to say, “Hey, I could have made that too.” You don’t have to follow along on your computer or do coding exercises to truly understand this material. This book isn’t designed for that type of learning. In fact, some small details might be left out of the code so you can focus on what really matters. Don’t worry about what language the code is written in or what software to download so you can run it on your computer. Those elements are important, but they are not needed to understand the fundamentals. All you have to do is read.


Any feedback on this content is always appreciated and can be sent to sarboroy99@gmail.com.

Special thanks to Dawn Labs for making it ridiculously easy to create beautiful looking code.

ONES AND ZEROES

You finally decide that you will no longer be baffled by the seemingly magical ways computers solve complex problems. You want to start this journey of becoming technologically proficient, but where to begin? How do computers even work? Just like every other question, you begin typing half of it in before Google completes your thought for you. You click on the first result and quickly learn that computers operate on only two numbers: one and zero.

At first this notion seems almost impossible. How can everything from simple text to high definition video to virtual reality all be represented by just two numbers? Even if this concept were intuitive enough to grasp, why specifically two numbers? Surely anything that can be represented by two digits can be represented more efficiently with three or four. After all, our very own number system has ten numbers (0–9) and removing eight of them would make our lives anything but easier. Answering these two basic questions will not only give us an insight into how computers work, it will also establish a foundation on which every other concept of computer science can be built. Let’s begin with the why.

Why do computers only use two numbers?

Let’s suppose that we wanted to design our very own computer. Where would we even start? After all, what exactly is a computer? A traditional computer might be seen as a device that displays information to a screen, but at its core, a computer is just a machine that manipulates numbers. Every color shown on a monitor or a laptop screen is a combination of red, green and blue shown at different levels of intensity. Therefore, if we had a way to represent every shade of those three colors with numbers, we could then reproduce any image or any video (sequences of images) using just numbers. This concept can also be extended to apply to letters and words. If we assigned every letter in
the English alphabet with a number, then we could theoretically show everything from simple words to the most complex novels, once again using just numbers. This is important because if we can imagine a device solely focused on representing numbers and if we had a system in place that could convert those numbers into colors and text if needed, we could then by definition create a computer.
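For example, one common convention stores each red, green and blue intensity as a number from 0 to 255, so a single orange pixel might be written as the triple (255, 165, 0). In the same spirit, a mapping that assigned A = 1, B = 2 and so on up to Z = 26 could spell out entire novels as nothing but sequences of numbers.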

We’ve immensely simplified the complex idea of a computer into a machine solely focused on manipulating numbers. But how would such a device even work? We know that every computer operates on electricity, so we would have to design a system that takes electricity as an input and spits out a number as an output. Chances are we don’t really understand the core principles behind electricity. The only thing we might know about electric current is that it varies in strength. In fact, this is all we need to know. Maybe if we had a device with nine different switches, with each switch representing a number from 1-9, we could represent any number. The stronger the electric current fed into the device, the more switches that would turn on. After all nine switches are turned on, increasing the amount of electricity will no longer create any change to the output. Similarly, if no current is running, none of the switches would be on and it could represent the number 0. If we had four of these little devices, we could represent numbers as high as 9999, as each device would at maximum represent the number 9. If we put enough of these machines together we could then represent any number, therefore any color and any word. Indeed, this is a solution to our problem of representing numbers using electricity, but is this the best possible solution?

The first problem with our solution is sheer efficiency. Let’s say it takes N amount of electricity to turn on the first switch. If the amount of electric current needed to activate these switches were direct multiples of each other, it would take 9N quantities of electricity to switch on the last switch. Therefore, it would take nine times more electricity to produce the number 9 in our machine than the number 1. No problem, we’ll just make the amount of electricity needed to flip the first switch so small that nine times that will still be insignificant. However, our device would also have to measure electricity precisely enough to distinguish between these nine levels in order to produce a number.

Unfortunately, combining thousands of these devices together would produce enough interference to make it nearly impossible to precisely measure such values of electricity. Instead of our device showing a 9, it might show an 8 or a 7 due to the interference from the other devices. Think of this problem like understanding what a group of people is saying when every member is talking. It might be easy to understand everyone when there are only four or five members to a group, but if a thousand or ten thousand people were talking it would certainly be impractical to understand what every individual is saying. Our solution to representing numbers isn’t at all wrong, it just isn’t realistic enough to produce at scale. On top of this, the numbers it would present could be off from the actual values we want. Can we think of a better way?

Maybe instead of nine different switches we could have eight. After all, it would be much easier measuring only eight different levels of electricity instead of nine. But our number system only has nine numbers (excluding 0) so how would we represent numbers using only eight switches? We would have to design an entirely different method of counting involving only eight numbers (and 0). For the moment, let us assume that redesigning our system of counting was possible. Why stop at eight switches then? If we had seven switches, surely we could measure electricity even more accurately than eight. We can extend this concept further, from seven switches to six switches to five, all the way down to one switch. With one switch, any level of electricity, no matter how big or small, would turn the switch on. If there is no electricity, the switch would be off. We could represent the off state using zero and the on state using one. This would eliminate our electricity measuring problem! However, we now have to figure out how to represent every number using just 1s and 0s. Maybe we would have a tally-like system where the number 99 could be represented with ninety-nine devices switched on. It would take a lot of devices, but once again it could work. If we could represent numbers with 1s and 0s more efficiently than tallies, we would be able to create a computer without having to worry about the accuracy of the numbers themselves. By answering one question, we’ve created another. Nevertheless, we did answer it and it turns out, we answered it correctly.

Our little device that we imagined has a name. It’s called a transistor, and it’s the building block of every single functioning computer. When electrical current flows through a transistor its output is 1, otherwise its output is 0, just as we pictured!

Gordon Moore predicted in 1965 that the number of transistors in a device would double roughly every two years, mainly due to rapidly decreasing transistor size. You could then fit more of them inside the same amount of space. For decades his observation has been so accurate that it is now referred to as Moore’s Law. Transistors are so tiny today that modern smartphones are made up of billions of them. The abstract computer we designed doesn’t just work, it works exactly the same way modern computers do! However, we only solved half our problem. That leaves us with the next question.

How can we efficiently represent numbers using only two digits?

We want to design an entirely different number system, one that operates on two digits instead of ten. One way to approach this problem would be to simply start from scratch, but since we are already familiar with our system of ten digits why not start with more than what we need and work our way backwards? The first thing we have to do is unravel how our current number system operates. We’ve gotten so good at recognizing numbers that we don’t consciously recognize the core principles behind how they work. To truly understand how we count numbers let’s go back to our elementary school days when we first learned about them.

It doesn’t take us much time to recognize that the number 123 is one hundred twenty-three, but those three digits by themselves don’t give us the full picture. What we’re really doing is multiplication in our head for each one of those digits. In the end we’re adding each of the results together. A more accurate picture would look like this.
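    123 = (1 x 100) + (2 x 10) + (3 x 1) = 100 + 20 + 3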

When we look at these three digits, we’re automatically assigning a placeholder to each one of them. The first placeholder is multiplied by 1, hence the name “one’s place.” The second placeholder is multiplied by 10, and so on and so forth, until there are no more digits left. We didn’t choose these placeholders randomly; they are consistent with other qualities of our number system.

The number system we’re familiar with revolves around the number ten. In fact, another name for this system is base ten. It has ten digits (0-9) and more importantly, every placeholder is an incremental power of ten. The most accurate picture of one hundred twenty-three would look like the following.
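    123 = (1 x 10^2) + (2 x 10^1) + (3 x 10^0)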

The first number is always multiplied by ten raised to the 0th power. The second is multiplied by ten raised to the 1st power and this pattern continues until the final number is reached. In the end everything is added up. In this case, one hundred twenty-three comes out. Now that we understand the mechanisms behind our current number system, can we use the same principles to create another?

At the end of the day we want to design a system that relies solely on two digits, or in other words, a base two system. What if we simply replaced everywhere there was a 10 in the base ten system, with a 2? Instead of every placeholder being an incremental power of ten, let’s make each place an incremental power of two. Since base ten has ten numbers (0-9) and we only need two, let’s just take the first two numbers: zero and one. Our base two system with only two digits would look something like this.
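    1 1 1  ->  (1 x 2^2) + (1 x 2^1) + (1 x 2^0) = (1 x 4) + (1 x 2) + (1 x 1)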

Since the highest digit in our new number system is 1, this is the highest number that can be represented with three digits. Remember we are only using the digits 0 and 1. In base ten, this number would be one hundred and eleven, but in base two, this number is (1 x 4) + (1 x 2) + (1 x 1), or 7. Clearly it takes more digits to represent the same number in base two than in base ten. Nevertheless, this system is vastly more efficient than our previous tally approach. If we used the tally scheme to represent one hundred, it would take one hundred transistors, each needing to be turned on. In our base two system, it would only take seven transistors, with only three of them needing to be on. The other four would not even require any electrical current, as transistors that are off represent 0 by default.
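To see why, here is one hundred written out in base two:

    100 = (1 x 64) + (1 x 32) + (0 x 16) + (0 x 8) + (1 x 4) + (0 x 2) + (0 x 1), or 1100100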

Binary isn’t just an alternative form of representing our familiar structure of ten digits, it is a fully functioning system of numbers. You can add and subtract in binary just as you can in base ten.

When we add two digits in our familiar number system and the sum is ten or more, the resulting digit stores the value of the sum minus ten. The extra ten becomes a one that is carried over and added to the next pair of digits.

In the binary system, a similar process is performed. When the sum of two digits is two or more, the resulting digit stores the value of the sum minus two, and a one is carried over and added to the next pair of digits.
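For example, adding the binary representations of 1 and 3, column by column from the right:

       01
     + 11
     ----
      100

In the rightmost column, 1 + 1 is two, so we write 0 and carry a 1. In the next column, 0 + 1 plus the carried 1 is again two, so we write 0 and carry a 1 once more.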

In the addition above, the binary representations of 1 and 3 are added together, and the final result is the binary representation of 4. Unraveling how our base ten number system works not only gives us a way to represent numbers in other bases, it also lets us manipulate those values and arrive at the same results.

It turns out that the number system we’ve been taught our entire lives is completely arbitrary. We didn’t choose to use ten digits to count with because ten represents our world better than other numbers; it happened by chance. In that sense, binary has something our current number system never did: a physical manifestation.

VARIABLES AND HOW TO STORE THEM

In the last chapter, we established a bridge between the physical and digital worlds. If every piece of software ultimately boils down to manipulating numbers and if we can physically represent those numbers using transistors, it makes sense that we can create machines that are capable of doing the diverse array of tasks we use computers for. However, there is still one major problem we have not addressed.

If transistors and computers are only capable of working with two numbers, how can we convert from the complex tasks we want completed, down to this basic system of two digits? Sure, we have binary, which serves as a middleman for our base ten system, but does that mean we have to type everything we want done using only two numbers? Do modern software engineers use only two keys on their keyboards their entire work day? Fortunately for the world, and our sanity, this isn’t the case.

We would need two things in order to not stare at 1s and 0s all day. The first would be some sort of syntax that makes it easy to express what we want the computer to do. The second would be a program that converts that syntax down to the binary numbers computers can understand. You might have heard our first requirement under a different name, a programming language. You might even know the names of some popular programming languages used today. These languages at their core are no different from what the English language tries to accomplish. In the same way we use predefined English words to express our thoughts and emotions, we use source code in programming
languages to solve problems with a computer. Eventually this code has to be converted down to the 1s and 0s transistors can understand. Depending on the language used, these programs are called either compilers or interpreters. They go through every character of code in a specific order and create a structured version of the tasks that need to be accomplished. Each of those tasks is eventually converted into the binary instructions the computer can physically execute. Since every character needs to be changed into a binary format and the ordering of characters affects the structure, simply misplacing a letter or misspelling a command will result in different binary code. The computer might then do something you didn’t intend for it to do. These types of misinterpretations are called bugs and they can range from basic spelling mistakes to complex logical errors. We don’t need to understand exactly how compilers or interpreters work, but we do need to realize that they are there behind the scenes, converting source code down to machine code so that the physical components of a computer can understand what needs to be done.

One of the most powerful aspects of computer science is abstracting away tedious layers and building on top of them. Typing everything in binary would be too cumbersome, so early programmers created compilers and programming languages to make it easier to read and write commands to the computer. Now that we understand that this process exists, we don’t have to type every instruction in binary form. Instead, we can just build on top of the resources that already accomplish this task. This gives us the privilege of focusing all of our attention on solving problems using modern programming languages and not having to worry about converting our instructions into a machine-readable format. We are finally at the level where we can write actual code!

We can think of every program as mapping an input to an output. Navigation systems take an address as an input and return directions as an output. Web browsers display a webpage from a given URL. Calculators happen to use numbers as both inputs and outputs. Whatever programming language we decide to use needs to define ways of declaring different inputs. What types of inputs need to be defined?

We’ve already established the central role numbers play in the process of computing, so we definitely need a way to represent them in code. We could type all our numbers in binary, but our education system has already spent years drilling the concept of base ten numbers into our head. It will be much easier to write out numbers the same way we do in real life and allow the interpreter or compiler to convert them down to binary.

The numbers we use can be either integers or decimal values, so we need to represent both cases. Representing numbers is only half the issue, as we also need to manipulate them. Performing operations on numbers would become exceptionally burdensome if we had to keep track of every single number we wanted to manipulate. A much cleaner way would be to store numbers with unique variable names. Instead of telling the computer to multiply 193 with 4673, we could instead store 193 under the variable x and 4673 under the variable y.

Then we could tell the computer to multiply x by y. Every time we wanted to use any of these values again, we could intuitively refer to the names we associated with them. Later on, if we decided to change a number, simply assigning a different value to the variable name would replace that number everywhere it is used. Representing numbers using variables would require three things: a number, a variable name and the type of number it is. Assigning the integer 193 to the variable x could be defined as such.
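In a C-style syntax (the exact keywords vary slightly from language to language), the declaration might look like this:

    int x = 193;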

Notice the order of this statement. We start with the variable type followed by the name of the variable. After the equals sign comes the value itself. Even though we have only defined one type of variable (int) so far, we can keep this syntax consistent for every other type.

What about numbers with decimals? It wouldn’t make sense to declare a decimal number using the int keyword, since only whole numbers can be integers. We would need another type of variable. Most programming languages call a number with a decimal a double. Knowing this, we can now declare precise numbers such as 155.6 using the same convention.
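Something like the following, where d is just an arbitrary variable name:

    double d = 155.6;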

Using doubles and ints we can now use real numbers as inputs for our program, but what about words? Once again, we would need new types of variables to represent these different types of inputs. Words can consist of either a single character or multiple characters. Single characters are called chars.

How would we create a char called test representing the letter x? If we used the same format as before it would look something like this.
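    char test = x;   // note: no quotes around the x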

At first everything seems fine, but what if we declared another char with the name of x representing the letter y, before our previous character?
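    char x = 'y';
    char test = x;   // is test the letter x, or whatever the variable x holds?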

Since we are setting test equal to x and another variable is also named x, would test hold the letter x or the value of the variable x? The answer seems ambiguous. In fact, this is so ambiguous that programming languages require declaring words and letters with quotes around the values. You always declare chars with single quotes. Words larger than one character are declared with double quotes. This would be the correct declaration of the previous two chars.
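    char x = 'y';
    char test = 'x';   // the quotes make it clear that 'x' means the letter x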

Chars don’t have to be just letters. They can be punctuation marks, single digit numbers, or even an empty space. However, chars by definition can only have a single character. Declaring a char with more than one character is not allowed and will not be accepted by the compiler or interpreter.

Words with multiple characters would look like a string of chars. Most programming languages call them strings and as mentioned earlier, they are declared using double quotes.
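Sketched here in C++ syntax, where the type is written std::string (the variable names are just examples, and other languages spell the type differently):

    std::string word = "hello";
    std::string letter = "x";   // a single character can still be a string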

Even though strings usually have multiple letters, it’s perfectly fine to declare a string with a single character. Notice that they still require double quotes.

Remember that strings and chars can use numbers and other characters. They are not limited to just letters.
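A few more examples in the same sketch syntax:

    char digit = '9';                  // a single-digit number as a char
    char space = ' ';                  // an empty space as a char
    std::string mixed = "Room 101!";   // digits and punctuation inside a string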

Once we create a variable of any kind we can modify it later in our program using just its name.
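For example:

    int x = 193;
    x = 200;   // from this point on, x holds 200 wherever it is used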

Why use chars at all if strings can be single characters too?

At first it seems pointless for chars to even exist when strings can replace all their functionality. Would it make any difference if we declared a single character as a string instead of a char? To answer this question, we need to address how exactly the computer is storing these variables. In the previous chapter we defined a computer as a machine capable of manipulating numbers. This definition holds true because we can map practically anything to a number. We use binary to map numbers in base ten, down to machine code the computer can execute, but how do we map letters and other characters?

The simplest way would be to assign every key on the keyboard a unique number and then convert that number down to binary. A major problem would arise if every computer manufacturer used a different set of numbers to map the same key. To resolve this issue a standard was created to make sure everyone agrees on the same character-to-number mapping. This standard is called the American Standard Code for Information Interchange (ASCII). A partially complete ASCII table is below.
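A few entries from that table give the idea:

    Character      ASCII value
    (space)        32
    '0'            48
    '9'            57
    'A'            65
    'Z'            90
    'a'            97
    'z'            122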

What we’re really doing when declaring a char is storing the corresponding ASCII number in binary. The standard ASCII table has 128 characters numbered from 0-127. Notice that every number on the keyboard also has an appropriate ASCII value. This means that creating an int variable with the value of 9 stores different binary digits than a char variable with the value of 9. This relationship between a character and the number it is mapped to is referred to as a character’s encoding.
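For example (the variable names are arbitrary):

    int  a = 9;      // stored as the binary for 9:   1001
    char b = '9';    // stored as the binary for 57:  111001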

Do other countries have their own ASCII?

The ASCII encoding maps English characters to numerical values that could then be translated into binary. Other countries have their own set of encoding values for characters. Most of these encoding schemes are not compatible with each other, as many characters in different languages map to the same numerical value. In the early days of computing, this was not much of an issue. Information that needed to be sent from one country to another could be delivered via mail or through fax. When the global internet standardized and computers became consumer oriented, this all changed. Suddenly, information could be instantly sent from one part of the globe to another. Different encoding schemes meant that this information could be interpreted in radically different ways depending on its origin. A universal standard had to be created for any hopes of meaningful international communication. During the late 20th century, in the city of Mountain View, California, the Unicode Standard was created to tackle this very issue. With Unicode, characters from almost every single language were mapped to a unique number. Not only did Unicode standardize character mapping, it was backwards compatible with other major encoding sets such as ASCII. The Unicode Consortium is the organization that maintains the Unicode Standard and it continues to add new types of characters and symbols.

Where do computers store binary?

We can now convert both numbers and letters to binary, but where does the final binary representation go? When we declare variables, the computer allocates a certain number of binary digits in a special part called memory. We call each binary digit a bit. Each bit of memory the computer reserves has a unique address associated with it. The computer can then access the value of any memory location when its address is specified. This is how the computer creates variables.

Whenever a variable is created, the computer, under the hood, reserves a set number of bits for that variable and remembers their addresses. When we want to retrieve the value of the variable, the computer can go to the address it stored and retrieve the corresponding binary data. If a variable is created at a memory address of 543, the computer does not have to sequentially check every block of memory until it finds the appropriate value. All it has to do is go to location 543 and retrieve the data. In other words, the computer can access any random memory location at any time. You might have seen this concept of random-access memory under its abbreviation, RAM.
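For instance, the number 127 written out in binary:

    127 = (1 x 64) + (1 x 32) + (1 x 16) + (1 x 8) + (1 x 4) + (1 x 2) + (1 x 1), or 1111111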

As shown above, the number 127 in base ten uses 7 bits total. Since 127 is the highest ASCII value, every char could potentially use 7 bits. Therefore, the computer should allocate at least 7 bits per char, as any char at any time could be changed to the ASCII value of 127. Even though the standard ASCII set can fit in 7 bits, early commercial computers used 8 bits of memory for every character. This additional bit was reserved for error checking. While the binary system minimized electricity mismeasurements, errors could still occasionally be made inside early computers. There were a lot of times where what should have been a 0 appeared as a 1, and vice versa. To avoid this common issue, an additional bit was used for error detection. This extra bit was called a parity bit and it allowed multiple ways to ensure correct binary data. If the number of 1’s in the original data was an even number, the parity bit could be set to 1, otherwise it could be set to 0. Under this approach, if a single bit held the wrong value, the number of 1’s in the data would switch from either even to odd or from odd to even. If this did not match the parity bit’s value, an error was made somewhere along the process.
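For example, the char ‘A’ has the ASCII value 65, or 1000001 in binary, which contains two 1’s, an even number. Under the scheme just described, the parity bit would be set to 1, and the full 8 bits stored (writing the parity bit first, purely for illustration) would be 11000001. If any single bit of the data later flipped, the number of 1’s in the data would become odd, no longer matching the parity bit, and the error would be detected.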
