Foreword

Polygon is a tribute to the scholarship and dedication of the faculty at Miami Dade College in interdisciplinary areas. Miami Dade College's esteemed faculty have contributed their scholarly work to this edition (2014) and to previous issues of Polygon. The interdisciplinary articles and approaches to teaching and learning are a true tribute to the scholarly pursuits of the faculty at Miami Dade College. The sharing of information, data, and teaching/learning practices strengthens our commitment to education every day. Thank you for continuing to enrich our lives.

Mattie

Mattie Roig-Watnik, Ed.D.
President, Hialeah Campus
Miami Dade College
1780 W. 49th St., Rm 301
Hialeah, FL 33012

CONTENTS

The Four 4’s Problem (Enhancing Creativity Skills)

1-3 Dr. Jack Alexander

POKER is The Essence of PROBABILITY (Calculating Winning Hands)

4-6 Dr. Jack Alexander

The Street Game of CRAPS

7-10 Dr. Jack Alexander

The Impact of Globalization and Environmentalism on Performance for Quantitative Reasoning, Communication and Critical – Creative Thinking at MDC-Hialeah Campus

11-30 Dr. Jaime Bestard

The Evolution of Creationism in Historic and Legal Context

31-51 Dr. Melissa Lammey

Generalized Pearson System of Probability Distributions

52-60 Dr. M. Shakil


The Four 4’s Problem (Enhancing Creativity Skills)
By Dr. Jack Alexander
Professor, Department of Mathematics
Miami Dade College, North Campus

ABSTRACT: Many years ago, I came across a small book on mathematical intrigues in the Bowling Green State University Library. While I cannot, at this point, remember the exact title or the author, I do remember, very well, the Four 4’s Problem. This problem challenges the reader to construct the natural numbers from 1 to 100 using only four 4’s. You are allowed to use any of the four operations of addition, subtraction, multiplication and division. It is also permitted to use powers, square roots, factorials, the greatest integer function, and decimal representations.

INTRODUCTION: Some natural numbers are quite easy to construct. The first ten are listed below. An associated challenge is to see how many different ways a specific number can be constructed. For example, 1 can be expressed as: (4 + 4)/(4 + 4), 4/4 + (4 – 4), or (4 + 4)^(4 – 4).

My List of the First 10 Natural Numbers

(4 + 4)^(4 – 4) = 1
4/4 + 4/4 = 2
(4 + 4 + 4)/4 = 3
4 + (4 – 4)/4 = 4
(4 x 4 + 4)/4 = 5
4 + (4 + 4)/4 = 6
4 + 4 – 4/4 = 7
4 + 4 + (4 – 4) = 8
4 + 4 + 4/4 = 9
(44 – 4)/4 = 10
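As a quick sanity check, the ten constructions can be evaluated mechanically. In the sketch below (the class name FourFours is mine, not from the article) the construction for 1 is read as the power (4 + 4)^(4 – 4), consistent with the problem's rule allowing powers.

```java
// Sanity check for the ten four-4's constructions listed above.
// Each expression uses exactly four 4's, per the rules of the problem.
public class FourFours {
    public static double[] constructions() {
        return new double[] {
            Math.pow(4 + 4, 4 - 4),   // (4 + 4)^(4 - 4) = 1
            4.0/4 + 4.0/4,            // = 2
            (4.0 + 4 + 4)/4,          // = 3
            4 + (4.0 - 4)/4,          // = 4
            (4.0*4 + 4)/4,            // = 5
            4 + (4.0 + 4)/4,          // = 6
            4 + 4 - 4.0/4,            // = 7
            4 + 4 + (4 - 4),          // = 8
            4 + 4 + 4.0/4,            // = 9
            (44.0 - 4)/4              // = 10
        };
    }
    public static void main(String[] args) {
        double[] v = constructions();
        for (int i = 0; i < v.length; i++)
            System.out.println((i + 1) + " = " + v[i]);
    }
}
```

Each entry evaluates to its one-based index, so the whole list can be verified in one loop.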

While I was working at Spelman College and Distinguished Professor Dr. J. Ernest Wilkins was at Clark Atlanta University, we collaborated on the Four 4’s Problem. Dr. Wilkins quickly came up with constructions for 1 through 100. He also claimed that he could construct any number using only four 4’s. He never gave a proof. I have endeavored to develop a proof and to this date have been unsuccessful. I challenge the


reader to try to provide either a proof or a strategy for developing natural numbers beyond 10. To assist in this endeavor I have listed constructions from 11 to 30.

List of constructions for 11 through 30

4/.4 + 4/4 = 11
(44 + 4)/4 = 12
44/4 + √4 = 13
4/.4 + √4 + √4 = 14
44/4 + 4 = 15
4 + 4 + 4 + 4 = 16
4 x 4 + 4/4 = 17
4 x 4 + 4 – √4 = 18
4! – (4 + 4/4) = 19
4/.4 + 4/.4 = 20
4! – (4 – 4/4) = 21
44/(4/√4) = 22
4! – 4^(4 – 4) = 23
44 – 4! + 4 = 24
4! + 4^(4 – 4) = 25
4! + (4 + 4)/4 = 26
4! + 4 – 4/4 = 27
4! + 4 – (4 – 4) = 28
4! + (4 + 4/4) = 29
4! + (4 + 4/√4) = 30

Many of the constructions from 31 to 41 require the use of the Greatest Integer Function (INT). For example, one construction for 31 is: INT(4!/√√.4 + 4/4), and a construction for 41 is: INT(4!/√.4 + √4 + √4). Constructions for 42 through 50 are relatively easy. For example, a construction for 42 is: 44 – 4/√4, and a construction for 50 is: 44 + 4 + √4. Constructions for 51 through 100 are both hard and easy. For example, a construction for 53 is: INT(4!/√.4 + 4 x 4), while 100 can easily be expressed as: (4/.4) x (4/.4). Again, my challenge to the reader is twofold. Can you develop a construction for all natural numbers from 1 to 100, and can you come up with a proof that any natural number can be constructed, as Dr. Wilkins suggested? I have already given you a good head start. Dr. Wilkins gave an elegant construction for 1000, i.e., 4 x 4^4 – 4!.
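These constructions can be checked numerically. In the sketch below (the class name FourFoursInt and its method names are mine) the greatest integer function INT is encoded as Math.floor, and the construction for 1000 is read as 4 x 4^4 – 4!.

```java
// Numerical check of the constructions discussed above.
// INT is the greatest integer function, i.e. Math.floor for positive values.
public class FourFoursInt {
    public static long c31()  { return (long) Math.floor(24 / Math.sqrt(Math.sqrt(0.4)) + 4.0/4); } // INT(4!/sqrt(sqrt(.4)) + 4/4)
    public static long c42()  { return (long) (44 - 4 / Math.sqrt(4)); }                            // 44 - 4/sqrt(4)
    public static long c53()  { return (long) Math.floor(24 / Math.sqrt(0.4) + 4 * 4); }            // INT(4!/sqrt(.4) + 4 x 4)
    public static long c100() { return Math.round((4 / 0.4) * (4 / 0.4)); }                         // (4/.4)(4/.4); round absorbs float error
    public static long c1000(){ return (long) (4 * Math.pow(4, 4) - 24); }                          // 4 x 4^4 - 4!

    public static void main(String[] args) {
        System.out.println(c31() + " " + c42() + " " + c53() + " " + c100() + " " + c1000());
    }
}
```

Note that (4/.4)(4/.4) is rounded rather than truncated, because 0.4 is not exactly representable in binary floating point.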


CONCLUSION: In my view, the Four 4’s Problem provides an excellent way to hone and review the workings of mathematical operations and functions, while also enhancing the development of creativity skills. You will experience great excitement and joy when you come up with a difficult construction. For example, I was overjoyed when I developed 113. My construction is: INT(44/(√√√√√.4)/.4). Can you come up with a different construction for 113?


POKER is The Essence of PROBABILITY (Calculating Winning Hands)
By
Dr. Jack Alexander
Department of Mathematics
Miami Dade College, North Campus

ABSTRACT: What contributes to the difficulty of correctly determining “probabilities” is the assessment of the appropriate Sample Space associated with an event. The game of Poker provides an excellent vehicle for practice on how to calculate both simple and complex Winning Hands once the appropriate Sample Space has been determined. It turns out to be easy to determine the best winning hand (a Royal Flush), but quite complicated to calculate the simplest winning hand (Two-of-a-Kind). Calculations appropriate for these, as well as all of the in-between winning hands, are detailed in this paper.

KEY WORDS: Sample Space, Probability, Poker Winning Hands, Combinations

AMS Subject Classification 2010: 62-07

INTRODUCTION: As indicated in the Abstract, the most fundamental aspect of probability is the concept of sample space. A modern deck of cards consists of thirteen different cards (Ace, King, Queen, Jack, 10, 9, 8, 7, 6, 5, 4, 3, 2) in each of four suits: Hearts, Diamonds, Clubs, and Spades. All together, this makes fifty-two cards. This is the size of the Sample Space.

If you are dealt 2 cards, the probability that you will get 2 Kings is given by (4/52) x (3/51). If you are dealt 3 cards, the probability that you will get 3 Kings is (4/52) x (3/51) x (2/50). And, if you are dealt 4 cards, the probability that you will get 4 Kings is (4/52) x (3/51) x (2/50) x (1/49).

A regular Poker hand is five cards. Therefore, the total number of possible poker hands is given by the combination of 52 things taken 5 at a time. The mathematical way of designating this is 52C5. Using the formula nCr = n!/[(n – r)!r!], this yields 52!/[(52 – 5)!5!] = 52!/(47!5!) = (52 x 51 x 50 x 49 x 48)/(5 x 4 x 3 x 2 x 1) = 2,598,960. That is to say, there are 2,598,960 possible poker hands. This is the Sample Space for all of the possible poker hands.

An interesting exercise is to calculate the winning hands in poker. The simplest of these would be Two-of-a-Kind. That is: Two Aces, Two Kings, Two Queens, Two Jacks, Two 10s, and so forth. This is not all that easy to calculate since there is a lot to consider.

The first thing to consider is that there are 13 different cards and you must get 2 of the same kind. In other words, you must calculate 13 x 4C2. Next, you must get 3 other cards that are not the same as those of the pair. This can be calculated as 12C3, but there are 4 of these in each case. This is 4 to the 3rd power. The complete calculation is, therefore: 13 x 4C2 x 12C3 x 4^3 = 1,098,240. Hence, the probability of being dealt Two-of-a-Kind is 1,098,240/2,598,960 = .422569.
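The combination count used above is easy to compute without forming large factorials; a minimal sketch (the class name Combos is mine) multiplies one factor at a time:

```java
// Computes nCr = n!/((n-r)! r!) without overflow from full factorials,
// using C(n,r) = product over i = 1..r of (n - r + i)/i.
public class Combos {
    public static long choose(int n, int r) {
        if (r < 0 || r > n) return 0;
        long result = 1;
        for (int i = 1; i <= r; i++) {
            // multiply before dividing: result is C(n-r+i, i) at each step,
            // so the division is always exact
            result = result * (n - r + i) / i;
        }
        return result;
    }
    public static void main(String[] args) {
        System.out.println("52C5 = " + choose(52, 5));  // 2598960 possible poker hands
    }
}
```

Calling choose(52, 5) reproduces the 2,598,960-hand Sample Space derived in the text.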

The calculation of Three-of-a-Kind fits a similar pattern. In this case, we have 13 x 4C3 x 12C2 x 4^2 = 54,912. Hence the probability of Three-of-a-Kind is 54,912/2,598,960 = .02112845. To calculate Four-of-a-Kind, we can use basically the same pattern: 13 x 4C4 x 12C1 x 4^1 = 13 x 1 x 12 x 4 = 624. Hence, the probability of Four-of-a-Kind = 624/2,598,960 = .00024.

Another winning hand is Two Pair. This can be calculated by 13C2 x 4C2 x 4C2 x 11C1 x 4 = 78 x 6 x 6 x 11 x 4 = 123,552. Therefore, the probability of Two Pair = 123,552/2,598,960 = .0475394. These calculations indicate that Three-of-a-Kind would beat Two Pair.

The highest level winning hand is a Royal Flush. This is a hand of Ace, King, Queen, Jack, 10, all in the same suit. Clearly there are only 4 of these. The probability of getting a hand like this is 4/2,598,960 = .000001539.

Another hard hand to get is a Straight Flush. This is 5 cards with consecutive numbers, all in the same suit. Excluding the Royal Flush, there are 36 of these. Consider the list below. Note that there are 9 ways we can have 5 cards in consecutive order. Since there are 4 suits, this yields 36 possibilities.

1   Ace  2    3     4     5
2   2    3    4     5     6
3   3    4    5     6     7
4   4    5    6     7     8
5   5    6    7     8     9
6   6    7    8     9     10
7   7    8    9     10    Jack
8   8    9    10    Jack  Queen
9   9    10   Jack  Queen King

The probability is 36/2,598,960 = .00001385.

A Flush is 5 cards of the same suit, excluding a Royal Flush and a Straight Flush. This is easy to determine. It is simply 4 x 13C5 – 40 = 4 x 1287 – 40 = 5108. And, the probability is 5108/2,598,960 = .0019654. While this is a low probability, it is not as hard to get as a Full House.

To calculate a Full House, we must consider obtaining 3 of one kind of card and 2 of another kind. For example, we could get 3 Kings and 2 Jacks, or 2 Kings and 3 Jacks. The set-up is 2 x 13C2 x 4C3 x 4C2 = 2 x 78 x 4 x 6 = 3744. The probability is 3744/2,598,960 = .00144.

A Straight is 5 cards in sequence, with Aces allowed to be either 1 or 13 and with the cards allowed to be of the same suit or from different suits. Typically a Straight excludes Straight Flushes and Royal Flushes. The number of such hands is 10 x (4C1)^5 – 40 = 10,200. The probability is 10,200/2,598,960 = .0039246468.

From all of these calculations, we can set up a Hierarchy for the winning hands.

   Hand               Probability
1) Royal Flush        .00000153908
2) Straight Flush     .0000138517
3) Four-of-a-Kind     .000240096
4) Full House         .00144058
5) Flush              .00196540
6) Straight           .0039246468
7) Three-of-a-Kind    .02112845
8) Two Pair           .04753940
9) Two-of-a-Kind      .42256903
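Each count behind this hierarchy can be re-derived from the combinatorial formulas given earlier. The sketch below (the class name PokerHands and its method names are mine) reproduces the counts and the total probability of being dealt some winning hand.

```java
// Reproduces the winning-hand counts derived above from the combinatorial set-ups.
public class PokerHands {
    static long C(int n, int r) {                       // nCr, exact integer arithmetic
        long res = 1;
        for (int i = 1; i <= r; i++) res = res * (n - r + i) / i;
        return res;
    }
    public static long royalFlush()    { return 4; }                                      // one per suit
    public static long straightFlush() { return 36; }                                     // 9 sequences x 4 suits
    public static long fourOfAKind()   { return 13 * C(4,4) * C(12,1) * 4; }              // = 624
    public static long fullHouse()     { return 2 * C(13,2) * C(4,3) * C(4,2); }          // = 3744
    public static long flush()         { return 4 * C(13,5) - 40; }                       // = 5108
    public static long straight()      { return 10 * 4*4*4*4*4 - 40; }                    // 10 x (4C1)^5 - 40 = 10200
    public static long threeOfAKind()  { return 13 * C(4,3) * C(12,2) * 4 * 4; }          // = 54912
    public static long twoPair()       { return C(13,2) * C(4,2) * C(4,2) * C(11,1) * 4; }// = 123552
    public static long onePair()       { return 13 * C(4,2) * C(12,3) * 4 * 4 * 4; }      // = 1098240

    public static void main(String[] args) {
        long total = royalFlush() + straightFlush() + fourOfAKind() + fullHouse()
                   + flush() + straight() + threeOfAKind() + twoPair() + onePair();
        System.out.println("Winning hands: " + total);
        System.out.println("P(some winning hand) = " + (double) total / C(52, 5));
    }
}
```

The counts sum to 1,296,420, giving a total winning probability of about .4988 out of the 2,598,960 possible hands.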

The sum of these probabilities is approximately .4988230. Therefore, the probability that you will be dealt a hand that has none of these winning hands is approximately 1 – .4988230 = .5011770.

CONCLUSION: This article touches on the basic concept of Sample Space, and makes the point that probabilities cannot be determined unless it is clear how many ways in totality an event can occur. The beginning discussion of being dealt 2, 3, and 4 cards demonstrates how the sample space is adjusted, since dealing cards is typically done Without Replacement.

To consider the Sample Space for all possible Poker hands of 5 cards, we must use the formula for combinations: nCr = n!/[(n – r)!r!]. Once the total number of possible hands is determined, we can then go about the business of calculating the Winning Hands. Lastly, a Hierarchy of the winning hands is presented. After that, we can easily calculate the probability of not getting any of the winning hands by calculating 1 – P(Winning). Note that there is more than a 50% chance of not getting any of the winning hands.


REFERENCES:

Blitzer, Robert. 2011. Thinking Mathematically, 5th Edition, p. 664. Prentice Hall, Boston.
Triola, M. F. 2014. Elementary Statistics, 12th Edition. Pearson Education, Inc., Boston.
http://www.math.hawaii.edu/~ramsey/Probability/PokerHands.html


The Street Game of CRAPS
By
Dr. Jack Alexander
Department of Mathematics
Miami Dade College, North Campus

ABSTRACT: The Street Game of CRAPS is at least one hundred years old. Typically, the game is played by a group of 4 to 7 players. Each player has a turn to toss two dice until he or she wins or loses. The thrower wins on the first toss if a total of 7 or 11 is tossed and loses on the first toss if a total of 2, 3, or 12 is thrown. If a total other than the aforementioned totals is tossed, this is called a point. The thrower then tosses the dice repeatedly until a total of 7 or the point is tossed. He or she wins if the point is tossed again before 7 is tossed. If a total of 7 is tossed before the point, it is called “CRAPS” and the thrower has to pay all of the other players. What is the probability that the thrower will win?

KEYWORDS: dice, point, probability, infinity, Street Game of Craps

AMS Subject Classification 2010: 62-07

INTRODUCTION: My fascination with this game is that there are people who have played the game for 20 to 30 years and do not realize that having the dice is a losing proposition. What follows is a detailed analysis of how to determine the probability of winning if it is your turn to toss the two dice. Consider the grid below that illustrates all of the possible toss results.

FIGURE 1

             6 |  7   8   9  10  11  12
             5 |  6   7   8   9  10  11
             4 |  5   6   7   8   9  10
  1ST DIE    3 |  4   5   6   7   8   9
             2 |  3   4   5   6   7   8
             1 |  2   3   4   5   6   7
               ---------------------------
                  1   2   3   4   5   6
                        2ND DIE

Narrative:

From the grid in Figure 1 it is easy to see that the probability of tossing a sum of 7 is 6/36 and the probability of tossing an 11 is 2/36. Therefore, the probability of winning on the first toss is (6 + 2)/36 = 8/36 = 2/9, which is about 22%. The probability of tossing a 2 is 1/36 and the probability of tossing a 3 is 2/36, while the probability of tossing a 12 is 1/36. Hence the probability of losing on the first toss is (1 + 2 + 1)/36 = 4/36 = 1/9, which is about 11%.

We can also easily calculate the point probabilities, which are in pairs as indicated below.

Point probabilities:

P(4)  =  P(10) =  3/36

P(5)  =  P(9)   =  4/36

P(6)  =  P(8)  =  5/36
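These point probabilities are in fact all that is needed to obtain the thrower's overall winning chance in closed form: each point with single-toss probability p contributes a geometric series p^2(1 + q + q^2 + ...) = p^2/(1 - q), where q is the chance of tossing neither the point nor 7. A sketch (the class name CrapsExact is mine):

```java
// Closed-form craps win probability:
// P(win) = 8/36 + 2 * [ p^2/(1-q) summed over the points 4, 5, 6 ],
// where p is the point's single-toss probability and q = (36 - 6 - ways)/36.
public class CrapsExact {
    public static double winProbability() {
        double p = 8.0 / 36;                  // win on the first toss: P(7) + P(11)
        int[] pointWays = {3, 4, 5};          // ways to toss 4 (or 10), 5 (or 9), 6 (or 8)
        for (int w : pointWays) {
            double pt = w / 36.0;             // P(establish this point)
            double q  = (36.0 - 6 - w) / 36;  // P(neither the point nor 7 on a later toss)
            p += 2 * pt * pt / (1 - q);       // factor 2 covers the mirror point
        }
        return p;                             // 244/495 = 0.49292929...
    }
    public static void main(String[] args) {
        System.out.println("P(thrower wins) = " + winProbability());
    }
}
```

The closed-form value is exactly 244/495, matching the limiting value the article develops toss by toss.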

The probability of winning on the second toss is, therefore, (3/36)(3/36), if the point is 4 or 10. The probability of winning on the second toss is (4/36)(4/36), if the point is 5 or 9. And, the probability of winning on the second toss is (5/36)(5/36), if the point is 6 or 8. Continuing with this line of analysis, the probability of winning on the third toss is (3/36)(27/36)(3/36), if the point is 4 or 10. Note that the probability of not tossing a sum of 7 or the point is (36 – 6 – 3)/36 = 27/36. The probability of winning on the third toss is (4/36)(26/36)(4/36), if the point is 5 or 9. And, the probability of winning on the third toss is (5/36)(25/36)(5/36), if the point is 6 or 8.

Considering the probability of winning on the fourth toss reveals a definite pattern. Note that the probability of winning on the fourth toss is (3/36)(27/36)^2(3/36), if the point is 4 or 10. The probability of winning on the fourth toss is (4/36)(26/36)^2(4/36), if the point is 5 or 9. And, the probability of winning on the fourth toss is (5/36)(25/36)^2(5/36), if the point is 6 or 8.

In general, the probability of winning on the nth toss, where n > 1, is given by the three expressions below.

1) (3/36)^2(27/36)^(n-2), if the point is 4 or 10
2) (4/36)^2(26/36)^(n-2), if the point is 5 or 9
3) (5/36)^2(25/36)^(n-2), if the point is 6 or 8

As n approaches infinity, the probability of winning on the nth toss approaches zero. For example, the probability of winning on the 10th toss is given by: 2 x ((3/36)^2(27/36)^8 + (4/36)^2(26/36)^8 +

(5/36)^2(25/36)^8) = .0053. The probability of winning on the 100th toss is: 2 x ((3/36)^2(27/36)^98 + (4/36)^2(26/36)^98 + (5/36)^2(25/36)^98) = .000000000000008.

Considering the probability of winning on the first toss, which is .222..., along with the probabilities of winning on the 2nd, 3rd, 4th, or later tosses, the sum of these probabilities approaches .49292929.... The calculations involved are significant. Hence, a JAVA program was written to do the calculations until the solution is derived. A copy of that program is presented below.

JAVA Program

//This program calculates the probability of winning the game of CRAPS
import java.io.*;

public class CRAPS
{
    public static void main (String[] args) throws IOException
    {
        InputStreamReader reader = new InputStreamReader(System.in);
        BufferedReader input = new BufferedReader(reader);
        System.out.print("Input the number of tosses: ");
        int q = Integer.parseInt(input.readLine().trim());
        double d = 36;
        double[] t = new double[q + 1];
        t[1] = 8 / d;   // win on the first toss: P(7) + P(11) = 8/36
        System.out.println("The probability of a win on toss 1 = " + (float) t[1]);
        for (int i = 2; i <= q; i++)
        {
            double tt = 0;
            for (int j = 3; j <= 5; j++)
            {
                // j/36 is the point probability (3, 4, or 5 ways);
                // (30 - j)/36 is the chance of tossing neither the point nor 7;
                // the factor 2 covers the mirror point (10, 9, or 8)
                tt += 2 * Math.pow(j / d, 2) * Math.pow((30 - j) / d, i - 2);
            }
            t[i] = tt;
            System.out.println("The probability of a win on toss " + i + " is " + (float) tt);
        }
        double p = 0;
        for (int k = 1; k <= q; k++)
            p += t[k];
        System.out.println("The Overall Probability is = " + (float) p);
    }
}

This program is interactive. The user is asked to input the number of tosses. Once that information is entered, the program prints the probability that the person tossing wins on the first and subsequent tosses and, lastly, prints the overall probability. For example, if 100 is input, the overall probability calculated is .49292928.

CONCLUSION: The calculated probability indicates that if a person is tossing the dice, there is less than a 50% chance of winning. This is counter to what most people who play the game believe. They think that the person tossing the dice controls the game. The above analysis indicates the opposite. Every player gets a turn to toss the dice. However, if you can get another player to buy the dice, this is permitted. This writer has observed people willingly buying the dice. This, of course, means that you would be paying to have less chance to win. So much for the concept of “Street Smart”.

REFERENCES:

McGervey, John D. 1986. Probabilities in Everyday Life. Ballantine Books (Random House, Inc.), New York.
Mood, A. M., Graybill, F. A., and Boes, D. C. 1963. Introduction to the Theory of Statistics, 3rd Edition. McGraw-Hill Book Company, New York.


The Impact of Globalization and Environmentalism on Performance for Quantitative Reasoning, Communication and Critical – Creative Thinking at MDC-Hialeah Campus
By Jaime Bestard, Ph.D.
Associate Professor Senior, Mathematics, MDC-Hialeah Campus
E-mail address: jbestard@mdc.edu

Theme: Social Civic and Environmental Responsibility, Students Learning Assessment Key Words: Globalization, Internationalization, Environmentalism, Assessment

Abstract

This paper analyzes how college students show persistently low performance on three General Education Learning Outcomes: quantitative reasoning, quantitative communication, and creative-critical thinking, which alters the completion schedule for their graduation in terms of the application of the competencies of the course, and how the introduction of globally and sustainability-oriented assignments may improve performance.

The analysis of the application of sustainability and globalization concepts in the motivation to improve the performance of students in specific activities of their coursework is intended to promote ideas for implementing sustainability principles in mathematical problems.


A statistical analysis compares traditional versus globally and environmentally oriented coursework, shows how each affects the performance of students, and permits monitoring the ability of students to improve their critical thinking in the solution of a problem applied to a global scenario that may produce environmental consequences, using mathematical modeling and computerized statistical analyses.

The curriculum integration obtained by implementing the recommendations of this research attempts to develop success among the mathematics students in their coursework.


Introduction: College students show persistently low performance on three General Education Learning Outcomes: quantitative reasoning, quantitative communication, and creative-critical thinking (3).

This research is directed to analyze the impact of the application of sustainability and globalization concepts in the motivation to improve the performance of students in specific activities of their coursework.

This research examines how traditional coursework, compared with globally and environmentally oriented coursework, may impact the performance of students, and monitors the ability of students to improve their critical thinking in the solution of a problem applied to a global scenario that may produce environmental consequences, using mathematical modeling and computerized statistical analyses.

Methods:

Since critical thinking is, in short, self-directed, self-disciplined, self-monitored, and self-corrective thinking, it presupposes assent to rigorous standards of excellence and appropriate command of their use. It entails effective communication and problem-solving abilities and a commitment to overcome native egocentrism and sociocentrism (10).


The study consists of a plan to implement research techniques that promote STEM students' development of such critical thinking, social and civic responsibility, and understanding-systems skills, and to compare the corresponding results to the traditional instructional techniques implemented college-wide at MDC today. The sections are two MAC1105 “College Algebra” sections taught by the same instructor with the same course syllabus but directed to different topics in projects about the analysis of functions.

While the placebo is MAC1105 College Algebra, reference number 724667, a traditional course with the typical assignments related to investigating functions that most of the time are general or applied to a specific real-life scenario, the experimental unit is MAC1105 College Algebra, reference number 724670. Both were taught during the Spring 2012-02 semester over the typical schedule of sixteen weeks, with similar populations of students in terms of demographics and previous academic track.

The research question: Do the students in sections that incorporate critical thinking strategies to analyze a global scenario from an environmental, sustainable perspective perform better than those exposed to regular instruction? The experimental unit was assigned a system of integrative techniques of global instruction and environmental policies.

The use of critical thinking strategies develops generalization skills in every new topic of globalization and environmentalism, producing the retroactive effect of bringing back previous principles and keeping them alive to apply to the current situation, using a “building blocks” or


integrative learning strategy, enabling fully comprehensive quantitative reasoning as conducted by other institutions nationwide and worldwide, like the University of Kentucky System (11).

Materials:

The MAC1105 College Algebra course competencies orient to assess learning on properties of piecewise, quadratic, polynomial, radical, rational, exponential, and logarithmic functions, as part of the main concepts of the mathematical modeling foundations students acquire in the development of quantitative reasoning.

The importance of the assessment of the learning outcomes related to quantitative communication and reasoning, critical thinking, and global perspective with social responsibility promotes the application of real-life scenario case exercises (1, 2, and 8). Several assignments are usually given to students in the typical course design on the properties of functions, but it is very important to set students up with real-life scenarios that permit them to train on cases they may be exposed to in their respective professional fields (4, 5, 6, 7, 9, 10).

For the typical instruction (placebo), the three projects in the analysis of functions are currently: a) P1: Parabolic profit function; b) P2: Mechanical force function (polynomial function); c) P3: C14 radioactive decay function (exponential – logarithmic function).


While for the experimental unit the projects were directed to the following themes: a) Pe1: Parabolic economic model of the loser countries after WWII (Japan, Italy and Germany); b) Pe2: GDP of countries over time (polynomial function); c) Pe3: Population pyramid by gender and age group in specific countries with special social circumstances (exponential – logarithmic function).

Results:

The results on those assignments were recorded, collected, and classified into placebo and experimental units (Pi and Pei, respectively), and then processed statistically using the software MINITAB 15. ANOVA and t-tests were conducted to magnify differences between treatments, with the corresponding displays. Figure 1 shows the data display of independent, normally distributed grades for both sections. Sample sizes are between 24 and 27.

Figure 1. Data Display

Row   P1  P2  P3  Pe1  Pe2  Pe3
 1    68  81  77   72   58   68
 2    63  83  75   85   70   94
 3    70  89  72   77   72   90
 4    61  65  67   90   61   87
 5    65  71  68   82   58   93
 6    78  87  81   68   83   83
 7    81  67  68   93   82   76
 8    61  80  89   82   70   76
 9    71  77  90   77   58   92
10    75  74  75   74   59   81
11    56  83  68   71   68   72
12    81  78  74   76   59   69
13    69  62  94   73   71   73

Remaining observations (rows 14 and beyond, by column):
P1:  82 62 72 82 81 84 68 76 82 81 72 73 85 69
P2:  75 63 76 81 63 67 90 73 65 65 85 90 74
P3:  84 67 80 94 90 91 94 87 89 90 87
Pe1: 91 85 75 81 73 86 94 92 93 94 80 77 72 71
Pe2: 68 78 60 58 72 66 83 67 63 67 69 93
Pe3: 91 84 75 68 90 79 88 69 92 73 83

To test for significant differences among the components of the respective typical and global/sustainability instructional processes, one-way ANOVA was conducted, resulting in significant differences (α = 0.05) for both the placebo and the experimental unit; this confirms the effect of the different treatments, as shown in Figure 2 and Figure 3 with the corresponding analysis of residuals, after a normality test of the data.

Figure 2. Placebo
One-way ANOVA: P1, P2, P3

Source  DF      SS     MS     F      P
Factor   2   926.4  463.2  5.75  0.005
Error   74  5964.1   80.6
Total   76  6890.5

S = 8.978   R-Sq = 13.45%   R-Sq(adj) = 11.11%

Individual 95% CIs For Mean Based on Pooled StDev
Level   N    Mean  StDev
P1     27  72.889  8.229
P2     26  75.538  9.008
P3     24  81.292  9.724

Pooled StDev = 8.978
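The ANOVA table in Figure 2 can be checked from its summary statistics alone, since the between-group sum of squares needs only the group sizes and means, and the within-group sum of squares only the sizes and standard deviations. A sketch (the class name OneWayAnova is mine; the inputs are the N/Mean/StDev values reported above):

```java
// Rebuilds the one-way ANOVA F statistic from group summary statistics:
// SSB = sum of n_i (mean_i - grand mean)^2 ; SSW = sum of (n_i - 1) s_i^2 ;
// F = (SSB/(k-1)) / (SSW/(N-k)).
public class OneWayAnova {
    public static double fStatistic(int[] n, double[] mean, double[] sd) {
        int k = n.length, total = 0;
        double grand = 0;
        for (int i = 0; i < k; i++) { total += n[i]; grand += n[i] * mean[i]; }
        grand /= total;                                  // weighted grand mean
        double ssb = 0, ssw = 0;
        for (int i = 0; i < k; i++) {
            ssb += n[i] * (mean[i] - grand) * (mean[i] - grand);
            ssw += (n[i] - 1) * sd[i] * sd[i];
        }
        return (ssb / (k - 1)) / (ssw / (total - k));    // MS between / MS within
    }
    public static void main(String[] args) {
        // P1, P2, P3 summaries from Figure 2
        int[] n = {27, 26, 24};
        double[] mean = {72.889, 75.538, 81.292};
        double[] sd = {8.229, 9.008, 9.724};
        System.out.printf("F = %.2f%n", fStatistic(n, mean, sd));  // about 5.75
    }
}
```

With the Figure 2 summaries this reproduces F = 5.75, matching the MINITAB output.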


Residual Plots for P1, P2, P3: normal probability plot of residuals, residuals versus fitted values, and histogram of residuals.

Observe that the coefficients of determination of the linear models defined by ANOVA are very low (R-Sq < 40% in both cases, Figures 2 and 3), a result that eliminates the possibility of establishing a linear association from the perspective of the practical application.

Figure 3. Experimental unit
One-way ANOVA: Pe1, Pe2, Pe3

Source  DF      SS      MS      F      P
Factor   2  3195.0  1597.5  22.52  0.000
Error   73  5178.9    70.9
Total   75  8373.9

S = 8.423   R-Sq = 38.15%   R-Sq(adj) = 36.46%

Individual 95% CIs For Mean Based on Pooled StDev
Level   N    Mean  StDev
Pe1    24  81.833  8.318
Pe2    27  68.148  7.863
Pe3    25  81.560  9.083

Pooled StDev = 8.423

Residual Plots for Pe1, Pe2, Pe3: normal probability plot of residuals, residuals versus fitted values, and histogram of residuals.

The significant differences found among the components of each instructional system document course progression, as per the analysis of the respective descriptive measures at the 5% significance level. It is remarkable how the side-by-side displays traced by MINITAB in Figures 2 and 3 illustrate the significant differences between the assignments in the placebo (traditional instructional system) and the experimental unit (global/sustainable method).

Figure 4 shows the corresponding boxplots for each instructional assignment in both the placebo and the experimental unit. Observe how in the assignment Pe2 (GDP of countries over time) there was a step back, with performance lower than on the typical instructional assignment. This is the case of the polynomial function, where it seems that the assigned model may not correspond that well to the data the students accessed; therefore the association to the polynomial model might not be optimal. This factor is to be considered when implementing new instructional models

since there are exceptions that may be directed from previous experiences that may not match the real data collected, and that situation may negatively impact the performance of the students, as in this experimental case.

Discarding the negative effect of the Pe2 assignment, it is evident from the exploratory data analysis of the remaining data sets that the experimental units tend to perform at a higher level than under the typical instructional practices (Figure 4). Observe how the medians in the center of the boxes for Pe1 and Pe3 stand slightly ahead of those for P1 and P3, a result that is also observed in the case of the boxes, or the central 50%, which show similar extents that represent the consistency of the data.

Figure 4. Comparative Boxplots for All the Assignments in Placebo and Experimental Unit (side-by-side boxplots of P1, P2, P3, Pe1, Pe2, Pe3 on a common grade scale from 55 to 95).

When comparing each assignment with its respective global/sustainable assignment (Figures 5, 6, 7), it is observed that the pairs P1-Pe1 and P2-Pe2 differed significantly at α = 0.05, while P3-Pe3 failed to show significant differences at α = 0.05.

Again, the lack of a significant difference in this comparison may be due to the fact that the countries taken for analysis might not follow an actual exponential-logarithmic model of population aging in recent years, another factor that may constitute a flaw in applying the global/sustainable models.

Below is the comparison for each assignment using a t-test. Each test results significant at the 5% significance level except P3 vs. Pe3, which tested not significant at the 5% level, providing insufficient evidence of a difference between the placebo and the experimental unit. The side-by-side box plots show the descriptive data analysis, with the boxes illustrating the significant differences.

Figure 5. Two-Sample T-Test and CI: P1, Pe1

Two-sample T for P1 vs. Pe1:
      N   Mean  StDev  SE Mean
P1   27  72.89   8.23      1.6
Pe1  24  81.83   8.32      1.7

Difference = mu (P1) - mu (Pe1)
Estimate for difference: -8.94
95% CI for difference: (-13.61, -4.28)
T-Test of difference = 0 (vs. not =): T-Value = -3.85, P-Value = 0.000, DF = 48
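These two-sample results can be checked from the reported summary statistics alone. A minimal sketch using SciPy, assuming (as the unpooled MINITAB output suggests) Welch's t-test:

```python
# Recompute the Figure 5 two-sample t-test (P1 vs. Pe1) from the
# reported summary statistics. equal_var=False requests Welch's
# unpooled test, matching the MINITAB output shown above.
from scipy.stats import ttest_ind_from_stats

t, p = ttest_ind_from_stats(
    mean1=72.89, std1=8.23, nobs1=27,  # placebo unit P1 (n = 27)
    mean2=81.83, std2=8.32, nobs2=24,  # experimental unit Pe1 (n = 24)
    equal_var=False,
)
print(round(t, 2))  # t ≈ -3.85, p < 0.001: reject H0 at the 5% level
```

The same call with the Figure 6 and Figure 7 summaries reproduces T = 3.18 (P = 0.003) and T = -0.10 (P = 0.921), the latter failing to reject the null hypothesis.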


Boxplot of P1, Pe1 (vertical axis: Data, 55-95).

Figure 6. Two-Sample T-Test and CI: P2, Pe2

Two-sample T for P2 vs. Pe2:
      N   Mean  StDev  SE Mean
P2   26  75.54   9.01      1.8
Pe2  27  68.15   7.86      1.5

Difference = mu (P2) - mu (Pe2)
Estimate for difference: 7.39
95% CI for difference: (2.72, 12.06)
T-Test of difference = 0 (vs. not =): T-Value = 3.18, P-Value = 0.003, DF = 49


Boxplot of P2, Pe2 (vertical axis: Data, 60-90).

Figure 7. Two-Sample T-Test and CI: P3, Pe3

Two-sample T for P3 vs. Pe3:
      N   Mean  StDev  SE Mean
P3   24  81.29   9.72      2.0
Pe3  25  81.56   9.08      1.8

Difference = mu (P3) - mu (Pe3)
Estimate for difference: -0.27
95% CI for difference: (-5.68, 5.15)
T-Test of difference = 0 (vs. not =): T-Value = -0.10, P-Value = 0.921, DF = 46


Boxplot of P3, Pe3 (vertical axis: Data, 70-95).

As a matter of regularity in the analysis, observe that a P-value less than α assures sufficient evidence to reject the null hypothesis in each statistical treatment, as presented in the displays of the analysis.


Conclusions:
1) The application of global/environmental-sustainable models to the analysis of functions is an excellent opportunity to improve the motivation of students and their performance in "at risk, gatekeeper" traditionally low-performing courses.
2) It is remarkable to observe the significant differences in performance that occur when the applicability of the global/sustainable model is assured (P1 vs. Pe1), in terms of adequate functional correspondence to the data students access.
3) Further data analysis in other courses is recommended to assure the correct implementation of mathematical global/sustainable models. For that reason, it is recommended to assign such a task to STA2023 Statistical Methods students as a service-learning project/class assignment, validating statistically the applicability of the data collected by using correlation-regression analysis on farmers' forecasts of rainfall in agricultural regions (Appendix).




Appendix. Lesson Plan Outline: "Developing a statistical model to forecast rainfall for a local agricultural region."

I. Subject: STA2023 Statistical Methods. Topic: "Enhancing curricula via Global/Sustainability Perspectives." Grade: College level (sophomore). Instructor: Jaime Bestard, Associate Professor, Senior, Mathematics, MDC.

II. Goal: To promote sustainability ideas via the use of correlation-regression analysis to forecast precipitation in a local rural area using farmers' criteria, and to enhance quantitative literacy via globalization-sustainability principles.

III. Instructional objectives:
1. Use sampling techniques to allocate information and to collect data from a real-time database.
2. Organize real-time data.
3. Display data.
4. Understand data by using descriptive and inferential procedures, producing a written report that communicates the results effectively using a sustainable understanding of systems.

IV. Content outline:
1. Identify the problem and collect data.
2. Display the data in a scatter plot.
3. Analyze the assumptions for correlation and conduct the test.
4. Analyze the assumptions for regression and conduct the test.
5. Write the report of the findings.

V. Instructional strategies:
1) Description of the activity: This activity is embedded in the tentative schedule of the course syllabus around weeks XIII-XV, when students have a clear idea of the inferential procedures. It is intended to develop awareness of the cultural influences in several international regions and of how to quantify the strength of the relationship between the popular beliefs of farmers in different geographic regions, the idea of sustainable development, and the real rainfall data used to predict the agricultural weather during the year.
2) Methods, materials and activities: Using MINITAB statistical software, the students collect real-time data for specific agricultural regions of the South Florida Water Management District from sites like http://www.sfwmd.gov/portal/page/portal/xweb%20weather/rainfall%20historical%20%28daily%29, make comparisons with other geographic regions, and then enter a discussion of the results from the statistical analysis of correlation and the related inferential procedures. Students will collect data for the daily average rainfall during January and correlate it to the monthly average over the last 10 years using real-time data systems. Using technology, they will construct the MINITAB display of the scatter plot and compute the correlation coefficient and its significance for the predictor-response pair. In addition, the students will produce the least-squares linear regression equation and the graph with the 95% confidence interval for the prediction and the 95% prediction interval.
3) From the real-time map of the South Florida Water Management District, the students will collect data in their respective assigned areas. This activity constitutes by itself a home-learning extension activity.
4) Assessment plan: A written statistical report producing descriptive and inferential conclusions about the forecast of the monthly precipitation based on the first twenty-four days of January, collecting data from the last ten years. Assessment is conducted via the out-of-class statistical written report the students submit electronically to the instructor, in the form of a typical statistical analysis of correlation, according to the following rubric.
5) Rubric (as per the typical rubric for STA2023 Statistical Methods assignments):
Exemplary: Produces exemplary comparative data analysis, with detailed estimation and a step-by-step hypothesis-testing procedure, concluding about regularities in the agricultural weather forecast from the early rainfall during the first days of January. Describes the specific data collection and the association between two variables (daily average rainfall during the first 24 days of January, 12 in and 12 out, vs. the monthly average during the last ten years). Includes residual analysis. Explains with examples the different approaches of farmers across different cultures and countries, and how they forecast the weather with natural observations and empirical estimates using cosmic signs and phases of planets and celestial bodies.
Proficient: Missing one of the exemplary topics mentioned above, or missing either a justification or an assumption needed to conduct inferential procedures.

Developing: Missing more than one of the topics mentioned above, or an error in the analysis.
Emerging: Missing most of the components of the analyses, with errors in the evidence provided; incomplete descriptive analysis or inferential procedures; and missing justification to conduct a procedure.

Assessment is formative, in and out of class, via the written report in the comparative settings with several countries. This activity is incorporated into the syllabus around weeks XIV-XVI, when chapter 10 is explained.
6) a) MDC General Education Targeted Learning Outcomes: 1, 2, 3, 4, 5, 6, 10 (as per LOAT)
b) MDC Mathematics Discipline Targeted Learning Outcomes: 1, 2 (as per DLO)
c) Targeted STA2023 Course Competencies: 1, 2, 3, 4, 5 (as per course syllabus)
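The correlation-regression core of the assignment can be sketched in a few lines of Python. The ten (daily average, monthly average) rainfall pairs below are hypothetical placeholders, not data drawn from the SFWMD site:

```python
# Sketch of the assignment's analysis: correlate the daily average
# January rainfall (predictor) with the monthly average (response)
# over ten years, then fit a least-squares regression line.
# All rainfall values are hypothetical illustrations.
from scipy.stats import linregress, pearsonr

# One (x, y) pair per year (hypothetical inches of rain).
daily_avg_jan = [0.08, 0.12, 0.05, 0.10, 0.15, 0.07, 0.11, 0.09, 0.13, 0.06]
monthly_avg   = [2.3,  3.4,  1.6,  2.9,  4.2,  2.0,  3.1,  2.6,  3.7,  1.8]

r, p_value = pearsonr(daily_avg_jan, monthly_avg)  # strength + significance
fit = linregress(daily_avg_jan, monthly_avg)       # least-squares line

print(f"r = {r:.3f} (p = {p_value:.4f})")
print(f"monthly_avg ~ {fit.slope:.2f} * daily_avg + {fit.intercept:.2f}")
```

Students would replace the placeholder lists with the values collected from their assigned region, then justify the correlation and regression assumptions before interpreting the output, as the rubric requires.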


The Evolution of Creationism in Historic and Legal Context
Dr. Melissa Lammey
Associate Professor of Philosophy
Miami Dade College, Hialeah Campus
E-mail: mlammey@mdc.edu

ABSTRACT
The 1991 publication of Darwin on Trial marked a new turn in the creationist movement, led by its author, Phillip E. Johnson. This book and other publications by Johnson introduce a strategy of attacking the philosophical foundation of evolutionary theory, which he believes is a view called naturalism. Evolutionary theory alone, he claims, does not contradict creationism. Johnson contends that it is only when evolutionary theory is guided by philosophical naturalism that it becomes what he labels Darwinism and defines as a type of metaphysical dogma. His strategy, which he calls the 'wedge strategy,' is to eliminate philosophical naturalism from the sciences so that supernatural explanations can work their way in. Johnson is recognized by many as the founder of the intelligent design movement, which aims to provide just those sorts of explanations. Intelligent design has been the most recent face of creationism, but it has not found success in winning over the scientific community or in its attempts to affect US public education. Here, I will first discuss key elements of the history of creationism until Johnson, focusing specifically on its legislative activism regarding public education in the US through the 1980s. After doing so, I will explain arguments for intelligent design theory from Johnson and others and consider what objections have been raised to these. Finally, I will conclude with a discussion of Kitzmiller v. Dover Area School District, in which intelligent design theory suffered its most recent courtroom defeat.

Keywords: creationism, evolution, wedge strategy, intelligent design

1. Numbers, Ronald. "The Creationists," in But Is It Science? 2nd ed. Robert Pennock and Michael Ruse, eds. Amherst: Prometheus Books, 2009. pp. 192-230.
2. Pennock, Robert T. Tower of Babel: The Evidence Against the New Creationism. Massachusetts: MIT Press, 1999.
3. Larson, Edward J. Summer for the Gods. New York: Basic Books, 1997. p. 23.
4. Ibid., p. 24.
5. Ibid., pp. 51-58.
6. Ibid., p. 71.
7. Larson explains that Darrow appealed to the Tennessee state constitution, as the US Supreme Court had not yet determined that the establishment clause of the US constitution applied to state laws. Tennessee, though, guaranteed the similar right of citizens to determine their own religious conscience. Ibid., p. 163.

Once the trial was underway, the prosecution was satisfied to call as witnesses the school children who could testify to what happened in the classroom, the Superintendent of Education, and a member of the school board who had a conversation with Scopes in which he recognized the unlawfulness of teaching evolution. The defense strategy, however, was more complex and introduced a now familiar strategy in negotiating the debate between evolution and creationism. Their strategy was to show that Scopes could not have violated the law because there is no necessary conflict between evolutionary theory and the creation story in the Bible. They argued that whether an individual interprets the two as conflicting, compatible, or equally true is a matter of personal faith that cannot be legislated. To support the compatibilist view, they brought witnesses regarded as experts in the scientific and religious communities. As Bryan had clearly argued for the incompatibility of the two in championing the legislation, the defense readily denounced him and claimed that he did not represent the whole of the Christian faith in the United States.

After the defense explained their strategy and called their first witness, Stewart immediately objected, saying that any testimony about evolution and whether or not it necessarily conflicts with religion was irrelevant because the law stipulated that evolution may not be taught, regardless of its meaning or whether or not it conflicts with the Biblical account of creation. As history has shown, this has become a common, though less than academically honest, strategy of creationists with the political agenda of repudiating evolutionary theory in public schools. Because the outcome for education depends on the success of arguments in the courtroom, legal strategy is often employed in attempts to win the day for creationists, regardless of whether the logic of the academic debate is on their side.

They have continued to be unsuccessful in such attempts, as they were on this day in the Scopes Trial when the court ruled that the defense witness would be heard, though the ruling was tentative. The legal context aided the defense here also, as the language of the act itself was appealed to insofar as it explicitly prohibited the teaching of evolution that denied the Biblical account of creation. The act introduced some measure of ambiguity, as it prohibited both the teaching of evolution and the teaching of any theory that denied the Biblical account of creation. Since the Scopes trial, legislation proposed to challenge evolutionary theory has been a bit more carefully crafted.

As experts in biology and theology took the stand, the real debate began before the public eye. The involvement of the public is significant, as it revealed the widely held attitude that human origins and religion are matters that any one person is as good a judge of as any other. The crowd scoffed at the expertise of scientists and theologians, and prosecutors maintained that the will of the citizens must be done in public education despite expert consensus on the subject in question. A matter at stake here was the very nature of public education. Perhaps Butler revealed a popular view on this matter when he introduced the antievolution legislation, as he aimed to "promote citizenship based on Biblical concepts of morality."8 Speaking for the defense, Dudley Malone revealed another: "We feel we stand with science. We feel we stand with intelligence. We feel we stand with fundamental freedom in America."9 The crowd was swayed by Malone on this matter as their sense of justice was roused. The court of public opinion leaned toward the

8. Ibid., p. 50.
9. Ibid., p. 179.


10. Numbers. 2009. pp. 192-230.
11. Pennock. 1999. p. 3.
12. Wexler, Jay D. "From the Classroom to the Courtroom: Intelligent Design and the Constitution," in Not In Our Classrooms. Eugenie Scott and Glenn Branch, eds. Boston: Beacon Press, 2006. pp. 83-104.


Initially introduced by John C. Whitcomb, Jr.'s book, The Genesis Flood, creation-science aimed to show that elements in the traditional creation story, such as a sudden creation of the universe and a great flood, are verified by scientific evidence.13 In this way the strategy was to suggest that creation-science actually was science, effectively abandoning Bryan's strategy of willful adherence to Biblical literalism that proved disastrous for him during the Scopes Trial. The strategy, though, was also to introduce the 'dual model' argument that still persists in the creationist strategy today. The dual model argument suggests that of the two, evolution and creation, one is necessarily true and one is necessarily false. If this approach to the debate is adopted, then both positive evidence and successful attacks against the other side are evidence of truth. For this reason, creation-science attempts to use science to critique evolutionary theory and suggests that the evidence for it is far from complete.14

Creationists were successful in getting creation-science into Arkansas classrooms with the passage of Act 590 in 1981. This act applied to elementary and secondary schools and required the balanced treatment of creation-science and evolution-science. While the act conceded that evolution was indeed science, a claim that was denied by creationists in the Scopes era, labeling it 'evolution-science' qualified it as only a type of science, and a type of science that is rivaled by another type, namely creation-science. The language of the act placed creation-science within its rights not only in advancing scientific evidence for acts of creation, but also in attacking the evidence for evolutionary theory. Further, it claimed to disallow the teaching of any religious view and did not require instruction on the origin of the universe and of life at all. It only required that where schools chose to introduce evolutionary theory, they must introduce creation-science as well. As it was structured, Act 590 attempted to avoid violating the Establishment Clause by not explicitly advancing religion in the science classroom.

Act 590 was challenged in the 1982 case, Bill McLean et al. v. Arkansas Board of Education. McLean, also supported by the ACLU, was the most significant trial in the larger debate since the Scopes Trial. It not only addressed a violation of the Establishment Clause, but it also entered into academic territory by ruling on the definition of science itself. It brought together scholars and experts in the scientific and religious communities, and it introduced arguments from philosophy of science. Of the expert testimony presented in the case, arguments were made that the attacks on evolutionary theory employed by creation-scientists were fallacious and that creation-science was indeed religion, and not science. The testimony that proved most influential to Judge William R. Overton's ruling came from Michael Ruse, a philosopher of science. In his testimony, Ruse discredited the dual model approach, illustrated that creation-science assumes the existence of a Creator, even though the language of Act 590 explicitly attempts to avoid this assumption, and defined the standards by which a theory may properly be deemed scientific.15 Speaking specifically on the matter of the origin of life on this planet, Ruse rejects the dual model approach as a false dilemma because it assumes that the only two options are creation and abiogenesis, which is the view that life arose from inorganic matter through natural processes. As he explains,

13. Pennock. 1999. p. 4.
14. Ibid., pp. 181-185.
15. Ruse, Michael. "Witness Testimony Sheet: McLean v. Arkansas Board of Education," in But Is It Science? 2nd ed. Robert Pennock and Michael Ruse, eds. Amherst: Prometheus Books, 2009.


there are in fact other theories, including that life began on this planet as a result of the actions of intelligent beings elsewhere in the universe, and that our planet passed through a cloud of organic matter which took root here. In regard to the necessity of a creator, he explains that 'originally created' must mean something other than 'by natural processes,' and that this only makes sense in light of a Creator who does the creating. He adds that the language of the act includes 'kinds of plants and animals', noting that 'kind' is not a scientific term and appears only in the Bible when used in the relevant context. Finally, in offering the characteristics that a theory must possess in order to count as a scientific theory, Ruse delineates the following:

1) It is guided by natural law;
2) It has to be explanatory by reference to natural law;
3) It is testable against the empirical world;
4) Its conclusions are tentative, i.e., are not necessarily the final word; and
5) It is falsifiable.16

In order for a theory to meet Ruse's condition (1), it must be explained wholly in terms of natural forces. The introduction of supernatural forces in any theory places it outside the realm of science and, according to Ruse, in the realm of religion. To meet condition (2), a theory must explain how two phenomena are related by natural law in a way that is immediate and necessary. Condition (3) is met if the theory is supported by empirical evidence, and condition (4) means that the theory is not understood as ultimate truth. It must be possible to show that a scientific theory would be false under certain conditions that could be empirically demonstrated; if this is possible, then the theory meets condition (5). In his decision in the case, Judge Overton adopted Ruse's conditions near verbatim and set a legal precedent on the meaning of science.

He ruled that creation-science does not meet the conditions for a scientific theory because it relies on the assertion that creation was sudden and came from nothing. This notion of creation depends on a supernatural force unguided by natural law, and so inexplicable in terms of natural law. As such, it also fails to be testable or falsifiable. Overton goes further to rule that creation-science does not conform to the standards of science because: (1) it makes reference to terminology such as 'kinds' and 'relatively recent inception' which have no scientific meaning, (2) it attempts to establish limits to changes within species that are not guided by natural law, (3) it simply asserts without evidence that a catastrophic flood occurred, and (4) it fails to fit into what scientists actually think and do.17 Overton recognizes that the methodology of creation-science is not to use data to infer conclusions, but that "they take the literal wording of the book of Genesis and attempt to find scientific support for it."18

In addition to ruling that creation-science did not meet the criteria of science, Overton also considered the historical context from which Act 590 arose and the purposes for which it was advanced. In the second section of his ruling, he recounts the history of creationist activism from the days of the fundamentalists to the inception of creation-science and makes note of correspondence revealing that proponents of the act were intentionally promoting a religious crusade. He also appeals to the fact that the state

16. Listing taken from Judge Overton's opinion as reprinted in Pennock & Ruse, p. 294.
17. Ibid., pp. 294-295.
18. Ibid., p. 296.


could present no evidence that the act was introduced because it had any particular educational value. For these reasons, he ruled that Act 590 violated the test of secular legislative purpose as set forth in Lemon v. Kurtzman.19 The historical account he provides comprises the bulk of this section and factors significantly into his ruling.

Since McLean, the use of the history of creationism and the accepted criteria of a scientific theory to overturn legislation advancing creationists' changing strategies has been a method invoked by judges in landmark cases. For instance, the US Supreme Court overturned a similar balanced-treatment law in Louisiana in the 1987 case, Edwards v. Aguillard. In this case, the law in question promoted religion by introducing a theory that depends on a supernatural being.20 Unsurprisingly, the creationist movement shifted strategies again, given the necessity of further distancing themselves from religion in the eyes of the court and the need to establish themselves as promoting legitimate science. Their next, and perhaps most significant, strategic shift was made possible largely due to the efforts of Phillip E. Johnson, who was at the time a law professor at UC Berkeley.

JOHNSON'S SHIFT

In addition to being a legal scholar at the time he entered the debate with the publication of his book, Darwin on Trial, Johnson was also a self-professed born-again Christian. Of course, this personal motivation likely influences his interest in challenging evolutionary theory and the arguments he uses to do so, but those who criticize his contributions in the area seem to believe that this is his only claim to be involved. He has been attacked for lacking expertise in science, but I think this is unfair. As the debate has taken place within the legal system since 1925, it seems clear that a legal scholar could have legitimate academic interest in it as well as a significant contribution to make. After all, if the debate were restricted to the community of scientists, there would likely be no debate. This fact might suggest that there is no rightful debate regarding public science education, but as public education is ruled by law, it is for the courts to decide what controversies are relevant.

Johnson's contribution that causes him to be celebrated by creationists is that he moves the debate into the realm of philosophy, where consensus on scientific data is less relevant. His key argument is that evolutionary theory is advanced within the context of a type of metaphysical commitment that he recognizes goes by many names, but most often calls naturalism. He explains his understanding of naturalism in Darwin on Trial. Here, he says:

Naturalism does not explicitly deny the mere existence of God, but it does deny that a supernatural being could in any way influence natural events, such as evolution, or communicate with natural creatures like ourselves.21

He understands this primarily as a type of philosophical naturalism and claims that it is not necessary to evolutionary theory. In fact, he believes that evolutionary theory does not directly contradict creationism, which is a distinctly new step in the creationist strategy. For Johnson, it is only when evolutionary theory is understood within the context of philosophical naturalism – and he thinks this is the context in which the

19. Ibid., pp. 283-290.
20. Pennock, 1999. p. 6.
21. Johnson, Phillip E. Darwin on Trial. Open Source: TaleBooks.com. 1991. p. 83.


scientific community at large understands it – that it becomes what he calls scientific naturalism:

Scientific naturalism makes the same point by starting with the assumption that science, which studies only the natural, is our only reliable path to knowledge. A God who can never do anything that makes a difference, and of whom we can have no reliable knowledge, is of no importance to us.22

It is the assumption that science is the only reliable path to knowledge that is problematic to Johnson. He thinks this is a type of metaphysical dogma of the same sort that religious commitments are typically accused of being. He calls it Darwinism and attributes it to the scientific community as it claims to know how complex organisms came into being in the first place. This, he thinks, is a matter of pure philosophy and is not a conclusion one is entitled to on the basis of empirical data alone.

It is not clear, though, that Johnson is correct in claiming that the scientific community purports to know how complex organisms came into being in the first place. If he means that there is general consensus on how organisms have increased in complexity in order to adapt to their changing environments, then, yes, Darwinian evolution is a theory of that and there is general consensus about it. However, that he always adds 'in the first place' to his charge suggests that perhaps an equivocation is at work here. If what he really means, and I suspect that he does, is that the scientific community has no rightful claim to know the ultimate origins of life, whether complex or otherwise, then he is likely right about that. Yet, it is not clear that this type of belief is in fact advanced by scientists. In his testimony during the McLean trial, Ruse explicitly stated that evolutionary theory "attempts to explain how life developed after it was formed. Evolutionary theory does not focus on how life began, but only on what happens to life after it began."23

Johnson's emphasis on the word 'know' also suggests a problem to me. Ruse testified that "science knows no ultimate truth not subject to revision."24 However, it is clear that Johnson is arguing that scientists believe that they are possessed of immutable truths. He claims that the most important priority of scientists is to "maintain the naturalistic worldview and with it the prestige of 'science' as the source of all important knowledge."25 To make matters worse, he argues that the community attempts to maintain its prestige by setting up and enforcing the rules of scientific methodology of the sort testified to by Ruse, then accuses Ruse of getting away with a "philosophical snow job."26 His argument for this is simply his assertion that scientists don't take their conclusions to be tentative at all and hold metaphysical commitments that are essentially dogmatic. It seems, though, that the scientific community only advances that scientific explanations must appeal to natural causes, not that meaningful explanations must appeal to natural causes.

22. Ibid., p. 83.
23. Ruse. p. 264.
24. Ibid., p. 274.
25. Johnson. p. 84.
26. Ibid., p. 81.


27. Ibid., p. 6.
28. Behe, Michael. Darwin's Black Box. New York: Free Press. 1996.


complexity and so indicate that they are the result of intelligent design. In fact, he says this revelation should be regarded as “one of the greatest achievements in the history of science” and rivals discoveries of thinkers such as Newton and Einstein.29 He defines ‘design’ as “the purposeful arrangement of parts” and argues that “the conclusion of intelligent design flows naturally from the data itself – not from sacred books or sectarian beliefs.”30 Behe’s description of evidence for the design of biochemical systems and the inability of evolutionary theory to account for it has been refuted by scientists. My concern here is his discussion of the detection of design and his claim that science refuses to admit it. Behe offers a number of examples in order to illustrate his claim that inferences to design play a regular part of our day to day existence. He begins this discussion with the following: Imagine a room in which a body lies crushed, flat as a pancake. A dozen detectives crawl around, examining the floor with magnifying glasses for any clue to the identity of the perpetrator. In the middle of the room, next to the body, stands a large, gray elephant. The detectives carefully avoid bumping into the pachyderm’s legs as they crawl, and never even glance at it. Over time the detectives get frustrated with their lack of progress but resolutely press on, looking even more closely at the floor. You see, textbooks say detectives must “get their man,” so they never consider elephants.31 Behe wants us to believe that scientists investigating the development of life are like those detectives and that the elephant represents intelligent design. Yet, his own account states that the detectives ‘never even glance’ at the elephant, which emphasizes that the elephant is there to be seen with the naked eye if only the detectives would look at it. Intelligent design is not this sort of thing. It is not a brute perception, but results from inference. 
No matter, though, because Behe goes on to describe a number of examples in which the ‘designer’ is not there to be seen, but can clearly be inferred. He asks us to imagine that we are playing a game of Scrabble, we leave the room, return, and the lettered tiles have been arranged to spell out “TAKE US OUT TO DINNER CHEAPSKATES.” Then we are to imagine that we see flowers near the student center spelling out the name of the university, a machine whose gears are set into motion by pulling a lever, and a trap made of vine hanging from a tree branch.32 In each of these cases, we do not see him, but the designer is clearly inferred. Behe thinks that these examples are analogous to intelligent design, but they are not. At some point we have had a brute experience of persons who might arrange Scrabble tiles to form words, gardeners who might arrange flowers to spell the name of a university, a machinist who might build a machine, and a trapper who might build a trap. Further, we have had some experience of the projects these people are inferred to be engaged in and of the processes they might use to complete them. If we had not had experience of such persons or if we did not have experiential knowledge of the sorts of projects Behe

29 Ibid., p. 233. 30 Ibid., p. 193. 31 Ibid., p. 192. 32 Ibid., pp. 193-195.


describes, then we probably would have no idea that the tiles spelled a sentence, that the flowers spelled the name of a university, that the pile of parts was actually a machine, or that the vine attached to a branch was a trap. Behe’s analogies break down because we have had experience not only of what the designers in these cases are – humans – but also of the way that humans work to produce designs such as language and technology. We have had no experience with what he takes to be the intelligent cause that created biochemical systems or of the processes by which such a designer might build them. Behe tries to account for this by attempting to present us with cases in which there is no direct experience of the relevant designer. He asks us to consider how archeologists are able to infer that stones they have found with pictures of camels, cats, griffins and dragons have been designed. But all he has done is make the human designer more remote to us. We are still familiar with humans, and we understand what it means for them to have existed in the past. We are certainly familiar with the images on the stones and the process by which they might be created as well. His final attempt is a reference to the movie 2001: A Space Odyssey. In it, he explains, there is a scene in which an astronaut comes across a towering monument. He says that the astronaut knows immediately that it is the work of an alien life form. He also explains that later in the movie it is revealed that there is life on Jupiter, so perhaps attributing a monument on the moon to the work of an alien was not such a leap of imagination for the astronaut. But even if the astronaut had no idea that aliens existed, what more is this than a further inference from what we have experienced? 
Not only is the monument the sort of thing that we have experienced here on earth, but it also seems clear that the monument would have been built by the same sort of processes that humans engage in to build them. Perhaps not, though. Perhaps in the movie, the alien is able to wave a magic wand and the monument appears. We would only accept that sort of process, and many of us also only accept that aliens exist, insofar as we are watching a movie, and that in itself requires that we suspend disbelief. If we weren’t engaged in the process of ‘movie watching’, we would find such representations nonsensical. And even when we have suspended disbelief, we still don’t fully grasp what kind of process ‘waving a magic wand’ involves – that’s why we call it magic.33 The point is that in all of these cases the conclusion of design does not simply follow from the data at hand. The inference to design is drawn from the data and our background knowledge of human beings, the processes they use to design and, at least in one case, our understanding of what it means to watch a movie. We have no such background knowledge whatsoever, especially no brute experience, of the sort of designer that would be required to build biochemical systems or of the process that such a designer would employ in doing so.34 In fact, the only ‘background knowledge’ – to use the terminology loosely here – that we could possibly claim toward understanding such a designer or the process of design itself is of the god concept and the miracles he is taken to perform as illustrated in a ‘sacred book.’ So Behe seems to be fundamentally wrong in

33 Behe’s analogies that I am considering here are found on p. 197. 34 A related point, I think, is made by Reginald Williams in his 2011 article, Nagel and Intelligent Design, where he says: “The problem is that it makes no sense to infer, from an unlikelihood of something’s existing independently of purposeful action, that it came to exist via such action when no purposeful action whatever has been known to engender the relevant sort of thing.” p. 40.


claiming that the conclusion of design flows from ‘the data itself’ rather than from this source. To further demonstrate this, consider the following passage: Why does the scientific community not greedily embrace its startling discovery? Why is the observation of design handled with intellectual gloves? The dilemma is that while one side of the elephant is labeled intelligent design, the other side might be labeled God.35 I don’t think it is a question of what the other side ‘might’ be labeled, as Behe suggests. The only concept we possess that by definition works in such mysterious and magnificent ways is the god concept. Further, Behe is a senior fellow at the Discovery Institute’s Center for Science and Culture, whose ‘wedge strategy’ – developed by Johnson – is to: “defeat scientific materialism and its destructive moral, cultural, and political legacies. To replace materialistic explanations with the theistic understanding that nature and human beings are created by God.”36 Given that this is the case, it is clear that he believes the designer is god as well, despite the fact that he does not go further and directly state this in his book. Another senior fellow at the Discovery Institute, William Dembski, attempts to explain how intelligent design can be recognized without simply appealing to analogy.37 According to Dembski, intelligent design can be understood as a theory of information where ‘complex specified information’ is a reliable indicator of design. A system is more or less ‘complex’ depending upon how many bits of ‘information’ it involves. For instance, the complexity of a computer program can be measured in terms of how many computational steps it involves, how much memory it occupies, or a combination of the two. Another illustration Dembski uses to explain complexity is that two copies of Hamlet are no more complex than one copy because they contain identical information. 
No additional information is added by the second copy; the information is only repeated.38 On the other hand, a system is ‘specific’ to the extent that it ‘follows a pattern’, or serves a purpose which defines its function. For instance, if an archer draws a target on the wall with a circle around it then shoots an arrow and hits the target, the information involved is specified, as it reveals the skill of the archer. If the archer had simply shot an arrow at a wall, or had shot an arrow at a wall then drawn a circle around where it landed, the information would not be specified because it would not reveal the skill of the archer.39 According to Dembski, a system is necessarily the result of intelligent design and could not result from chance if it is comprised of complex specified

35 Behe, 1996, p. 233. 36 Scott, Eugenie. The Once and Future of Intelligent Design, in Not In Our Classrooms. Eugenie Scott and Glenn Branch, eds. Boston: Beacon Press, 2006. p. 24. 37 Dembski, William. Intelligent Design as a Theory of Information, in Intelligent Design and Its Critics: Philosophical, Theological, and Scientific Perspectives. Robert T. Pennock, ed. Massachusetts: MIT Press, 2001. pp. 553-572. 38 Ibid., p. 559. 39 Ibid., pp. 560-561.


information.40 He calls this the ‘Law of Conservation of Information.’ Where we find this in living things, the human immune system, for instance, intelligent design is indicated. Pennock gives a thorough analysis of arguments offered by Dembski in Tower of Babel. In response to this particular argument, he contends that Dembski has not argued for his conclusion that complex specified information cannot result from chance, but rather that he merely asserts it. Dembski has defined complexity and specificity as independent properties. He explains that complex unspecified information can result from chance and also that noncomplex specified information can result from chance. Pennock argues that it is equally possible that chance could conjoin the two properties and produce complex specified information. For instance, Dembski offers a telephone number as an instance of complex specified information. A phone number possesses complexity and specification to the extent that it belongs to a particular individual and, when that number is dialed, that individual is reached. In response, Pennock points out that, by chance, wrong numbers do cause the person to be reached, especially in cases where the number is similar to the number of a local pizza parlor. If a phone number is complex and specified to the extent that it reaches you and only you, and you can be reached by chance, then chance can produce complex specified information.41 Pennock also responds to Johnson in Tower of Babel, thus striking at the heart of intelligent design theory.42 As he understands it, Johnson’s effort is to portray evolutionary scientists as blindly clinging to metaphysical dogma because they will not accept the possibility of a Creator, and to paint intelligent design proponents as open-minded and the more reasonable of the two. In Johnson’s view, the evidence for evolution is weak and creationism would clearly triumph over it if the dogma of naturalism were removed. 
In reply, Pennock first argues that Johnson’s argument is a version of the dual model argument, in which he attempts to rule out the other possibilities, making it a false dilemma. Johnson defines creationism broadly and tries to avoid referencing any particular view. However, as Johnson defines Darwinism as a commitment to a specific evolutionary process and as the denial of divine intervention, then proceeds to attack both, Pennock thinks he reveals his commitment to a particular brand of creationism. As Pennock explains, if he were defending only the broad view of creationism, then he would only need to attack the denial of divine intervention. Since Johnson spends much time attacking particular evolutionary processes, though, Pennock claims that he must have a “specific conflicting Creationist scenario in mind such as the one-week instantaneous creations story.”43 In replying to Johnson’s key philosophical argument, Pennock explains that there is more than one understanding of the term ‘naturalism.’ He distinguishes between ‘ontological naturalism’ and ‘methodological naturalism.’ While ontological naturalism makes the broad claim that what exists in nature is all that exists and is the position that Johnson attacks, methodological naturalism does not make this strong commitment. Rather, methodological naturalism is a view about what methods are reliable for investigating the world that leaves claims about supernatural entities as open questions.44

40 Ibid., p. 570. 41 Pennock, 1999, pp. 256-257. 42 Ibid., pp. 185-214. 43 Ibid., p. 201. 44 Ibid., pp. 190-191.


Pennock argues that evolutionary biologists have no necessary reason to be committed to ontological naturalism, even though methodological naturalism is characteristic of their field. Without methodological naturalism, there is no assumption of lawful regularity to constrain scientific research. As such, induction cannot produce reliable results because there is no possibility for controlled experimentation.45 In other words, if scientists cannot rely upon set laws of nature, then there is not much they can conclude from their observations. Therefore, far from being a dogmatic metaphysical commitment, methodological naturalism is an indispensable framework for scientific research.

MILLER’S COMPATIBILISM

While there are a number of scientists who are methodological naturalists as well as ontological naturalists, there is no necessary connection between the two. For instance, Kenneth Miller is a professor of biology at Brown University and a self-professed Catholic who explains the consistency of evolutionary biology and Catholicism in his book, Finding Darwin’s God.46 Miller contends that the findings of science are a testament to the greatness of god and, as such, finds compatibility between faith and reason. He therefore rejects the dual model approach because he finds truth in evolution and truth in religion. He also has an important reason for rejecting the dual model approach that creationists would be wise to accept as their own. He says: By arguing, as they have repeatedly, that nature cannot be self-sufficient in the formation of new species, the creationists forge a link between the limits of natural processes to accomplish biological change and the existence of a designer (God). In other words, they show proponents of atheism exactly how to disprove the existence of God – show that evolution works, and it’s time to tear down the temple. 
As we have seen, this is an offer that enemies of religion are all too happy to accept.47 He thinks this strategy works against the creationist because science has adequately explained things that were previously thought to be the work of god, and evidence suggests natural phenomena will have naturalistic explanations. Miller’s view is that evidence for god should not be sought in science’s lack of ability to explain a particular phenomenon, but that god should be understood as using natural processes to create and maintain the universe. He thinks it is a greater conception of god to believe that he created a universe that is self-sufficient than to think that he created a “creaky little machine requiring constant and visible attention.”48 In his commitment to a self-sufficient universe and so to naturalistic explanations for it, he embraces randomness. This is necessary in order for his view to be consistent with accepting the random variation that ultimately results in the exact species that in fact exist. He acknowledges that this is problematic for creationists because it suggests that humans might not have existed and the Bible states that man was created in the image of

45 Ibid., pp. 194-195. 46 Miller, Kenneth. Finding Darwin’s God. New York: Harper Collins, 1999. 47 Ibid., p. 266. 48 Ibid., p. 268.


Ibid., p. 274.


Ibid., p. 279. Ibid., p. 279.


52 My discussion here is informed by Gordy Slack’s personal account of the trial and the circumstances surrounding it as represented in his book, The Battle Over the Meaning of Everything: Evolution, Intelligent Design, and a School Board in Dover, PA. San Francisco: Jossey-Bass, 2007, and also from Pennock and Ruse, 1999, pp. 434-455. 53 Slack, 2007, p. 40.


54 Pennock and Ruse, 2009, p. 455. 55 Ibid., pp. 506-535.


considerations and the testimony on whether or not intelligent design is science supported Jones’ decision that the school board violated the Establishment Clause. He addressed the question of science, in his words, “in the hope that it may prevent the obvious waste of judicial and other resources which would be occasioned by a subsequent trial involving precisely the question which is before us.”56 Jones ruled that intelligent design was not science because it invoked supernatural explanations, because it employed the same dual model argument that was invoked by creation-science, and because it and its attacks on evolutionary theory were not accepted by the scientific community.

56 Ibid., p. 517.


Generalized Pearson System of Probability Distributions Dr. Mohammad Shakil Department of Mathematics Miami Dade College, Hialeah Campus Hialeah, FL 33012, USA E-mail: mshakil@mdc.edu Abstract In recent years, many researchers have considered a generalization of the Pearson system, known as the generalized Pearson system of probability distributions. In this paper, we have reviewed these new classes of continuous probability distributions, which can be generated from the generalized Pearson system of differential equation. We have identified as many as ten such distributions. It is hoped that the proposed attempt will be helpful in designing

a new approach of unifying different families of distributions based on the generalized Pearson differential equation. Keywords: Generalized Pearson differential equation, generalized Pearson system of probability distributions.

1. Introduction: Pearson System of Distributions

A continuous distribution belongs to the Pearson system if its pdf (probability density function) f satisfies a differential equation of the form

\frac{1}{f(x)} \frac{df_X(x)}{dx} = \frac{x - a}{b x^2 + c x + d},    (1)

where a, b, c, and d are real parameters such that f is a pdf. The shapes of the pdf depend on the values of these parameters, based on which Pearson (1895, 1901) classified these distributions into a number of types known as Pearson Types I – VI. Later, in another paper, Pearson (1916) defined more special cases and subtypes, known as Pearson Types VII – XII. Many well-known distributions are special cases of the Pearson type distributions, which include the Normal and Student’s t distributions (Pearson Type VII), the Beta distribution (Pearson Type I), and the Gamma distribution (Pearson Type III), among others. For details on the Pearson system of continuous probability distributions, the interested readers are referred to Johnson et al. (1994).
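As a concrete illustration (added here for exposition, not taken from Pearson or from this paper), read (1) as (1/f(x)) df_X(x)/dx = (x − a)/(bx² + cx + d). The unit-scale gamma density, a member of Pearson Type III, then satisfies (1) with a = k − 1, b = 0, c = −1, d = 0, since its logarithmic derivative is (k − 1)/x − 1. A short Python check by finite differences:

```python
import math

# Illustration (not from the paper): the unit-scale gamma density
#   f(x) = x**(k-1) * exp(-x) / Gamma(k)
# satisfies the Pearson equation (1/f) df/dx = (x - a)/(b x^2 + c x + d)
# with a = k - 1, b = 0, c = -1, d = 0 (Pearson Type III).
k = 3.5
a, b, c, d = k - 1.0, 0.0, -1.0, 0.0

def f(x):
    return x ** (k - 1) * math.exp(-x) / math.gamma(k)

def log_deriv(x, h=1e-6):
    # central finite difference of log f
    return (math.log(f(x + h)) - math.log(f(x - h))) / (2.0 * h)

for x in [0.5, 1.0, 2.0, 5.0]:
    rhs = (x - a) / (b * x ** 2 + c * x + d)
    assert abs(log_deriv(x) - rhs) < 1e-4
```

The same check applies to any classical Pearson type by swapping in a different density f and the corresponding four constants.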

2. Generalized Pearson System of Distributions

In recent years, many researchers have considered a generalization of (1), known as the generalized Pearson system of differential equation (GPE), given by

\frac{1}{f(x)} \frac{df_X(x)}{dx} = \frac{\sum_{j=0}^{m} a_j x^j}{\sum_{j=0}^{n} b_j x^j},    (2)

where m, n \in \mathbb{N} \setminus \{0\} and the coefficients a_j and b_j are real parameters. The system of continuous univariate pdfs generated by the GPE is called a generalized Pearson system, which includes a vast majority of continuous pdfs by proper choices of these parameters. For example:

(i) Roy (1971) studied the GPE, when m = 2, n = 3, b_0 = 0, to derive five frequency curves whose parameters depend on the first seven population moments.

(ii) Dunning and Hanson (1977) used the GPE in their paper on generalized Pearson distributions and nonlinear programming.

(iii) Cobb et al. (1983) extended Pearson’s class of distributions to generate multimodal distributions by taking the polynomial in the numerator of the GPE to be of degree higher than one and the denominator, say v(x), to have one of the following forms:

(I) v(x) = 1, -\infty < x < \infty;
(II) v(x) = x, 0 < x < \infty;
(III) v(x) = x^2, 0 < x < \infty;
(IV) v(x) = x(1 - x), 0 < x < 1.

(iv) Chaudhry and Ahmad (1993) studied another class of generalized Pearson distributions when m = 4, n = 3, b_0 = b_1 = b_2 = 0, b_3 \neq 0, with appropriate sign restrictions on the ratios a_0 / (2 b_3) and a_4 / (2 b_3).

(v) Rossani and Scarfone (2009) have studied the GPE in the following form

\frac{1}{f(x)} \frac{df_X(x)}{dx} = \frac{a_0 + a_1 x + a_2 x^2}{b_0 + b_1 x + b_2 x^2},

and used it to generate generalized Pearson distributions in order to study charged particles interacting with an electric and/or a magnetic field.
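All of the examples above share one mechanic: integrate the right-hand side of (2) to obtain log f up to a constant, exponentiate, and normalize. The following sketch (added for illustration; the coefficients are chosen by us so that the GPE reduces to (1/f) df/dx = −x, whose solution is the standard normal density) carries this out numerically with SciPy:

```python
import numpy as np
from scipy.integrate import quad

# Added sketch: build a density from the GPE (2) by numerically
# integrating (1/f) df/dx = sum_j a_j x^j / sum_j b_j x^j.
# Illustrative coefficients: numerator -x, denominator 1, so the
# solution should be the standard normal density.
a_coef = [0.0, -1.0]
b_coef = [1.0]

def rhs(x):
    num = sum(a * x ** j for j, a in enumerate(a_coef))
    den = sum(b * x ** j for j, b in enumerate(b_coef))
    return num / den

def log_f(x, x0=0.0):
    # log f up to an additive constant, obtained by integrating the GPE
    val, _ = quad(rhs, x0, x)
    return val

Z, _ = quad(lambda x: np.exp(log_f(x)), -np.inf, np.inf)  # normalizer
pdf = lambda x: np.exp(log_f(x)) / Z

assert abs(Z - np.sqrt(2.0 * np.pi)) < 1e-6   # matches the N(0, 1) constant
total, _ = quad(pdf, -np.inf, np.inf)
assert abs(total - 1.0) < 1e-6
```

Changing `a_coef` and `b_coef` reproduces, in principle, any of the families reviewed below, as long as the resulting kernel is integrable.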

3. Some Recently Developed Generalized Pearson Systems of Distributions

In what follows, we provide a brief description of some new classes of distributions generated as the solutions of the generalized Pearson system of differential equation (GPE) (2). Shakil et al. (2010a) defined a new class of generalized Pearson distributions based on the following differential equation

\frac{df_X(x)}{dx} = \left( \frac{a_0 + a_1 x + a_2 x^2}{b_1 x} \right) f_X(x), \quad b_1 \neq 0,    (3)

which is a special case of the GPE (2) when m = 2, n = 1, and b_0 = 0. The solution to the differential equation (3) is given by

f_X(x) = C x^{\nu} \exp\left( -\alpha x^2 - \beta x \right), \quad \alpha > 0, \; \beta > 0, \; \nu > 0, \; x > 0,    (4)

where \alpha = -\frac{a_2}{2 b_1}, \; \beta = -\frac{a_1}{b_1}, \; \nu = \frac{a_0}{b_1}, \; b_1 \neq 0, and C is the normalizing constant given by

C = \frac{(2\alpha)^{(\nu + 1)/2}}{\Gamma(\nu + 1) \, \exp\left( \beta^2 / (8\alpha) \right) \, D_{-(\nu + 1)}\left( \beta / \sqrt{2\alpha} \right)},    (5)

where D_p(z) denotes the parabolic cylinder function. The possible shapes of the pdf f (4) are provided for some selected values of the parameters in Figure 1. It is clear from Figure 1 that the distributions of the random variable X are positively (that is, right) skewed and unimodal.

Figure 1: PDF plots of X for (a) \nu = 1, \beta = 0.5, \alpha = 0.2, 0.5, 1, 2 (left), and (b) \nu = 1, \alpha = 1, \beta = 0.2, 0.5, 1, 2 (right).
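As a numerical aside (added here, not from Shakil et al., 2010a), the normalizing constant in (5) can be sanity-checked without evaluating the parabolic cylinder function: compute the integral of x^ν exp(−αx² − βx) by quadrature and invert it. The parameter values below are illustrative; the positive third central moment also confirms the right-skewness claimed for this family:

```python
import numpy as np
from scipy.integrate import quad

# Reading (4) as f(x) = C * x**nu * exp(-alpha*x**2 - beta*x) on x > 0.
# Illustrative parameter values (ours, not the paper's):
alpha, beta, nu = 1.0, 0.2, 1.0

kernel = lambda x: x ** nu * np.exp(-alpha * x ** 2 - beta * x)
Z, _ = quad(kernel, 0, np.inf)
C = 1.0 / Z   # numerical stand-in for the closed form (5), which uses the
              # parabolic cylinder function D_{-(nu+1)}

pdf = lambda x: C * kernel(x)
total, _ = quad(pdf, 0, np.inf)
assert abs(total - 1.0) < 1e-6

# The text describes this family as right skewed; the third central
# moment should therefore be positive.
m1, _ = quad(lambda x: x * pdf(x), 0, np.inf)
m3, _ = quad(lambda x: (x - m1) ** 3 * pdf(x), 0, np.inf)
assert m3 > 0
```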

Shakil and Kibria (2010) consider the GPE (2) in the following form

\frac{df_X(x)}{dx} = \left( \frac{a_0 + a_p x^p}{b_1 x + b_{p+1} x^{p+1}} \right) f_X(x), \quad b_1 \neq 0, \; b_{p+1} \neq 0, \; x > 0,    (6)

when m = p, n = p + 1, a_1 = a_2 = \cdots = a_{p-1} = 0, and b_0 = b_2 = \cdots = b_p = 0. The solution to the differential equation (6) is given by

f_X(x) = C x^{\nu - 1} \left( \alpha + \beta x^p \right)^{-\delta}, \quad x > 0, \; \alpha > 0, \; \beta > 0, \; \nu > 0, \; \delta > 0, \; p > 0,    (7)

where \alpha = b_1, \; \beta = b_{p+1}, \; \nu = \frac{a_0 + b_1}{b_1}, \; \delta = \frac{a_0 b_{p+1} - a_p b_1}{p \, b_1 \, b_{p+1}}, \; b_1 \neq 0, \; b_{p+1} \neq 0, and C is the normalizing constant given by

C = \frac{p \, \beta^{\nu/p} \, \alpha^{\delta - \nu/p}}{B\left( \frac{\nu}{p}, \; \delta - \frac{\nu}{p} \right)},    (8)

where B(\cdot, \cdot) denotes the beta function. By the definition of the beta function, the parameters in (8) should be chosen such that \delta > \frac{\nu}{p}. The possible shapes of the pdf f (7) are provided for some selected values of the parameters in Figure 2 (a, b) below. From these graphs, it is evident that the distribution of the RV X is right skewed.

Figure 2: PDF plots of X for (a) \alpha = 1, \beta = 1, \nu = 2, \delta = 2, p = 2, 4, 5, 8 (left); and (b) \alpha = 1, \beta = 1, \nu = 2, p = 3, \delta = 2, 2.5, 4, 5 (right).
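A quick quadrature check of the beta-function normalizing constant in (8) (an added illustration with parameters of our choosing, satisfying the stated constraint δ > ν/p):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta as beta_fn

# Reading (7)-(8) as f(x) = C * x**(nu-1) * (alpha + beta*x**p)**(-delta)
# with C = p * beta**(nu/p) * alpha**(delta - nu/p) / B(nu/p, delta - nu/p).
# Illustrative parameters (ours), chosen so that delta > nu/p holds:
alpha, beta, nu, delta, p = 1.0, 1.0, 2.0, 2.0, 2.0

C = p * beta ** (nu / p) * alpha ** (delta - nu / p) / beta_fn(nu / p, delta - nu / p)
pdf = lambda x: C * x ** (nu - 1) * (alpha + beta * x ** p) ** (-delta)

total, _ = quad(pdf, 0, np.inf)
assert abs(total - 1.0) < 1e-6

half, _ = quad(pdf, 0, 1.0)   # for these values the density is 2x(1+x^2)^-2
assert abs(half - 0.5) < 1e-6  # so its median sits at x = 1
```

With α = β = 1 and ν = δ = p = 2, the density reduces to 2x(1 + x²)⁻², for which both the normalization and the median are available in closed form, which makes the check exact.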

Shakil, Kibria and Singh (2010b) consider the GPE (2) in the following form

\frac{df_X(x)}{dx} = \left( \frac{a_0 + a_p x^p + a_{2p} x^{2p}}{b_{p+1} x^{p+1}} \right) f_X(x), \quad b_{p+1} \neq 0, \; x > 0,    (9)

where m = 2p, n = p + 1, a_1 = a_2 = \cdots = a_{p-1} = a_{p+1} = \cdots = a_{2p-1} = 0, and b_0 = b_1 = b_2 = \cdots = b_p = 0. The solution to the differential equation (9) is given by

f_X(x) = C x^{\delta - 1} \exp\left( -\alpha x^p - \beta x^{-p} \right), \quad x > 0, \; \alpha > 0, \; \beta > 0, \; -\infty < \delta < \infty,    (10)

where \alpha = -\frac{a_{2p}}{p \, b_{p+1}}, \; \beta = \frac{a_0}{p \, b_{p+1}}, \; \delta = \frac{a_p + b_{p+1}}{b_{p+1}}, \; b_{p+1} \neq 0, \; p > 0, and C is the normalizing constant given by

C = \frac{p \left( \alpha / \beta \right)^{\delta/(2p)}}{2 \, K_{\delta/p}\left( 2\sqrt{\alpha\beta} \right)},    (11)

where K_{\delta/p}(\cdot) denotes the modified Bessel function of the third kind. The possible shapes of the pdf f (10) are provided for some selected values of the parameters in Figure 3 (a, b). It is clear from Figure 3 (a, b) that the distributions of the random variable X are positively (that is, right) skewed with longer and heavier right tails.

Figure 3: PDF plots of X for (a) \alpha = 1, \beta = 1, \delta = 0, p = 1, 2, 3, 4 (left), and (b) \alpha = 1, \beta = 0.5, p = 1, \delta = -1, 0, 1, 2 (right).
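The Bessel-function constant in (11) can be verified the same way; SciPy's `kv` computes K, the modified Bessel function of the second kind (the function the text calls the modified Bessel function of the third kind). This is an added check with illustrative parameters, not code from the paper:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv   # modified Bessel function K

# Reading (10)-(11) as f(x) = C * x**(delta-1) * exp(-alpha*x**p - beta*x**(-p))
# with C = (p/2) * (alpha/beta)**(delta/(2p)) / K_{delta/p}(2*sqrt(alpha*beta)).
# Illustrative parameters (ours):
alpha, beta, delta, p = 1.0, 1.0, 1.0, 1.0

C = (p / 2.0) * (alpha / beta) ** (delta / (2 * p)) / kv(delta / p, 2.0 * np.sqrt(alpha * beta))
pdf = lambda x: C * x ** (delta - 1) * np.exp(-alpha * x ** p - beta * x ** (-p))

total, _ = quad(pdf, 0, np.inf)
assert abs(total - 1.0) < 1e-6
```

For these values the kernel is exp(−x − 1/x), whose integral 2K₁(2) is the classical Bessel integral, so the quadrature and the closed form must agree.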

4. Hamedani’s Generalized Pearson System of Distributions

Hamedani (2011) has defined a new variation of the SKS continuous probability distribution given in (10) on a bounded domain. The pdf of this distribution is given by

f(x) = C p x^{-(p+1)} \left( \alpha - \beta x^{2p} \right) \exp\left( -\alpha x^{-p} - \beta x^{p} \right), \quad 0 < x \le \left( \alpha/\beta \right)^{1/(2p)},    (12)

where \alpha > 0, \beta > 0, and p > 0 are parameters and C = \exp\left( 2\sqrt{\alpha\beta} \right) is the normalizing constant. The cdf corresponding to the pdf (12) is given by

F(x) = C \exp\left( -\alpha x^{-p} - \beta x^{p} \right), \quad 0 < x \le \left( \alpha/\beta \right)^{1/(2p)}.    (13)

For the special case of \alpha = \beta, we have

f(x) = \beta p \, e^{2\beta} \, x^{-(p+1)} \left( 1 - x^{2p} \right) \exp\left( -\beta \left( x^{-p} + x^{p} \right) \right), \quad 0 < x \le 1,    (14)

where \beta > 0 and p > 0 are parameters. It is easy to see that the pdf f given by (12) satisfies the following differential equation

\frac{1}{f(x)} \frac{df(x)}{dx} = \frac{\alpha^2 p - \alpha (p+1) x^{p} - 2\alpha\beta p \, x^{2p} - \beta (p-1) x^{3p} + \beta^2 p \, x^{4p}}{\alpha x^{p+1} - \beta x^{3p+1}},

which is a special case of the GPE (2). For characterizations of the pdf (12) when p \in \mathbb{N} \setminus \{0\}, the interested readers are referred to Hamedani (2011). For the special case of \alpha = \beta, the possible shapes of the pdf f (14) are provided for the selected values of the parameters \alpha = \beta = 0.2, 0.5, 1, 2, for p = 1, 2, 5, in Figure 4 (a, b, c). The effects of the parameters can easily be seen from these graphs. For example, it is clear from the plotted Figure 4 (a, b, c) of the pdf that the newly proposed probability density function is unimodal. Also, for some selected values of the parameters, the distributions of the random variable X are both right and left skewed, whereas, for (i) \alpha = \beta = 2, p = 1, (ii) \alpha = \beta = 0.5, p = 2, and (iii) \alpha = \beta = 0.2, p = 5, the distributions appear to be symmetric.

Figure 4: PDF plots for \alpha = \beta = 0.2, 0.5, 1, 2, for p = 1, 2, 5, respectively.
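One consistent reading of (12)–(13) places the density on the bounded support 0 < x ≤ (α/β)^{1/(2p)} with C = exp(2√(αβ)); under that reading, both the normalization of the pdf and the value F = 1 at the upper endpoint can be checked numerically (an added sketch, with parameters of our choosing):

```python
import numpy as np
from scipy.integrate import quad

# Hamedani-style bounded variant, read as
#   f(x) = C*p*x**(-(p+1))*(alpha - beta*x**(2p))*exp(-alpha*x**(-p) - beta*x**p)
#   F(x) = C*exp(-alpha*x**(-p) - beta*x**p),  0 < x <= (alpha/beta)**(1/(2p)),
# with C = exp(2*sqrt(alpha*beta)).  Illustrative parameters:
alpha, beta, p = 1.0, 0.5, 2.0
C = np.exp(2.0 * np.sqrt(alpha * beta))
xmax = (alpha / beta) ** (1.0 / (2 * p))

pdf = lambda x: (C * p * x ** (-(p + 1)) * (alpha - beta * x ** (2 * p))
                 * np.exp(-alpha * x ** (-p) - beta * x ** p))
cdf = lambda x: C * np.exp(-alpha * x ** (-p) - beta * x ** p)

total, _ = quad(pdf, 0, xmax)
assert abs(total - 1.0) < 1e-6        # pdf integrates to one on its support
assert abs(cdf(xmax) - 1.0) < 1e-9    # cdf reaches one at the upper endpoint
```

Note that the factor (α − βx^{2p}) vanishes exactly at the upper endpoint, so the density falls to zero there, which is consistent with the unimodal shapes the text describes.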

5. Ahsanullah, Shakil and Kibria’s Generalized Pearson System of Distributions

Recently, Ahsanullah, Shakil and Kibria (2013) defined a new class of distributions as solutions of the GPE (2). They considered the following differential equation

\frac{df_X(x)}{dx} = \left( \frac{a_1 + a_2 x + a_3 x^2}{b_3 x^2 + b_4 x^3} \right) f_X(x),    (15)

which is a special case of the generalized Pearson equation (2) when m = 2, n = 3. Putting b_3 = \gamma, b_4 = 1, a_1 = \beta\gamma, a_2 = \beta - \gamma + \gamma\nu, a_3 = \nu + \mu - 2, x > 0, in (15), we have

\frac{1}{f_X(x)} \frac{df_X(x)}{dx} = \frac{\beta\gamma + (\beta - \gamma + \gamma\nu) x + (\nu + \mu - 2) x^2}{\gamma x^2 + x^3} = \frac{\nu - 1}{x} + \frac{\mu - 1}{x + \gamma} + \frac{\beta}{x^2},

where we assume that \beta > 0, \gamma > 0, 0 < \nu < 1, 0 < \mu < 1, 1 - \mu > \nu > 0. Integrating the above equation, we have

f_X(x) = C x^{\nu - 1} \left( x + \gamma \right)^{\mu - 1} \exp\left( -\beta x^{-1} \right), \quad 0 < x < \infty.    (16)

Using equation (3.471.7), page 340, of Gradshteyn and Ryzhik (1980), we easily obtain the following normalizing constant:

C = \left[ \gamma^{(\nu + 2\mu - 1)/2} \, \beta^{(\nu - 1)/2} \, \Gamma(1 - \nu - \mu) \, \exp\left( \frac{\beta}{2\gamma} \right) W_{\frac{2\mu + \nu - 1}{2}, \, -\frac{\nu}{2}}\left( \frac{\beta}{\gamma} \right) \right]^{-1},    (17)

where W_{\kappa, \mu}(\cdot) denotes the Whittaker function, which is defined as a solution of the following differential equation:

\frac{d^2 W}{dx^2} + \left( -\frac{1}{4} + \frac{\kappa}{x} + \frac{1/4 - \mu^2}{x^2} \right) W = 0

(see, for details, Abramowitz, M., and Stegun, I. A., eds., Handbook of Mathematical Functions, Chapter 13, Dover Publications, New York, 1970, page 505). The possible shapes of the pdf f(x) as given in (16) are provided for some selected values of the parameters in Figure 5 (a, b). It is clear from Figure 5 (a, b) that the newly proposed distribution is right skewed, and the effects of the parameters can easily be seen from these graphs.

Figure 5(a): PDF for \mu = 0.1, 0.3, 0.5, 0.7 when \beta = 2, \gamma = 1, \nu = 0.2. Figure 5(b): PDF for \beta = 1, 3, 5, 7 when \gamma = 3, \nu = 0.3, \mu = 0.4.
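The normalizing integral behind (16)–(17) can also be written with Tricomi's confluent hypergeometric function U, which is equivalent to the Whittaker-W form and is available in SciPy as `hyperu`: for ν + µ < 1, the integral of x^{ν−1}(x + γ)^{µ−1} exp(−β/x) over (0, ∞) equals γ^{ν+µ−1} Γ(1 − ν − µ) U(1 − ν − µ, 1 − ν, β/γ). The restatement and the parameter values below are ours, added as a numerical cross-check, not taken from Ahsanullah et al.:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, hyperu

# Cross-check of the normalizing integral of (16) using Tricomi's U
# (an equivalent of the Whittaker-function form in (17)):
#   integral = g**(nu+mu-1) * Gamma(1-nu-mu) * U(1-nu-mu, 1-nu, b/g),
# valid when 0 < nu, 0 < mu, nu + mu < 1, as assumed in the text.
# Illustrative parameters (ours):
nu, mu, b, g = 0.3, 0.3, 2.0, 1.0

closed = g ** (nu + mu - 1) * gamma(1 - nu - mu) * hyperu(1 - nu - mu, 1 - nu, b / g)
numeric, _ = quad(lambda x: x ** (nu - 1) * (x + g) ** (mu - 1) * np.exp(-b / x),
                  0, np.inf)
assert closed > 0
assert abs(numeric - closed) / closed < 1e-4
```

The inverse of this quantity is the normalizing constant C of (16), so the check confirms that the density integrates to one under the stated parameter restrictions.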

6. Concluding Remarks

In this paper, we have reviewed some new classes of continuous probability distributions which can be generated from the generalized Pearson system of differential equation. It is hoped that the proposed attempt will be helpful in designing a new approach of unifying different families of distributions based on the generalized Pearson differential equation.

References

Abramowitz, M., and Stegun, I. A. (1970). Handbook of Mathematical Functions, with Formulas, Graphs, and Mathematical Tables. Dover, New York.

Ahsanullah, M., Shakil, M., and Kibria, B. M. G. (2013). On a probability distribution with fractional moments arising from generalized Pearson system of differential equation and its characterization. International Journal of Advanced Statistics and Probability, 1 (3), 132-141.

Chaudhry, M. A., and Ahmad, M. (1993). On a probability function useful in size modeling. Canadian Journal of Forest Research, 23 (8), 1679-1683.

Cobb, L., Koppstein, P., and Chen, N. H. (1983). Estimation and moment recursion relations for multimodal distributions of the exponential family. Journal of the American Statistical Association, 78 (381), 124-130.

Dunning, K., and Hanson, J. N. (1977). Generalized Pearson distributions and nonlinear programming. Journal of Statistical Computation and Simulation, 6 (2), 115-128.

Gradshteyn, I. S., and Ryzhik, I. M. (1980). Table of Integrals, Series, and Products. Academic Press, San Diego.

Hamedani, G. G. (2011). Characterizations of the Shakil-Kibria-Singh distribution. Austrian Journal of Statistics, 40 (3), 201-207.

Johnson, N. L., Kotz, S., and Balakrishnan, N. (1994). Continuous Univariate Distributions, Volume 1 (second edition). John Wiley & Sons, New York.

Pearson, K. (1895). Contributions to the mathematical theory of evolution, II: Skew variation in homogeneous material. Philosophical Transactions of the Royal Society of London, A186, 343-414.

Pearson, K. (1901). Mathematical contributions to the theory of evolution, X: Supplement to a memoir on skew variation. Philosophical Transactions of the Royal Society of London, Series A, 197, 443-459.

Pearson, K. (1916). Mathematical contributions to the theory of evolution, XIX: Second supplement to a memoir on skew variation. Philosophical Transactions of the Royal Society of London, Series A, 216, 429-457.

Rossani, A., and Scarfone, A. M. (2009). Generalized Pearson distributions for charged particles interacting with an electric and/or a magnetic field. Physica A, 388, 2354-2366.

Roy, L. K. (1971). An extension of the Pearson system of frequency curves. Trabajos de Estadistica y de Investigacion Operativa, 22 (1-2), 113-123.

Shakil, M., and Kibria, B. M. G. (2010). On a family of life distributions based on generalized Pearson differential equation with applications in health statistics. Journal of Statistical Theory and Applications, 9 (2), 255-282.

Shakil, M., Singh, J. N., and Kibria, B. M. G. (2010a). On a family of product distributions based on the Whittaker functions and generalized Pearson differential equation. Pakistan Journal of Statistics, 26 (1), 111-125.

Shakil, M., Kibria, B. M. G., and Singh, J. N. (2010b). A new family of distributions based on the generalized Pearson differential equation with some applications. Austrian Journal of Statistics, 39 (3), 259-278.


Polygon 2014

Polygon is Hialeah Campus’ multi-disciplinary online journal, featuring the academic work of our distinguished faculty and staff.