TABLE 6.5   Weights Before and After Mutation

Weight    Before    Shock    After
W1A         0.1     None      0.1
W1B        −0.4    −0.05     −0.45
W2A         0.7     None      0.7
W2B        −0.5    −0.07     −0.57
W3A         0.4     None      0.4
W3B         0.7     0.02      0.72
W0A        −0.1     None     −0.1
W0B         0.1     None      0.1
WAZ        −0.6     None     −0.6
WBZ         0.9     None      0.9
W0Z        −0.3     None     −0.3
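
To make the mutation step shown in Table 6.5 concrete, the sketch below applies a small random shock to a selected subset of a chromosome of network weights and leaves the remaining weights untouched. The class and method names, the uniform shock distribution, and the maximum shock size of 0.1 are illustrative assumptions; the text specifies only the particular shocks listed in the table.

    import java.util.Random;

    /**
     * Minimal sketch (not from the text) of the mutation operator in Table 6.5:
     * a small random "shock" is added to a chosen subset of the chromosome's
     * weights, here the weights incoming to node B, while all other weights
     * are left unchanged.
     */
    public class WeightMutation {

        private static final Random RNG = new Random();

        /** Adds a uniform shock in [-maxShock, +maxShock] to the weights at the given indices. */
        public static double[] mutate(double[] weights, int[] indicesToShock, double maxShock) {
            double[] mutated = weights.clone();
            for (int i : indicesToShock) {
                double shock = (2.0 * RNG.nextDouble() - 1.0) * maxShock;
                mutated[i] += shock;
            }
            return mutated;
        }

        public static void main(String[] args) {
            // Chromosome ordering follows Table 6.5:
            // W1A, W1B, W2A, W2B, W3A, W3B, W0A, W0B, WAZ, WBZ, W0Z
            double[] before = { 0.1, -0.4, 0.7, -0.5, 0.4, 0.7, -0.1, 0.1, -0.6, 0.9, -0.3 };
            int[] incomingToNodeB = { 1, 3, 5 };   // positions of W1B, W2B, W3B
            double[] after = mutate(before, incomingToNodeB, 0.1);
            System.out.println(java.util.Arrays.toString(after));
        }
    }

Because the shocks are drawn at random, a particular run will not reproduce the exact values −0.05, −0.07, and 0.02 from the table, but only the selected weights W1B, W2B, and W3B will change.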

The results in the Classifier output window show that naive Bayes achieves a very impressive 96.34% (658/683) classification accuracy, which leaves little room for improvement. Do you suppose that all nine attributes are equally important to the classification task? Is there perhaps a subset of the nine attributes that, when selected as input to naive Bayes, leads to improved classification accuracy? Before answering these questions, let's review WEKA's approach to attribute selection. It is not unusual for real-world data sets to contain irrelevant, redundant, or noisy attributes, which ultimately degrade classification accuracy. Conversely, removing such nonrelevant attributes often improves classification accuracy. WEKA's supervised attribute selection filter enables a combination of evaluation and search methods to be specified, with the objective of determining a useful subset of attributes to serve as input to a learning scheme.
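
As a rough sketch of how this filter can be invoked programmatically (the hands-on analysis in this chapter works through the WEKA Explorer), the Java fragment below pairs an evaluation method with a search method and then re-runs naive Bayes on the reduced attribute set. The file name, the CfsSubsetEval/BestFirst pairing, and the cross-validation settings are assumptions chosen for illustration; any evaluator and search method WEKA supports, including a genetic search where available, can be substituted.

    import weka.attributeSelection.BestFirst;
    import weka.attributeSelection.CfsSubsetEval;
    import weka.classifiers.Evaluation;
    import weka.classifiers.bayes.NaiveBayes;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;
    import weka.filters.Filter;
    import weka.filters.supervised.attribute.AttributeSelection;

    public class AttributeSelectionSketch {
        public static void main(String[] args) throws Exception {
            // Load the data set; the file name here is a placeholder.
            Instances data = DataSource.read("breast-cancer-wisconsin.arff");
            data.setClassIndex(data.numAttributes() - 1);

            // Supervised attribute selection: combine an evaluation method with a search method.
            AttributeSelection filter = new AttributeSelection();
            filter.setEvaluator(new CfsSubsetEval());   // evaluation method
            filter.setSearch(new BestFirst());          // search method
            filter.setInputFormat(data);
            Instances reduced = Filter.useFilter(data, filter);

            // Re-run naive Bayes on the reduced attribute subset.
            Evaluation eval = new Evaluation(reduced);
            eval.crossValidateModel(new NaiveBayes(), reduced, 10, new java.util.Random(1));
            System.out.printf("Accuracy on reduced attribute set: %.2f%%%n", eval.pctCorrect());
        }
    }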

Figure 6.7   Mutation in neural network weights. [Two network diagrams over nodes 1, 2, 3, A, B, and Z: the network before mutation, with the weights given in the Before column of Table 6.5, and the network after mutation of the weights incoming to node B, in which W1B = −0.45, W2B = −0.57, and W3B = 0.72 while all other weights are unchanged.]

