
Neutrosophic Compound Orthogonal Neural Network and Its Applications in Neutrosophic Function Approximation

Jun Ye * and Wenhua Cui

Department of Electrical and Information Engineering, Shaoxing University, 508 Huancheng West Road, Shaoxing 312000, China; wenhuacui@usx.edu.cn

* Correspondence: yehjun@aliyun.com; Tel.: +86-575-88327323

Received: 16 January 2019; Accepted: 28 January 2019; Published: 29 January 2019

Symmetry 2019, 11, 147; doi:10.3390/sym11020147 www.mdpi.com/journal/symmetry

Abstract: Neural networks are powerful universal approximation tools. They have been utilized for function/data approximation, classification, pattern recognition, as well as their various applications. Uncertain or interval values result from the incompleteness of measurements, human observation, and estimations in the real world. Thus, a neutrosophic number (NsN) can represent both certain and uncertain information in an indeterminate setting and imply a changeable interval depending on its indeterminate ranges. In NsN settings, however, existing interval neural networks cannot deal with uncertain problems involving NsNs. Therefore, this original study proposes a neutrosophic compound orthogonal neural network (NCONN) for the first time, containing NsN weight values, NsN input and output, and hidden layer neutrosophic neuron functions, to approximate neutrosophic functions/NsN data. In the proposed NCONN model, single input and single output neurons are the transmission nodes of NsN data, and the hidden layer neutrosophic neurons are constructed by the compound functions of both the Chebyshev neutrosophic orthogonal polynomial and the neutrosophic sigmoid function. In addition, illustrative and actual examples are provided to verify the effectiveness and learning performance of the proposed NCONN model for approximating neutrosophic nonlinear functions and NsN data. The contribution of this study is that the proposed NCONN can handle the approximation problems of neutrosophic nonlinear functions and NsN data. Its main advantage is that the proposed NCONN implies a simple learning algorithm, higher speed learning convergence, and higher learning accuracy in indeterminate/NsN environments.

Keywords: neutrosophic compound orthogonal neural network; neutrosophic number; neutrosophic function; function approximation

1. Introduction

Neural networks are powerful universal approximation tools. They have been utilized for data modeling, function approximation, classification analysis, pattern recognition, as well as their various applications. Uncertain or interval values result from the incompleteness of measurements, human observation, and estimations in the real world. Hence, Baker and Patil [1] proposed an interval neural network (INN), which used interval weights rather than interval data input to approximate an interval function. Then, Hu et al. [2] presented an INN with interval weights, where the network is modeled as the problem of solving equations, implying complexity in the solution process. Rossi and Conan-Guez [3] introduced a multilayer perceptron neural network on interval data for the classification analysis of interval data. For processing the interval neural network, Patiño-Escarcina [4] presented an INN where one of its input, output, and weight sets is interval-valued and the output set is a binary one, so its outputs are binaries too for classifiers. Recently, Lu et al. [5] introduced a neural network-based interval matcher corresponding to linguistic IF-THEN constructions; this interval pattern matcher identifies patterns with interval elements using the neural network, can handle interval input values and interval output values based on a traditional neural network, and is only suitable for interval pattern matching. Kowalski and Kulczycki [6] presented the interval probabilistic neural network (IPNN) for the classification of interval data, where the IPNN structure is based on Specht's probabilistic network [7].

In indeterminate environments, neutrosophic theory [8–10] has been used for various applications [11–14]. Since a neutrosophic number (NsN) [8–10] can represent both certain and uncertain information in indeterminate settings and contains a changeable interval depending on its indeterminate ranges, NsNs have been widely applied to decision making [15–17], fault diagnoses [18,19], linear and nonlinear optimization problems [20–23], and the expression and analysis of the rock joint roughness coefficient (JRC) [24–27]. However, there is no study on neutrosophic neural networks with NsNs in the existing literature, while existing INNs also cannot deal with uncertain problems involving NsNs. Therefore, this original study proposes a neutrosophic compound orthogonal neural network (NCONN) for the first time, which contains NsN weight values, NsN input and output neurons, and hidden layer neutrosophic neurons, to approximate neutrosophic functions and NsN data. In the proposed NCONN model, single input and single output data are NsNs (changeable interval numbers), and the hidden layer neutrosophic neuron functions are composed of the Chebyshev neutrosophic orthogonal polynomial and the neutrosophic sigmoid function. In addition, illustrative and actual examples are provided to verify the effectiveness and performance of the proposed NCONN model in approximating neutrosophic nonlinear functions and NsN data. The contribution of this study is that the proposed NCONN can handle the approximating and modeling problems of neutrosophic functions and NsN data for the first time. The main advantage is that the proposed NCONN implies a simple learning algorithm, higher speed learning convergence, and higher learning accuracy in indeterminate/NsN environments.

This study is organized as follows. The second section introduces the basic concepts and operations of NsNs. The third section proposes an NCONN structure and its learning algorithm. Then, two illustrative examples of neutrosophic nonlinear function approximations and an actual example (a real case) of the approximation problem of rock JRC NsNs are presented in the fourth and fifth sections, respectively, to verify the effectiveness and performance of the proposed NCONN in approximating neutrosophic nonlinear functions and NsN data under indeterminate/NsN environments. The last section contains conclusions and future work.

2. Basic Concepts and Operations of NsNs

In an uncertain setting, Smarandache [8–10] introduced the NsN concept represented by the mathematical form N = c + uI for c, u ∈ R (all real numbers) and I (indeterminacy), in which the certain part c and its uncertain part uI for I ∈ [I⁻, I⁺] are combined. Hence, it can depict and express the certain and/or uncertain information in indeterminate problems.

Suppose there is the NsN N = 5 + 3I; it depicts that its certain value is five and its uncertain value is 3I. Then, some interval range of the indeterminacy I ∈ [I⁻, I⁺] is possibly specified in actual applications to satisfy an applied requirement. For instance, if the indeterminacy I is specified as the possible interval I ∈ [0, 2], then N is equivalent to N = [5, 11]. If I ∈ [1, 3], then there is N = [8, 14]. It is obvious that N is a changeable interval depending on the specified indeterminate range of I ∈ [I⁻, I⁺], which is also denoted by N = [c + uI⁻, c + uI⁺].
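As a quick check of this changeable-interval reading, a minimal Python sketch (the helper name nsn_to_interval is ours, not the paper's) reproduces the numbers above:

```python
# A minimal sketch of realizing an NsN N = c + uI as a changeable interval
# once the indeterminacy range I in [I_lo, I_hi] is specified.

def nsn_to_interval(c, u, I_lo, I_hi):
    """Return [c + u*I_lo, c + u*I_hi], ordered so the bounds stay valid."""
    a, b = c + u * I_lo, c + u * I_hi
    return (min(a, b), max(a, b))  # guards against u < 0

# N = 5 + 3I with I in [0, 2] gives [5, 11]; with I in [1, 3] gives [8, 14]
print(nsn_to_interval(5, 3, 0, 2))  # (5, 11)
print(nsn_to_interval(5, 3, 1, 3))  # (8, 14)
```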

In some special cases, an NsN N = c + uI for N ∈ U (U is the set of all NsNs) may be represented as either a certain number N = c for uI = 0 (the best case) or an uncertain number N = uI for c = 0 (the worst case).


Provided that there are two NsNs N1 = c1 + u1I and N2 = c2 + u2I for N1, N2 ∈ U and I ∈ [I⁻, I⁺], their operational laws are introduced as follows [21]:

N1 + N2 = c1 + c2 + (u1 + u2)I = [c1 + c2 + u1I⁻ + u2I⁻, c1 + c2 + u1I⁺ + u2I⁺]  (1)

N1 − N2 = c1 − c2 + (u1 − u2)I = [c1 − c2 + u1I⁻ − u2I⁻, c1 − c2 + u1I⁺ − u2I⁺]  (2)

N1 × N2 = c1c2 + (c1u2 + c2u1)I + u1u2I²
= [min{(c1 + u1I⁻)(c2 + u2I⁻), (c1 + u1I⁻)(c2 + u2I⁺), (c1 + u1I⁺)(c2 + u2I⁻), (c1 + u1I⁺)(c2 + u2I⁺)},
max{(c1 + u1I⁻)(c2 + u2I⁻), (c1 + u1I⁻)(c2 + u2I⁺), (c1 + u1I⁺)(c2 + u2I⁻), (c1 + u1I⁺)(c2 + u2I⁺)}]  (3)

N1 / N2 = (c1 + u1I)/(c2 + u2I) = [c1 + u1I⁻, c1 + u1I⁺] / [c2 + u2I⁻, c2 + u2I⁺]
= [min{(c1 + u1I⁻)/(c2 + u2I⁺), (c1 + u1I⁻)/(c2 + u2I⁻), (c1 + u1I⁺)/(c2 + u2I⁺), (c1 + u1I⁺)/(c2 + u2I⁻)},
max{(c1 + u1I⁻)/(c2 + u2I⁺), (c1 + u1I⁻)/(c2 + u2I⁻), (c1 + u1I⁺)/(c2 + u2I⁺), (c1 + u1I⁺)/(c2 + u2I⁻)}]  (4)

Regarding an uncertain function containing NsNs, Ye [21,22] defined a neutrosophic function in n variables (unknowns) as y(x, I): Uⁿ → U for x = [x1, x2, …, xn]^T ∈ Uⁿ and I ∈ [I⁻, I⁺], which is then a neutrosophic nonlinear or linear function.

For example, y1(x, I) = N1 x cos(x) = (c1 + u1I) x cos(x) for x ∈ U and I ∈ [I⁻, I⁺] is a neutrosophic nonlinear function, while y2(x, I) = N1x1 + N2x2 + N3 = (c1 + u1I)x1 + (c2 + u2I)x2 + (c3 + u3I) for x = [x1, x2]^T ∈ U² and I ∈ [I⁻, I⁺] is a neutrosophic linear function. Generally, the values of x and y(x) are NsNs (usually, but not always).

3. NCONN with NsNs

This section proposes an NCONN structure and its learning algorithm based on the NsN concept for the first time.

A three-layer feedforward NCONN structure with a single input, a single output, and hidden layer neutrosophic neurons is indicated in Figure 1. In Figure 1, the weight values between the input layer neuron and the hidden layer neutrosophic neurons are equal to the constant value 1, and the NsN weight values between the hidden layer neutrosophic neurons and the output layer neuron are wj (j = 1, 2, …, p); xk (k = 1, 2, …, n) is the kth NsN input signal; yk is the kth NsN output signal; and p is the number of the hidden layer neutrosophic neurons.

Figure 1. A three-layer feedforward neutrosophic compound orthogonal neural network (NCONN structure).


In the learning process, when each NsN input signal is given by xk = ck + ukI = [ck + ukI⁻, ck + ukI⁺] (k = 1, 2, …, n) for I ∈ [I⁻, I⁺], the actual output value is given as:

yk = ∑_{j=1}^{p} wj qj, k = 1, 2, …, n  (5)

where the neutrosophic neuron functions qj (j = 1, 2, …, p) of the hidden layer are the Chebyshev compound neutrosophic orthogonal polynomials q1 = [1, 1], q2 = X, qj = 2X · qj−1 − qj−2, and X is specified as the following unipolar neutrosophic sigmoid function (the neutrosophic S-function):

X = 1/(1 + e^(−αxk))  (6)

The neutrosophic S-function can transform an NsN into the interval (0, 1), and different values of the scalar parameter α change the slant degree of the neutrosophic S-function curve. Then, the square interval of the output errors between the desired output yk^d = ck^d + uk^d I and the actual output yk = ck + ukI for I ∈ [I⁻, I⁺] is given as follows:

Ek² = [(ck^d + uk^d I⁻ − ck − ukI⁻)², (ck^d + uk^d I⁺ − ck − ukI⁺)²]  (7)

Then, the learning performance index of the proposed NCONN is specified as the following requirement:

E = (1/2) ∑_{k=1}^{n} Ek²  (8)

The NCONN weight values can be adjusted by the following formula:

Wk(l + 1) = Wk(l) + λ Ek Qk, k = 1, 2, …, n  (9)

where Wk(l) = [w1(l), w2(l), …, wp(l)]^T and Qk(l) = [q1(l), q2(l), …, qp(l)]^T are the NsN weight vector and the function vector of the hidden layer neutrosophic neurons, respectively; λ ∈ (0, 1) is the learning rate of the NCONN, which determines the convergence velocity; and l is the lth learning iteration of the NCONN. Thus, this NCONN learning algorithm can be described below:

Step 1: Initialize Wk(0) with small random values;
Step 2: Input an NsN and calculate the actual output of the NCONN based on Equations (5) and (6);
Step 3: Calculate the output error by using Equations (7) and (8);
Step 4: Adjust the weight values by using Equation (9);
Step 5: Input the next NsN and return to Step 2.

In the NCONN learning process, the learning termination condition depends on the requirement of the specified learning error or iteration number.
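To make Steps 1–5 concrete, the following minimal Python sketch implements the learning loop under stated assumptions: each NsN is carried as its interval endpoints [lo, hi], and, since the S-function in Equation (6) is monotone, the lower and upper endpoints are propagated channel-wise rather than by full interval arithmetic. The helper names (hidden_outputs, train, predict) are ours, not the paper's, and this is a sketch of the technique rather than the authors' exact implementation.

```python
# A runnable sketch of the NCONN learning algorithm (Equations (5)-(9)).
import numpy as np

def hidden_outputs(x, alpha, p):
    """Chebyshev compound basis: q1 = 1, q2 = X, qj = 2X*q(j-1) - q(j-2),
    with X = 1/(1 + exp(-alpha*x)) applied to both interval endpoints (Eq. (6))."""
    X = 1.0 / (1.0 + np.exp(-alpha * x))
    q = [np.ones_like(x), X]
    for _ in range(p - 2):
        q.append(2.0 * X * q[-1] - q[-2])   # Chebyshev recurrence
    return np.stack(q)                       # shape (p, 2): lower/upper columns

def train(xs, yds, p=8, alpha=2.5, lam=0.25, iters=20, seed=0):
    """xs, yds: lists of arrays [lo, hi] for the NsN inputs and desired outputs."""
    rng = np.random.default_rng(seed)
    W = 0.1 * rng.standard_normal((p, 2))    # Step 1: small random NsN weights
    for _ in range(iters):
        for x, yd in zip(xs, yds):
            Q = hidden_outputs(x, alpha, p)  # Step 2 with Equations (5) and (6)
            y = (W * Q).sum(axis=0)          # actual interval output, Eq. (5)
            e = yd - y                       # Step 3: endpoint errors, cf. Eq. (7)
            W += lam * e * Q                 # Step 4: Eq. (9), W(l+1) = W(l)+lam*E*Q
    return W

def predict(W, x, alpha=2.5):
    y = (W * hidden_outputs(x, alpha, W.shape[0])).sum(axis=0)
    return np.array([y.min(), y.max()])      # keep the output a valid interval
```

Under these assumptions, Equation (9) reduces to two coupled LMS-style updates sharing one Chebyshev basis, which is consistent with the simple learning algorithm and fast convergence claimed for the NCONN.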

Since an NsN can be considered as a changeable interval depending on its indeterminacy I ∈ [I⁻, I⁺], the learning algorithm of the NCONN permits changeable interval operations, which are different from existing neural network algorithms and show its advantage in approximating neutrosophic nonlinear functions/NsN data in an uncertain/NsN setting.

Generally, the more hidden layer neutrosophic neurons there are, the higher the approximation accuracy of the proposed NCONN is. The number of hidden layer neutrosophic neurons determined in actual applications will then depend on the accuracy requirements of the actual approximation models.

4. NsN Nonlinear Function Approximation Applied by the Proposed NCONN

To prove the effectiveness of approximating any neutrosophic nonlinear function based on the proposed NCONN model, we present two illustrative examples in this section.

Example 1. Suppose there is a neutrosophic nonlinear function:

y1(x, I) = 1 + 0.3I + (0.5 + 0.2I) x cos(πx) for I ∈ [0, 1].

For x ∈ [−1 + 0.02I, 1 + 0.023I] and I ∈ [0, 1], the proposed NCONN needs to approximate the above neutrosophic nonlinear function.

To prove the approximation ability of the proposed NCONN, we give the proposed NCONN structure with eight hidden layer neutrosophic neurons (p = 8) and learning parameters, which are indicated in Table 1.

Table 1. The NCONN structure and learning parameters.

NCONN Structure | α | λ | Number of Specified Learning Iterations | E
1 × 8 × 1 | 2.5 | 0.25 | 20 | [3.2941, 8.5088]

Then, the desired output y1d = [y1d⁻, y1d⁺] and actual output y1 = [y1⁻, y1⁺] of the proposed NCONN are shown in Figure 2. Obviously, the desired output curves and the actual output curves are very close to each other, demonstrating the good approximation accuracy of the proposed NCONN in this neutrosophic nonlinear function approximation. Hence, the proposed NCONN shows good approximation performance regarding the neutrosophic nonlinear function.

Figure 2. The desired output y1d = [y1d⁻, y1d⁺] and actual output y1 = [y1⁻, y1⁺] of the proposed NCONN.
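For readers who wish to reproduce Example 1 with the Section 3 sketch, one assumed data-generation scheme is shown below; sampling crisp x values over [−1, 1] and taking target intervals from the I = 0 and I = 1 extremes (which bound y1 because y1 is affine in I for fixed x) are our assumptions, not details given in the paper. It reuses the train() helper from the Section 3 sketch.

```python
# Illustrative training data for Example 1 under the stated assumptions.
import numpy as np

def y1_interval(t):
    g = t * np.cos(np.pi * t)
    lo, hi = 1.0 + 0.5 * g, 1.3 + 0.7 * g   # y1 at I = 0 and at I = 1
    return np.array([min(lo, hi), max(lo, hi)])

ts = np.linspace(-1.0, 1.0, 41)
xs = [np.array([t, t]) for t in ts]          # crisp inputs as degenerate intervals
yds = [y1_interval(t) for t in ts]
W = train(xs, yds, p=8, alpha=2.5, lam=0.25, iters=20)   # Table 1 settings
```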

Example 2. Considering a neutrosophic nonlinear function:

y2(x, I) = (0.6 + 0.3I) sin(πx) + (0.3 + 0.15I) sin(3πx) + (0.1 + 0.05I) sin(5πx) for I ∈ [0, 1].

For x ∈ [0 + 0.002I, 1 + 0.002I] and I ∈ [0, 1], the proposed NCONN needs to approximate the above neutrosophic nonlinear function.

To prove the approximation ability of the proposed NCONN model, we also give the NCONN structure with eight hidden layer neutrosophic neurons (p = 8) and learning parameters, which are indicated in Table 2.

Table 2. The NCONN structure and learning parameters.

NCONN Structure | α | λ | Number of Specified Learning Iterations | E
1 × 8 × 1 | 8 | 0.3 | 20 | [0.5525, 1.1261]

Thus, the desired output y2d = [y2d⁻, y2d⁺] and actual output y2 = [y2⁻, y2⁺] of the proposed NCONN are indicated in Figure 3. It is obvious that the desired output curves and the actual output curves are also very close, demonstrating the good approximation accuracy and performance of the proposed NCONN in this neutrosophic nonlinear function approximation.

Figure 3. The desired output y2d = [y2d⁻, y2d⁺] and actual output y2 = [y2⁻, y2⁺] of the proposed NCONN.

From the learning results obtained in the above two illustrative examples, we can see that the proposed NCONN showed faster learning velocity and higher learning accuracy, which indicates good approximation performance regarding neutrosophic nonlinear functions.

5. Actual Example on the Approximation of the JRC NsNs Based on the Proposed NCONN

In rock mechanics, the JRC of rock joints implies uncertainty in different sampling lengths and directions of rock joints. Therefore, JRC uncertainty may make the shear strength of joints uncertain because of the corresponding relationship between the JRC and the shear strength, which results in the difficulty of making assessments of side stability [25–27]. However, the lengths of the testing samples can affect JRC values, which indicates their scale effect. To establish a relationship between the sampling lengths L and the JRC values in an uncertain/NsN setting, the existing literature [25–27] used the uncertain/neutrosophic statistical method and fitting functions to establish related models of L and the JRC. Since the proposed NCONN is able to approximate NsN data, it is applied to the relative approximation model between the sampling length L and the NsN data of the JRC through an actual example (a real case) in this section, to show its effectiveness.

According to the testing samples of the specified area in Shaoxing city, China, and data analysis, we found a relationship between the sampling length L and the NsN data of the JRC, which is shown in Table 3.

Table 3. NsN data of the rock joint roughness coefficient (JRC) regarding different sampling lengths for I ∈ [0, 1].

Sample Length L (cm) xk | JRC yk
9.8 + 0.4I [9.8, 10.2] | 8.321 + 6.231I [8.321, 14.552]
19.8 + 0.4I [19.8, 20.2] | 7.970 + 6.419I [7.970, 14.389]
29.8 + 0.4I [29.8, 30.2] | 7.765 + 6.529I [7.765, 14.294]
39.8 + 0.4I [39.8, 40.2] | 7.762 + 6.464I [7.762, 14.226]
49.8 + 0.4I [49.8, 50.2] | 7.507 + 6.64I [7.507, 14.147]
59.8 + 0.4I [59.8, 60.2] | 7.417 + 6.714I [7.417, 14.131]
69.8 + 0.4I [69.8, 70.2] | 7.337 + 6.758I [7.337, 14.095]
79.8 + 0.4I [79.8, 80.2] | 7.269 + 6.794I [7.269, 14.063]
89.8 + 0.4I [89.8, 90.2] | 7.210 + 6.826I [7.210, 14.036]
99.8 + 0.4I [99.8, 100.2] | 7.156 + 6.855I [7.156, 14.011]

To establish the approximation model of the proposed NCONN regarding the actual example, we took the NCONN structure with eight hidden layer neutrosophic neurons (p = 8) and indicated the learning parameters in Table 4.

Table 4. The NCONN structure and learning parameters regarding the actual example.

NCONN Structure | α | λ | Number of Specified Learning Iterations | E
1 × 8 × 1 | 8 | 0.1 | 15 | [3.2715, 22.3275]
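Under the same caveats as the Section 3 sketch, the Table 3 data can be wired into the assumed train() helper with the Table 4 settings; dividing L by 100 is an assumed normalization (raw lengths up to about 100 cm would saturate the S-function), not a step reported in the paper:

```python
# Feeding the Table 3 JRC NsNs into the earlier train() sketch.
import numpy as np

L_nsns   = [(9.8, 0.4), (19.8, 0.4), (29.8, 0.4), (39.8, 0.4), (49.8, 0.4),
            (59.8, 0.4), (69.8, 0.4), (79.8, 0.4), (89.8, 0.4), (99.8, 0.4)]
jrc_nsns = [(8.321, 6.231), (7.970, 6.419), (7.765, 6.529), (7.762, 6.464),
            (7.507, 6.640), (7.417, 6.714), (7.337, 6.758), (7.269, 6.794),
            (7.210, 6.826), (7.156, 6.855)]

# c + uI with I in [0, 1] realizes the interval [c, c + u]
xs  = [np.array([c, c + u]) / 100.0 for c, u in L_nsns]   # assumed scaling
yds = [np.array([c, c + u]) for c, u in jrc_nsns]
W = train(xs, yds, p=8, alpha=8, lam=0.1, iters=15)       # Table 4 settings
```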

From Figure 4, we can see that the proposed NCONN could approximate the JRC NsN data regarding different sampling lengths L and showed higher speed convergence and higher approximating accuracy in its learning process for the actual example. Obviously, the proposed NCONN could find the approximating model between different sampling lengths L and the JRC NsN data, while existing neural networks cannot do so in the uncertain/NsN setting.

Figure 4. The proposed NCONN approximation results of the JRC NsN data regarding different sampling lengths L.


6. Conclusions

In an NsN setting, this original study presented an NCONN to approximate neutrosophic functions/NsN data for the first time. It is a three-layer feedforward neutrosophic network structure composed of a single input, a single output, and hidden layer neutrosophic neurons, where the single input and single output information are NsNs and the hidden layer neutrosophic neuron functions are composed of both the Chebyshev neutrosophic orthogonal polynomial and the neutrosophic sigmoid function. Illustrative and actual examples were provided to verify the effectiveness and rationality of the proposed NCONN model for approximating neutrosophic nonlinear functions and establishing the approximation model of NsN data. Therefore, the contribution of this study is that the proposed NCONN can handle the approximating and modeling problems of uncertain/interval/neutrosophic functions and NsN data. Here, the main advantage is that the proposed NCONN implies a simpler learning algorithm, higher speed learning convergence, and higher learning accuracy in indeterminate/NsN environments.

In future work, we shall propose further NCONNs with multiple inputs and multiple outputs and apply them to the modeling and approximating problems of neutrosophic functions and NsN data, the clustering analysis of NsNs, medical diagnosis problems, and possible applications for decision-making and control in robotics [14,28] in an indeterminate/NsN setting.

Author Contributions: J.Y. proposed the NCONN model and learning algorithm; W.H.C. provided the simulation analysis of the illustrative and actual examples; J.Y. and W.H.C. wrote the paper together.

Funding: This paper was supported by the National Natural Science Foundation of China (Nos. 71471172, 61703280).

Conflicts of Interest: The authors declare no conflict of interest regarding the publication of this paper.

References

1. Baker, M.R.; Patil, R.B. Universal approximation theorem for interval neural networks. Reliab. Comput. 1998, 4, 235–239. [CrossRef]

2. Beheshti, M.; Berrached, A.; Korvin, A.D.; Hu, C.; Sirisaengtaksin, O. On interval weighted three-layer neural networks. In Proceedings of the 31st Annual Simulation Symposium, Boston, MA, USA, 5–9 April 1998; pp. 188–195.

3. Rossi, F.; Conan-Guez, B. Multi-layer perceptron on interval data. Classification, clustering, and data analysis. In Studies in Classification, Data Analysis, and Knowledge Organization; Springer: Berlin, Germany, 2002; pp. 427–434.

4. Patiño-Escarcina, R.E.; Callejas Bedregal, B.R.; Lyra, A. Interval computing in neural networks: One layer interval neural networks. In Intelligent Information Technology. Lecture Notes in Computer Science; Das, G., Gulati, V.P., Eds.; Springer: Berlin/Heidelberg, Germany, 2004; Volume 3356, pp. 68–75.

5. Lu, J.; Xue, S.; Zhang, X.; Han, Y. A neural network-based interval pattern matcher. Information 2015, 6, 388–398. [CrossRef]

6. Kowalski, P.A.; Kulczycki, P. Interval probabilistic neural network. Neural Comput. Appl. 2017, 28, 817–834.

7. Specht, D.F. Probabilistic neural networks. Neural Netw. 1990, 3, 109–118. [CrossRef]

8. Smarandache, F. Neutrosophy: Neutrosophic Probability, Set, and Logic; American Research Press: Rehoboth, DE, USA, 1998.

9. Smarandache, F. Introduction to Neutrosophic Measure, Neutrosophic Integral, and Neutrosophic Probability; Sitech & Education Publisher: Craiova, Columbus, 2013.

10. Smarandache, F. Introduction to Neutrosophic Statistics; Sitech & Education Publishing: Columbus, OH, USA, 2014.

11. Ye, J. A multicriteria decision-making method using aggregation operators for simplified neutrosophic sets. J. Intell. Fuzzy Syst. 2014, 26, 2459–2466.


12. Broumi, S.; Bakali, A.; Talea, M.; Smarandache, F.; Uluçay, V.; Sahin, M.; Dey, A.; Dhar, M.; Tan, R.P.; Bahnasse, A.; et al. Neutrosophic sets: An overview. In New Trends in Neutrosophic Theory and Applications; Smarandache, F., Pramanik, S., Eds.; Pons Publishing House: Brussels, Belgium, 2018; Volume II, pp. 403–434.

13. Broumi, S.; Bakali, A.; Talea, M.; Smarandache, F.; Vladareanu, L. Computation of shortest path problem in a network with SV-trapezoidal neutrosophic numbers. In Proceedings of the 2016 International Conference on Advanced Mechatronic Systems, Melbourne, Australia, 30 November–3 December 2016; pp. 417–422.

14. Gal, I.A.; Bucur, D.; Vladareanu, L. DSmT decision-making algorithms for finding grasping configurations of robot dexterous hands. Symmetry 2018, 10, 198. [CrossRef]

15. Ye, J. Multiple-attribute group decision-making method under a neutrosophic number environment. J. Intell. Syst. 2016, 25, 377–386. [CrossRef]

16. Chen, J.Q.; Ye, J. A projection model of neutrosophic numbers for multiple attribute decision making of clay-brick selection. Neutrosophic Sets Syst. 2016, 12, 139–142.

17. Ye, J. Bidirectional projection method for multiple attribute group decision making with neutrosophic numbers. Neural Comput. Appl. 2017, 28, 1021–1029. [CrossRef]

18. Kong, L.W.; Wu, Y.F.; Ye, J. Misfire fault diagnosis method of gasoline engines using the cosine similarity measure of neutrosophic numbers. Neutrosophic Sets Syst. 2015, 8, 43–46.

19. Ye, J. Fault diagnoses of steam turbine using the exponential similarity measure of neutrosophic numbers. J. Intell. Fuzzy Syst. 2016, 30, 1927–1934. [CrossRef]

20. Jiang, W.Z.; Ye, J. Optimal design of truss structures using a neutrosophic number optimization model under an indeterminate environment. Neutrosophic Sets Syst. 2016, 14, 93–97.

21. Ye, J. Neutrosophic number linear programming method and its application under neutrosophic number environments. Soft Comput. 2018, 22, 4639–4646. [CrossRef]

22. Ye, J. An improved neutrosophic number optimization method for optimal design of truss structures. New Math. Nat. Comput. 2018, 14, 295–305. [CrossRef]

23. Ye, J.; Cui, W.H.; Lu, Z.K. Neutrosophic number nonlinear programming problems and their general solution methods under neutrosophic number environments. Axioms 2018, 7, 13. [CrossRef]

24. Ye, J.; Chen, J.Q.; Yong, R.; Du, S.G. Expression and analysis of joint roughness coefficient using neutrosophic number functions. Information 2017, 8, 69. [CrossRef]

25. Chen, J.Q.; Ye, J.; Du, S.G. Scale effect and anisotropy analyzed for neutrosophic numbers of rock joint roughness coefficient based on neutrosophic statistics. Symmetry 2017, 9, 208. [CrossRef]

26. Chen, J.Q.; Ye, J.; Du, S.G.; Yong, R. Expressions of rock joint roughness coefficient using neutrosophic interval statistical numbers. Symmetry 2017, 9, 123. [CrossRef]

27. Ye, J.; Yong, R.; Liang, Q.F.; Huang, M.; Du, S.G. Neutrosophic functions of the joint roughness coefficient (JRC) and the shear strength: A case study from the pyroclastic rock mass in Shaoxing City, China. Math. Probl. Eng. 2016, 4825709. [CrossRef]

28. Vlădăreanu, V.; Dumitrache, I.; Vlădăreanu, L.; Sacală, I.S.; Tonţ, G.; Moisescu, M.A. Versatile intelligent portable robot control platform based on cyber physical systems principles. Stud. Inform. Control 2015, 24, 409–418. [CrossRef]

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
