BOHR International Journal of Advances in Management Research 2022, Vol. 1, No. 1, pp. 7–14 https://doi.org/10.54646/bijamr.002 www.bohrpub.com

Fraud Detection in E-Commerce Using Machine Learning

Samrat Ray

ISMS Sankalp Business School, Pune, India

E-mail: samratray@rocketmail.com
Abstract. A rise in online customers is driving an increase in transactions, and with it we observe that the prevalence of fraud in online transactions is also increasing. Machine learning is becoming more widely used to prevent fraud in online commerce. The goal of this investigation is to identify the best machine learning algorithm among decision trees, naive Bayes, random forests, and neural networks. The raw data are used unmodified at first; balanced data are then generated using the synthetic minority oversampling technique (SMOTE). The accuracy of the neural network, as determined by confusion matrix evaluation, is 96%, followed by naive Bayes (95%), random forest (95%), and decision tree (92%).

Keywords: AI, fraud identification, algorithms, matrix, web-based.
INTRODUCTION
According to research on internet users in Indonesia published in the October 2019 issue of Free Marketeers Magazine, the country's user base grew to 142.3 million in 2019, up from the 132 million users of the previous year, as depicted in Figure 1. Far more people used web-based systems and conducted online transactions during COVID-19, but where there are innovations, there are also many problems. There are numerous methods for growing an e-commerce business [1, 3].
Based on information from many datasets, it is predicted that by 2022 the volume of retail online business transactions in Indonesia will expand from its current position to 134.6% of US$15.3 million, or almost 217 trillion. Rapid technical advancements that make it easier for customers to shop are supporting this growth.
Numerous e-commerce transactions present a variety of challenges and new problems, particularly the e-commerce fraud shown in Figure 2. The number of Internet-business-related scams has also climbed continuously since around 1993. According to a 2013 survey, 5.65 cents out of every $100 of total e-commerce transaction turnover was fraudulent. More than 70 trillion dollars will have been stolen by 2019 [4, 5]. Fraud detection is one method to reduce fraud in online transactions.

The technology for detecting credit card fraud has advanced quickly, moving from machine learning to deep learning [6]. Regrettably, however, research on e-commerce fraud detection remains scarce, and it has so far focused only on identifying the traits or qualities [7] that indicate whether an e-commerce transaction is fraudulent or not.

The datasets used in this study had a combined 140,130 records, of which 11,150 were fraud cases, a fraud rate of 0.093. Datasets with such small fraud proportions produce imbalanced information. Compared to the minority class, imbalanced data produce results that are weighted toward the larger portion of the records: the classification leans toward non-fraud rather than fraud in the dataset studied. Using the SMOTE (synthetic minority oversampling) technique to handle this data imbalance improves the class outcomes [8, 9].
This study aims to identify the most effective model for identifying fraud in an online transaction. Feature extraction is included in recent research on detecting fraud in e-commerce [10, 11]. This paper concentrates on fraud detection in e-commerce: it uses datasets from Kaggle, upgraded classification machine learning, and SMOTE to handle unbalanced records. After applying SMOTE, the dataset is trained with machine learning. Decision trees, naive Bayes,
Figure 1. Growth of internet users [2].

Figure 3. Research steps.

Since fraud cases are typically only about 2% of transactions, the SMOTE technique is useful for reducing the dominance of the majority class in the dataset and addressing data imbalance issues. Without SMOTE, the majority class causes the classification to be biased toward the majority class, so that the predictions are not accurate [12, 15].
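The interpolation idea behind SMOTE can be sketched in a few lines. The following is a minimal illustrative implementation in NumPy, not the paper's code: the function name `smote_oversample`, the neighbour count, and the toy 98%/2% data are all our assumptions (production work would normally use the `SMOTE` class from the imbalanced-learn library).

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, rng=None):
    """Generate n_new synthetic minority samples by interpolating between
    a random minority point and one of its k nearest minority neighbours
    (the core idea of SMOTE)."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    k = min(k, n - 1)
    synthetic = np.empty((n_new, X_min.shape[1]))
    for i in range(n_new):
        j = rng.integers(n)                       # pick a minority sample
        d = np.linalg.norm(X_min - X_min[j], axis=1)
        nbrs = np.argsort(d)[1:k + 1]             # its k nearest neighbours
        nb = X_min[rng.choice(nbrs)]
        gap = rng.random()                        # interpolation factor in [0, 1)
        synthetic[i] = X_min[j] + gap * (nb - X_min[j])
    return synthetic

# Toy imbalanced data: 98% majority, 2% minority (the ratio cited above).
rng = np.random.default_rng(0)
X_maj = rng.normal(0, 1, size=(980, 2))
X_min = rng.normal(3, 1, size=(20, 2))
X_new = smote_oversample(X_min, n_new=960, rng=1)
X_bal_min = np.vstack([X_min, X_new])             # minority now matches majority
print(len(X_maj), len(X_bal_min))
```

Because each synthetic point is a convex combination of two real minority points, the new samples always lie within the region spanned by the minority class rather than being arbitrary noise.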
In the classification stage, the machine learning used a decision tree, random forest, artificial neural network, and naive Bayes. An online business can apply these machine learning algorithms to its transaction dataset and locate the one with the greatest accuracy.
Preprocessing Data

Figure 2. Sales of e-commerce, statista.com [4].
random forests, and neural network algorithms are used to determine the accuracy, precision, recall, F1-score, and G-mean.
MATERIALS AND METHODS

Using decision tree, naive Bayes, random forest, and neural network computations, this study investigates fraud and non-fraud in online business transactions. The full process is shown in Figure 3.
The dataset's feature selection process serves as the starting point of the pipeline. Transformation, normalization, and scaling of the attributes are employed so that they can be used for classification once the SMOTE procedure has finished. Preprocessing of the data is accomplished using principal component analysis (PCA). SMOTE is essential for balancing the imbalanced data.

New features that will be employed in the machine learning cycle are subject to preprocessing, which extracts, modifies, scales, and standardizes them. Unreliable data are converted into reliable data through preprocessing. The PCA preprocessing in this study includes extraction, modification, normalization, and scaling.
In order to isolate features from high-dimensional data, PCA is a linear transformation typically applied in data compression. Furthermore, PCA can reduce complex information to smaller dimensions to reveal hidden components and improve the structure of the data. PCA computations involve computing the covariance matrix so as to minimize information loss and maximize variance.
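The covariance-and-variance procedure just described can be sketched directly in NumPy. This is an illustrative implementation under our own assumptions (the function name `pca_transform` and the synthetic 3-D data are ours; in practice `sklearn.decomposition.PCA` does the same job):

```python
import numpy as np

def pca_transform(X, n_components):
    """Project X onto its top principal components: centre the data,
    eigendecompose the covariance matrix, and keep the directions of
    maximum variance."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]          # largest variance first
    components = eigvecs[:, order[:n_components]]
    return Xc @ components

# 3-D points that mostly vary along one direction -> compress to 2-D.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1)) @ np.array([[3.0, 2.0, 1.0]]) \
    + 0.1 * rng.normal(size=(200, 3))
Z = pca_transform(X, n_components=2)
print(Z.shape)
```

The first retained component captures almost all of the variance in this toy data, which is exactly the compression effect the paragraph above describes.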
Decision Tree

Decision trees are valuable for investigating fraud data and finding hidden connections between various candidate factors and a target variable. The decision tree [20] combines fraud data exploration and modeling, so it is generally excellent as the first phase of the modeling process in any
Figure 4. Architecture of decision trees.


event, even when used as the final model among several different procedures [16, 18].

Decision trees are excellent for classification computations and are a type of supervised learning algorithm. The decision tree organizes the dataset into several increasingly fine segments in line with decision rules by emphasizing the connection between input and output attributes.
• Root node: This represents the whole population or sample, and it is further divided into at least two sub-nodes.
• Splitting: The process of dividing a node into two or more sub-nodes.
• Decision node: When a sub-node splits into further sub-nodes, it is called a decision node.
• Leaf/Terminal node: Nodes that do not split further are called leaf or terminal nodes.
• Pruning: The removal of a decision node's sub-nodes.
• Branch/Sub-tree: A subdivision of the whole tree is called a branch or sub-tree.
• Parent and child node: A node that is divided into sub-nodes is the parent of those child nodes [19].
As shown in Figure 4, the fraud detection employs a decision tree with a root node, internal nodes, and leaf nodes.
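A decision tree classifier of the kind described above can be sketched with scikit-learn. The data here are a synthetic stand-in, not the paper's Kaggle dataset; the ~9% positive-class weight merely mimics the fraud rate reported earlier, and `max_depth=5` is an arbitrary illustrative choice.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Toy stand-in for transaction features, with a ~9% minority (fraud) class.
X, y = make_classification(n_samples=2000, n_features=8,
                           weights=[0.91], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42, stratify=y)

# Splitting stops at depth 5; each leaf node then votes fraud / non-fraud.
tree = DecisionTreeClassifier(max_depth=5, random_state=42)
tree.fit(X_tr, y_tr)
print(round(tree.score(X_te, y_te), 2))
```

Limiting `max_depth` is a simple form of the pruning discussed in the bullet list: it prevents the tree from memorizing rare fraud patterns that do not generalize.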
Naive Bayes

Naive Bayes predicts probabilities based on prior experience [23]. It uses the estimation equation below:
Figure 5. Architecture of random forest.
P(A|B) = (P(B|A) × P(A)) / P(B) (1)

where B denotes the data with an unknown class label and A the hypothesis that the data belong to a specific class:

P(A|B): probability of the hypothesis given the evidence (posterior probability)
P(A): probability of the hypothesis (prior probability)
P(B|A): probability of the evidence given the hypothesis
P(B): probability of the evidence
The above equation can be used to assess both fraudulent and legitimate transactions.
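Equation (1) can be made concrete with a small worked computation. The fraud "signal" and all the probabilities below are hypothetical numbers chosen for illustration only, not figures from the paper:

```python
# Bayes' rule, P(A|B) = P(B|A) * P(A) / P(B), applied to a hypothetical
# signal ("shipping and billing address differ"), with made-up rates.
p_fraud = 0.02                 # P(A): prior fraud rate
p_signal_fraud = 0.60          # P(B|A): signal rate among fraud
p_signal_legit = 0.05          # P(B|not A): signal rate among legitimate

# P(B) by the law of total probability.
p_signal = p_signal_fraud * p_fraud + p_signal_legit * (1 - p_fraud)
posterior = p_signal_fraud * p_fraud / p_signal   # P(A|B)
print(round(posterior, 3))     # → 0.197
```

Even a signal 12 times more common among fraud lifts the posterior fraud probability only to about 20%, because the prior is so small. This is exactly why imbalance matters in fraud classification.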
Random Forest

When a lot of data is involved, the random forest (RF) algorithm is used. The classification and regression tree (CART) method evolved into RF by adding the bootstrap aggregating (bagging) method and random feature selection. The RF is displayed in Figure 5.
A model called a "random forest" is made up of a collection of decision trees. The trees in the e-commerce fraud detection system depend on the values of a random vector sampled with the same distribution for all trees. Each decision tree produces a class vote, and the most popular class is selected as the classification result.
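The bagging-plus-random-features construction can be sketched with scikit-learn's `RandomForestClassifier`. As before, the data are a synthetic stand-in for the transaction features, and the hyperparameters are illustrative choices rather than the paper's settings:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy data standing in for the transaction features (~9% minority class).
X, y = make_classification(n_samples=2000, n_features=8,
                           weights=[0.91], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42, stratify=y)

# bootstrap=True gives each tree a bagged sample; max_features="sqrt"
# limits the random feature subset considered at each split -- the two
# ingredients that turn CART into a random forest.
rf = RandomForestClassifier(n_estimators=100, bootstrap=True,
                            max_features="sqrt", random_state=42)
rf.fit(X_tr, y_tr)
print(round(rf.score(X_te, y_te), 2))  # majority vote of the 100 trees
```

Because each tree sees a different bootstrap sample and feature subset, their errors are decorrelated, and the majority vote is usually more stable than any single decision tree.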
Neural Network

An artificial neural network is a system of connected nodes, such as the architecture seen in Figure 6, modeled on the neurons of the human brain and applied as part of the artificial intelligence technique.
Before preprocessing there were 11 input features; after preprocessing there were 17. The hidden layer of the neural network was decided by genetic algorithms, along with the number of input layers [18]. This forecasting procedure uses the GA-NN [19] algorithm, which is as follows:
Figure 6. Architecture of neural network.

The steps are as follows:

• Initialization: count is zero, fitness is one, and there are no cycles.
• Initial population generation: each consecutive gene sequence making up a chromosome codes for the input.
• Choose a suitable network architecture.
• Assign weights.
• Train with backpropagation; examine the fitness metrics and accumulated errors, then assess according to the fitness value, checking whether the current fitness is greater than the prior fitness.
• Count = count + 1.
• Selection: a roulette-wheel mechanism is used to choose the two parents. Crossover, mutation, and reproduction are genetic operations that create new candidates.
• If the count has not reached the number of cycles, return to step 4.
• Train the network with the selected attributes.
• Evaluate performance using the test results.
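The steps above can be sketched in miniature. This is a heavily simplified toy, not the paper's GA-NN: the "chromosome" is just a hidden-layer size, fitness is validation accuracy of an sklearn `MLPClassifier`, and only roulette-wheel selection and mutation are kept (a real GA-NN would also encode weights and use crossover, as the steps describe). All names and parameter values are our assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=400, n_features=8, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

def fitness(hidden):
    """Fitness of a chromosome = validation accuracy of the trained net."""
    net = MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=300,
                        random_state=0)
    net.fit(X_tr, y_tr)                      # backpropagation training step
    return net.score(X_va, y_va)

rng = np.random.default_rng(0)
population = [2, 4, 8, 16]                   # initial chromosomes
for generation in range(3):
    scores = np.array([fitness(h) for h in population])
    probs = scores / scores.sum()            # roulette-wheel probabilities
    parents = rng.choice(population, size=2, p=probs)
    child = max(1, int(parents.mean()) + int(rng.integers(-2, 3)))  # mutate
    population[int(np.argmin(scores))] = child   # replace weakest member
best = population[int(np.argmax([fitness(h) for h in population]))]
print("selected hidden units:", best)
```

The roulette wheel gives fitter architectures a proportionally higher chance of parenting the next candidate, which is the selection mechanism named in the bullet list.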
Confusion Matrix

The confusion matrix is a technique that may be used to assess classification performance. A dataset with just two class categories is shown in Table 1 [20].
True Positive (TP) and True Negative (TN) count the objects correctly classified as positive and negative, respectively, whereas False Positive (FP) and False Negative (FN) count the objects incorrectly classified as positive and negative.

Table 1. Confusion matrix.

Class            Predictive Positive   Predictive Negative
Actual Positive  TP                    FN
Actual Negative  FP                    TN
Accuracy is the most popular metric for assessing classification ability, but in an imbalanced setting this assessment is flawed, since the minority class makes up only a very small portion of the accuracy metric.

The F1-score, G-mean, and recall evaluation criteria are therefore advised. The G-mean is used to quantify overall classification performance, while the F1-score is used to evaluate how minority classes are classified in imbalanced classes.

Recall, precision, F1-score, and G-mean classification ability were examined in this study.
Accuracy = (TP + TN) / (TP + TN + FN + FP) (2)

Recall = TP / (TP + FN) (3)

Precision = TP / (TP + FP) (4)

G-Mean = √(Recall × Specificity), where Specificity = TN / (TN + FP) (5)

F1-Score = (2 × Precision × Recall) / (Precision + Recall) (6)
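Equations (2)–(6) can be computed directly from confusion-matrix counts. The counts below are illustrative numbers of our own, not taken from the paper's tables; the G-mean is implemented as the geometric mean of recall and specificity, which is the usual reading of Eq. (5):

```python
from math import sqrt

# Illustrative confusion-matrix counts (hypothetical, not the paper's tables).
TP, TN, FP, FN = 2600, 38700, 300, 1700

accuracy = (TP + TN) / (TP + TN + FP + FN)                 # Eq. (2)
recall = TP / (TP + FN)                                    # Eq. (3)
precision = TP / (TP + FP)                                 # Eq. (4)
specificity = TN / (TN + FP)
g_mean = sqrt(recall * specificity)                        # Eq. (5)
f1 = 2 * precision * recall / (precision + recall)         # Eq. (6)

print(f"accuracy={accuracy:.3f} recall={recall:.3f} "
      f"precision={precision:.3f} f1={f1:.3f} g_mean={g_mean:.3f}")
```

Note how a high accuracy can coexist with a much lower recall when negatives dominate, which is why the imbalanced-data metrics above are preferred in this study.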
RESULTS
Dataset
This study uses an e-commerce fraud dataset obtained from Kaggle. The dataset has 151,112 records; of these, 14,151 records are classified as fraudulent activity, and the fraction of fraud data is 0.094. After oversampling, the fraud transaction dataset contains 152,122 total records, with 14,152 records classified as fraud, as shown in Figures 7 and 8. SMOTE reduces class imbalance by synthesizing data.
Decision Trees

Data that have undergone preprocessing are prepared for the experimental phase using the decision tree model. After preprocessing, the data are oversampled before classification with a decision tree is performed. Moreover, the decision tree is also run on data that have not been oversampled. The findings of these two experiments will be utilized to
Figure 7. Ratio of fraud.

Figure 8. Ratio of fraud after oversampling.
Table 2. Confusion matrix decision tree without SMOTE.

Class            Predictive Positive   Predictive Negative
Actual Positive  38782                 2211
Actual Negative  1746                  2595
Table 3. Confusion matrix decision tree with SMOTE.

Class            Predictive Positive   Predictive Negative
Actual Positive  38651                 2342
Actual Negative  1724                  2617
compare decision trees and demonstrate the classification outcomes of the SMOTE oversampling technique.

The decision tree without SMOTE yields precision of 53.2%, F1-score of 56.8%, accuracy of 90%, recall of 57.7%, and G-mean of 76.3%. Results from the confusion matrix of the decision tree without SMOTE are shown in Table 2.

The decision tree with SMOTE yields recall of 61.4%, precision of 90.5%, F1-score of 90.2%, and G-mean of 72.2%. Accuracy is 90%. Results from the confusion matrix of the decision tree with SMOTE are shown in Table 3.
Naive Bayes

The naive Bayes model test is done by preparing data that have already been handled during preprocessing. Following preprocessing, the data are oversampled, giving two sorts of data: data that have been oversampled and
Table 4. Confusion matrix naive Bayes without SMOTE.

Class            Predictive Positive   Predictive Negative
Actual Positive  40764                 229
Actual Negative  1993                  2348
Table 5. Confusion matrix naive Bayes with SMOTE.

Class            Predictive Positive   Predictive Negative
Actual Positive  40760                 233
Actual Negative  1988                  2353
Table 6. Confusion matrix random forest without SMOTE.

Class            Predictive Positive   Predictive Negative
Actual Positive  40881                 112
Actual Negative  1954                  2387
data that have not, and naive Bayes classification is performed on both. Through a side-by-side comparison of naive Bayes with and without the oversampling approach, the findings of these two experiments demonstrate the classification outcomes.

Without SMOTE, naive Bayes recall is 52.1%, precision is 90.2%, F1-score is 67.9%, and G-mean is 72.3%. Accuracy is 95%. Table 4 displays the results of the confusion matrix of naive Bayes without SMOTE.

Naive Bayes with SMOTE yields recall of 53.1%, precision of 93.8%, F1-score of 95.4%, and G-mean of 72.2%. Accuracy is 95%. Results from the confusion matrix of naive Bayes with SMOTE are shown in Table 5.
Random Forest

The random forest model trial is carried out by preparing data that have already been processed during the preprocessing step. After preprocessing, the data are oversampled before classification with random forest. Both oversampled and non-oversampled data are used in the random forest process. Using the SMOTE oversampling approach and the random forest comparison, the classification findings from these two studies are shown.

Without SMOTE, random forest recall is 54%, precision is 93.3%, F1-score is 62.7%, and G-mean is 73.1%. Accuracy is 95%. The results of the confusion matrix of random forest without SMOTE are shown in Table 6.


With SMOTE, random forest yields recall of 58.1%, precision of 80%, F1-score of 94.3%, and G-mean of 75.7%. Accuracy is 95%. The results of the confusion matrix of random forest with SMOTE are shown in Table 7.
Neural Network

Data that have previously undergone preprocessing are prepared for testing using the neural network
Table 7. Confusion matrix random forest with SMOTE.

Class            Predictive Positive   Predictive Negative
Actual Positive  40383                 610
Actual Negative  1820                  2521
Table 8. Confusion matrix neural network without SMOTE.

Class            Predictive Positive   Predictive Negative
Actual Positive  41113                 24
Actual Negative  1932                  2265
Table 9. Confusion matrix neural network with SMOTE.

Class            Predictive Positive   Predictive Negative
Actual Positive  38566                 2539
Actual Negative  9585                  31487
Figure 10. Recall result.

Figure 9. Accuracy result.
model. Following preprocessing, classification with oversampling using a neural network is performed on the data; the neural network is also run on data that have not been oversampled. The findings of these two experiments demonstrate how classification outcomes were attained using the neural network with and without the synthetic minority oversampling technique (SMOTE).

The neural network without SMOTE yields precision of 96.1%, F1-score of 95.1%, accuracy of 96%, recall of 56%, and G-mean of 74.5%. Results from the confusion matrix of the neural network without SMOTE are shown in Table 8.



The neural network with SMOTE yields recall of 76.7%, precision of 92.5%, F1-score of 85.1%, and G-mean of 82.4%. Accuracy is 85%. Table 9 displays findings from the confusion matrix of the neural network with SMOTE.

The accuracy numbers from experiments employing the various methods are displayed in Figure 9. The neural network algorithm has the best accuracy rating, 96%.

Recall values produced by tests using the different algorithms are displayed in Figure 10. When the machine learning algorithms are combined with SMOTE rather than using only decision
Figure 11. Precision result.
trees, random forests, naive Bayes, and neural networks alone, recall values increase more quickly. The neural network combined with SMOTE provided the biggest rise in recall values.

As displayed in Figure 11, results from tests using the different algorithms show that precision values decline when the machine learning algorithms are combined with SMOTE rather than used alone, with the most notable decline occurring when the neural network and SMOTE are used.
As can be seen in Figure 12, from experiments using the various algorithms, integrating machine learning algorithms with SMOTE results in higher F1-score values than using the algorithms alone. The F1-score evaluates the categorization of minority classes in imbalanced classes.

The G-mean value, which evaluates overall classification performance, rose when the machine learning algorithms were combined with SMOTE, as displayed in Figure 13.
Figure 12. F1-score result.

Figure 13. G-mean result.

CONCLUSION AND FUTURE WORK

A genetic algorithm can be used to determine the number of hidden nodes and layers, as well as to select the appropriate attributes for neural networks. The recall, F1-score, and G-mean values increased in the analysis when the SMOTE approach was used. Recall using neural networks rose from 52% to 74.6%, recall using naive Bayes rose from 41.2% to 41.3%, recall using random forests rose from 54% to 57%, and recall using decision trees rose from 57.7% to 62.3%.

With SMOTE, the F1-score increased for all machine learning techniques, rising from 69.8% to 85.1% for neural networks, 67.9% to 94.5% for naive Bayes, 69.8% to 94.3% for random forest, and 56.8% to 91.2% for decision trees.

In light of the findings of the aforementioned experiments, it was determined that SMOTE was able to improve the performance of neural networks, random forests, decision trees, and naive Bayes. It addresses the imbalance of the e-commerce fraud dataset by raising the G-mean and F1-scores of neural networks, decision trees, random forests,
and naive Bayes. This shows the viability of the SMOTE approach in improving the classification of imbalanced data.

Future research is anticipated to apply additional algorithms or deep learning to the detection of e-commerce fraud, as well as further investigation to increase the accuracy of the neural network employing the SMOTE approach.
REFERENCES

[1] Asosiasi Penyelenggara Jasa Internet Indonesia, "Magazine APJI (Asosiasi Penyelenggara Jasa Internet Indonesia)" (2019): 23 April 2018.
[2] Asosiasi Penyelenggara Jasa Internet Indonesia, "Mengawali integritas era digital 2019 – Magazine APJI (Asosiasi Penyelenggara Jasa Internet Indonesia)" (2019).
[3] Laudon, Kenneth C., and Carol Guercio Traver. E-commerce: business, technology, society. 2016.
[4] statista.com. Retail e-commerce revenue forecast from 2017 to 2023 (in billion U.S. dollars). (2018). Retrieved April 2018, from Indonesia: https://www.statista.com/statistics/280925/e-commerce-revenue-forecast-in-indonesia/
[5] Kiziloglu, M. and Ray, S., 2021. Do we need a second engine for Entrepreneurship? How well defined is intrapreneurship to handle challenges during COVID-19? In SHS Web of Conferences (Vol. 120). EDP Sciences.
[6] Roy, Abhimanyu, et al. "Deep learning detecting fraud in credit card transactions." 2018 Systems and Information Engineering Design Symposium (SIEDS). IEEE, 2018.
[7] Zhao, Jie, et al. "Extracting and reasoning about implicit behavioral evidences for detecting fraudulent online transactions in e-Commerce." Decision Support Systems 86 (2016): 109–121.
[8] Zhao, Jie, et al. "Extracting and reasoning about implicit behavioral evidences for detecting fraudulent online transactions in e-Commerce." Decision Support Systems 86 (2016): 109–121.
[9] Pumsirirat, Apapan, and Liu Yan. "Credit card fraud detection using deep learning based on auto-encoder and restricted boltzmann machine." International Journal of Advanced Computer Science and Applications 9.1 (2018): 18–25.
[10] Srivastava, Abhinav, et al. "Credit card fraud detection using hidden Markov model." IEEE Transactions on Dependable and Secure Computing 5.1 (2008): 37–48.
[11] Lakshmi, S. V. S. S., and S. D. Kavilla. "Machine Learning For Credit Card Fraud Detection System." International Journal of Applied Engineering Research 13.24 (2018): 16819–16824.
[12] Ray, S. and Leandre, D. Y., 2021. How Entrepreneurial University Model is changing the Indian COVID-19 Fight? Entrepreneur's Guide, 14(3), pp. 153–162.
[13] Bouktif, Salah, et al. "Optimal deep learning lstm model for electric load forecasting using feature selection and genetic algorithm: Comparison with machine learning approaches." Energies 11.7 (2018): 1636.
[14] Xuan, Shiyang, Guanjun Liu, and Zhenchuan Li. "Refined weighted random forest and its application to credit card fraud detection." International Conference on Computational Social Networks. Springer, Cham, 2018.
[15] Samrat, R., 2021. Why Entrepreneurial University Fails to Solve Poverty Eradication? Herald Tuva State University. No. 1 Social and Human Sciences, (1), pp. 35–43.
[16] Zhao, Jie, et al. "Extracting and reasoning about implicit behavioral evidences for detecting fraudulent online transactions in e-Commerce." Decision Support Systems 86 (2016): 109–121.
[17] Sharma, Shiven, et al. "Synthetic oversampling with the majority class: A new perspective on handling extreme imbalance." 2018 IEEE International Conference on Data Mining (ICDM). IEEE, 2018.
[18] Kim, Jaekwon, Youngshin Han, and Jongsik Lee. "Data imbalance problem solving for smote based oversampling: Study on fault detection prediction model in semiconductor manufacturing process." Advanced Science and Technology Letters 133 (2016): 79–84.
[19] Sadaghiyanfam, Safa, and Mehmet Kuntalp. "Comparing the Performances of PCA (Principle Component Analysis) and LDA (Linear Discriminant Analysis) Transformations on PAF (Paroxysmal Atrial Fibrillation) Patient Detection." Proceedings of the 2018 3rd International Conference on Biomedical Imaging, Signal Processing. ACM, 2018.
[20] Harrison, Paula A., et al. "Selecting methods for ecosystem service: A decision tree approach." Ecosystem Services 29 (2018): 481–498.
[21] Ray, S., 2021. Are Global Migrants At Risk? A Covid Referral Study of National Identity. In Transformation of identities: the experience of Europe and Russia (pp. 26–33).