International Journal of Advances in Applied Sciences (IJAAS) Volume 9, issue 3, Sep. 2020


ISSN: 2252-8814

IJAAS

International Journal of

Advances in Applied Sciences

International Journal of Advances in Applied Sciences (IJAAS) is a peer-reviewed and open access journal dedicated to publishing significant research findings in the field of applied and theoretical sciences. The journal is designed to serve researchers, developers, professionals, graduate students and others interested in state-of-the-art research activities in applied science, engineering and technology areas, covering topics including: industrial engineering, materials & manufacturing; mechanical, mechatronics & civil engineering; food, chemical & agricultural engineering; telecommunications, computer science, instrumentation, control, electrical & electronic engineering; and acoustic & music engineering.

Editor-in-Chief
Qing Wang, Shandong University of Science and Technology, China

Managing Editors
Chen-Yuan Chen, National Pingtung University of Education, Taiwan
Guangming Yao, Harbin Normal University, China
Habibolla Latifizadeh, West Virginia University, United States
Md. Shakhaoath Khan, RMIT University, Australia
Mohammad Hossein Ahmadi, Shahrood University of Technology, Iran

Editorial Board Members
Aabha Jain, Prestige Institute of Engineering Management and Research, India
Abdelhamid Bensafi, Abou Bekr Belkaid University of Tlemcen, Algeria
Ajit Behera, National Institute of Technology Rourkela, India
Alireza Heidari, California South University, United States
Andrews Jeyaraj, Sathyabama Institute of Science and Technology, India
B.V.S.N. Hari Prasad, Chaitanya Group of Colleges (Autonomous), India
Bing Yang, China State Shipbuilding Corporation, China
Eka Cahya Prima, Universitas Pendidikan Indonesia, Indonesia
EL Mahdi Ahmed Haroun, University of Bahri, Sudan
Fardin Dashty Saridarq, Technische Universiteit Eindhoven, Netherlands
Guruswamy Revana, Jawaharlal Nehru Technological University, India
Ho Soon Min, INTI International University, Malaysia
K.V.L.N. Acharyulu, Bapatla Engineering College, India
Kewen Zhao, Qiongzhou University, China
Kirk R. Smith, University of California, United States
Laith Ahmed Najam, College of Science Mosul University, Iraq
Matthew Vechione, The University of Texas at El Paso, United States
Mu-Song Chen, Da-Yeh University, Taiwan, Province of China
Özen Özer, Kırklareli University, Turkey
Prabang Setyono, University of Sebelas Maret, Indonesia
Rutuja Shivraj Pawar, Yeshwantrao Chavan College of Engineering, India
Shuaichen Ye, Beijing Institute of Technology, China
Shubham Sharma, Punjab Technical University, India
Siamak Hoseinzadeh, Islamic Azad University West Tehran Branch, Iran
Totok R Biyanto, Sepuluh Nopember Institute of Technology, Indonesia
Vahab Ghalandari, Shahid Bahonar University of Kerman, Iran
VPS Naidu, National Aerospace Laboratories India, India
Waleed Khalil Ahmed, United Arab Emirates University, United Arab Emirates
Yee-Loo Foo, Multimedia University, Malaysia
Yonathan Asikin, University of the Ryukyus, Japan

Published by:

Institute of Advanced Engineering and Science (IAES) Website: http://ijaas.iaescore.com Email: ijaas.iaes@gmail.com, info@iaesjournal.com


Information for Authors
International Journal of Advances in Applied Sciences (IJAAS) is a peer-reviewed and open access journal dedicated to publishing significant research findings in the fields of applied sciences, engineering and technology. The journal is designed to serve researchers, developers, professionals, graduate students and others interested in state-of-the-art research activities in applied science, engineering and technology areas, covering topics including: industrial engineering, materials & manufacturing; mechanical, mechatronics & civil engineering; food, chemical & agricultural engineering; telecommunications, computer science, instrumentation, control, electrical & electronic engineering; and acoustic & music engineering. Submission of a manuscript implies that it contains original work and has not been published or submitted for publication elsewhere. It also implies the transfer of the copyright from the author to the publisher and gives permission to reproduce any published material.

Paper Submission
You must prepare and submit your papers as a Word document (DOC or DOCX) file. For more detailed instructions please see: http://ijaas.iaescore.com/index.php/IJAAS/about/submissions#onlineSubmissions. You can download the IJAAS template at: http://iaescore.com/gfa/ijaas.docx. Manuscripts must be submitted through our online system: http://ijaas.iaescore.com. Once a manuscript has been successfully submitted via the online submission system, authors may track its status there. The manuscript will be subjected to a full review procedure and the decision whether to accept it will be taken by the Editor based on the reviews.

Ethics in publishing
For information on ethics in publishing and ethical guidelines for journal publication (including the necessity to avoid plagiarism and duplicate publication) see: http://ijaas.iaescore.com/index.php/IJAAS/about/editorialPolicies#sectionPolicies

Peer Review Process
This journal operates a conventional single-blind reviewing policy in which the reviewer's name is always concealed from the submitting author. Authors should present their papers honestly, without fabrication, falsification, plagiarism or inappropriate data manipulation. Submitted papers are evaluated by anonymous referees for contribution, originality, relevance, and presentation. Papers will be sent for anonymous review by at least two reviewers, who will either be members of the Editorial Board or others of similar standing in the field. In order to shorten the review process and respond quickly to authors, the Editors may triage a submission and come to a decision without sending the paper for external review. The Editor shall inform you of the results of the review as soon as possible, hopefully within 8 weeks. The Editors' decision is final and no correspondence can be entered into concerning manuscripts considered unsuitable for publication in this journal. All correspondence, including notification of the Editors' decision and requests for revisions, will be sent by email.


IJAAS

International Journal of

Advances in Applied Sciences

Characterization for the necessity of thermophilic biogas digester of tea waste and cooked waste for biogas production
Nirmal Halder

159-170

Fault analysis in power system using power systems computer aided design
Amanze Chukwuebuka Fortune, Amanze Destiny Josiah

171-179

Two bio-inspired algorithms for solving optimal reactive power problem Lenin Kanagasabai

180-185

Real power loss reduction by hyena optimizer algorithm Lenin Kanagasabai

186-191

Effect of heating temperature on quality of bio-briquette empty fruit bunch fiber Nofriady Handra, Anwar Kasim, Gunawarman, Santosa

192-200

Evidential reasoning based decision system to select health care location Md. Mahashin Mia, Atiqur Rahman, Mohammad Shahadat Hossain

201-210

Spectroscopic properties of lithium borate glass containing Sm3+ and Nd3+ ions I. Kashif, A. Ratep, S. Ahmed

211-219

Method for cost-effective trans aortic valve replacement device prototyping Angelique Oncale, Charles Taylor, Erika Louvier, G. H. Massiha

220-226

A comparison of the carbon footprint of pavement infrastructure and associated materials in Indiana and Oklahoma Rachel D. Mosier, Sanjeev Adhikari, Saurav K. Mohanty

227-239

A study secure multi authentication based data classification model in cloud based system
Sakshi Kaushal, Bala Buksh

240-254

Responsibility for the contents rests upon the authors and not upon the publisher or editors

IJAAS

Vol. 9

No. 3

pp. 159-254

September 2020

ISSN 2252-8814



International Journal of Advances in Applied Sciences (IJAAS) Vol. 9, No. 3, September 2020, pp. 159~170 ISSN: 2252-8814, DOI: 10.11591/ijaas.v9.i3.pp159-170


Characterization for the necessity of thermophilic biogas digester of tea waste and cooked waste for biogas production

Nirmal Halder
Department of Mechanical Engineering, Indian Institute of Technology Kanpur, India

Article Info

ABSTRACT

Article history:

Characterization of tea waste and cooked waste has been done by various authors, but for the first time it is done here to understand the necessity of thermophilic digestion, which takes place in a thermophilic digester for efficient biogas production. A detailed morphological analysis of the feedstock has been carried out, along with thermogravimetric analysis. For easy and fast digestion of cooked waste, a novel design of thermophilic digester is proposed and tested.

Received Dec 9, 2019
Revised Mar 4, 2020
Accepted Apr 24, 2020

Keywords:
Biogas
Cooked waste
Tea waste
Thermophilic digester

This is an open access article under the CC BY-SA license.

Corresponding Author: Nirmal Halder, Department of Mechanical Engineering, Indian Institute of Technology Kanpur, India. Email: subho.nirmal@gmail.com

1. INTRODUCTION
Bioenergy production from local bioresources has great potential. It is important for reducing dependency on fossil fuels and decreasing greenhouse gas emissions. Tea plants (Camellia sinensis) are commonly grown in the north-east region of India. High-quality tea is harvested from the three top leaves of the shoot in the tea garden. While tea producers cut the top tea leaves with special tea shears, some overgrown woody shoots, which may include six to seven top leaves, are mixed into the tea harvest. During the tea production procedure these woody overgrown shoots are not treated by the tea factory and become tea waste. Tea manufacturing industries throw out large quantities of waste tea daily as leftovers. According to the Tea Waste (Control) Order, 1959, every tea manufacturing unit should declare at least 2% of its production as tea waste [1]. Approximately 857,000 tonnes of tea is produced in India per year, which is 27.4% of total world production. After processing, tea factory waste is about 190,400 tonnes [2]. Assam, situated in the north-east region of India, is the single largest contiguous tea-growing area in the world and as such is the hub for bulk production of tea waste. Cooked waste is another high-moisture-content waste produced in bulk in every locality. Anaerobic digestion has been suggested as a promising technique for pollution reduction when used for energy production from waste [3-7]. In the present study the potential of cooked waste for biogas production has been assessed and compared with that of cow dung. Sung et al. [8] evaluated both chronic and acute toxicity of ammonia in thermophilic anaerobic digestion of synthetic wastewater over a range of acclimation concentrations. Gavala et al. [9] experimentally investigated (a) the differences between mesophilic and thermophilic anaerobic digestion of sludge and (b) the effect of pretreatment at 70 °C on mesophilic and thermophilic anaerobic digestion of



primary and secondary sludge. Demirel et al. [10] found that even though ammonia is an essential nutrient for bacterial growth, it may inhibit methanogenesis during the anaerobic digestion process if it is available at high concentrations. The tolerance of the biogas process to ammonia toxicity under supply of hydrogen was studied under mesophilic and thermophilic conditions by Wang et al. [11]. Ahring et al. [12] found that oleate is the free fatty acid that influences bacterial activity under mesophilic and thermophilic conditions. Zeshan et al. [13] studied the effect of ammonia-N accumulation in dry anaerobic digestion using a pilot-scale thermophilic reactor. Biomass samples were taken during the continuous operation of thermophilic anaerobic digesters fed with manure and exposed to successive inhibitory pulses of long-chain fatty acids by Palatsi et al. [14].

2. RESEARCH METHOD
Three samples (tea waste, cooked waste and cow dung) are used in the present work. The surface characteristics of the biomasses are analyzed using a scanning electron microscope (SEM). The heating values of the samples are measured by combustion in an adiabatic oxygen bomb calorimeter as per IP-12 and IS-1305. Thermogravimetric analysis (TGA) is performed on a Mettler TGA/SDTA 851 thermogravimetric analyzer. COD values of the samples are determined by the modified open reflux method (APHA-AWWA-WPCF) as described by Yadvika et al. [15], which is suitable for samples with a high percentage of suspended solids. The biological oxygen demand (BOD) determination is an empirical test in which standardized laboratory procedures are used to determine the relative oxygen requirements of waste waters, effluents and polluted waters. The following equations are used for the proximate, heating value, COD and BOD analyses.

Percentage of total solids = [{(weight of dry pan + dry sample) - weight of dry pan} / weight of sample as received] x 100 (1)

Percentage of moisture = 100 - percentage of total solids (2)

Percentage of ash = [{(weight of crucible + ash) - weight of crucible} / oven-dry weight of sample] x 100 (3)

Volatile content = [(m2 - m3) / (m2 - m1)] x 100 (4)

where m1 is the mass of the empty crucible with lid, m2 is the mass of the crucible with lid and sample before heating, and m3 is the mass of the crucible with lid and sample after heating.

Fixed carbon = 100 - ash - moisture - volatiles (5)

Percentage of lignin = %AIL + %ASL (6)

where AIL is acid-insoluble lignin and ASL is acid-soluble lignin.

Heating value = (WE x temperature rise) / sample weight (7)

where WE is the water equivalent (= 2568.293) and the temperature rise is the differential temperature.

Chemical oxygen demand (COD), in mg/l = {(A - B) x M x 8000} / V (8)

where A is the volume of blank titrant (ml), B is the volume of sample titrant (ml), and M is the molarity of the FAS solution:

M = [volume of 0.04167 M K2Cr2O7 solution titrated (ml) / volume of FAS used in titration (ml)] x 0.2500 (9)

8000 = milliequivalent weight of oxygen x 1000 ml/l (10)

V = (Y x X1) / X (11)

where Y is the mass of the dry solid sample used for analysis, X1 is the volume of the original slurry sample used for drying, and X is the solid content of X1 ml of the slurry sample. Thus V is the volume in ml of original slurry that would have contained Y grams of dry solids.
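Equations (1)-(7) amount to a handful of arithmetic relations; a minimal Python sketch follows (function and argument names are my own, not from the paper, and the inputs are hypothetical illustration values, not measurements from this study):

```python
# Proximate analysis and heating value, following equations (1)-(7).
# All weights in grams; the temperature rise is in the calorimeter's units.

def total_solid_pct(dry_pan_plus_sample, dry_pan, sample_as_received):
    """Eq. (1): percentage of total solids."""
    return (dry_pan_plus_sample - dry_pan) / sample_as_received * 100

def moisture_pct(total_solid):
    """Eq. (2): moisture is the complement of total solids."""
    return 100 - total_solid

def ash_pct(crucible_plus_ash, crucible, oven_dry_sample):
    """Eq. (3): percentage of ash on an oven-dry basis."""
    return (crucible_plus_ash - crucible) / oven_dry_sample * 100

def volatile_pct(m1, m2, m3):
    """Eq. (4): m1 = empty crucible + lid, m2 = before heating, m3 = after heating."""
    return (m2 - m3) / (m2 - m1) * 100

def fixed_carbon_pct(ash, moisture, volatiles):
    """Eq. (5): fixed carbon by difference."""
    return 100 - ash - moisture - volatiles

def lignin_pct(ail, asl):
    """Eq. (6): acid-insoluble plus acid-soluble lignin."""
    return ail + asl

def heating_value(temp_rise, sample_weight, water_equivalent=2568.293):
    """Eq. (7): bomb-calorimeter heating value."""
    return water_equivalent * temp_rise / sample_weight
```

For example, a sample whose moisture works out to 44.99% by eq. (2) corresponds to the cooked-waste value reported in Section 3.1.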


Biological oxygen demand (BOD), mg/l = (D1 - D2) / P (12)

where D1 is the dissolved oxygen (DO) of the diluted sample immediately after preparation in mg/l, D2 is the DO of the diluted sample after 5 days of incubation at 20 °C in mg/l, and P is the decimal volumetric fraction of sample used.
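The COD and BOD relations (8)-(12) can be sketched the same way; the convention below takes D1 as the initial and D2 as the final dissolved oxygen, so the 5-day depletion D1 - D2 is positive (function names and example values are illustrative, not from the paper):

```python
def fas_molarity(vol_k2cr2o7_ml, vol_fas_ml):
    """Eq. (9): molarity of the FAS titrant from standardization."""
    return vol_k2cr2o7_ml / vol_fas_ml * 0.2500

def slurry_volume_ml(dry_solid_g, slurry_sample_ml, solid_content_g):
    """Eq. (11): volume of original slurry containing the given dry solids."""
    return dry_solid_g * slurry_sample_ml / solid_content_g

def cod_mg_per_l(blank_titrant_ml, sample_titrant_ml, molarity, volume_ml):
    """Eq. (8): COD; 8000 = milliequivalent weight of oxygen x 1000 ml/l."""
    return (blank_titrant_ml - sample_titrant_ml) * molarity * 8000 / volume_ml

def bod_mg_per_l(do_initial, do_after_5_days, dilution_fraction):
    """Eq. (12): oxygen depleted over 5 days divided by the dilution fraction."""
    return (do_initial - do_after_5_days) / dilution_fraction
```

A hypothetical dilution with initial DO 8.0 mg/l, final DO 4.0 mg/l and a 2% sample fraction would give a BOD of 200 mg/l.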

3. RESULTS AND DISCUSSION
3.1. Moisture content
Moisture content increases the crystallinity of cellulose, as described by Taherzadeh et al. [16]. Due to the crystallization of highly amorphous cellulose, the hydrolysis process, which is a very prominent step for biogas production, becomes very difficult. So higher moisture content indicates lower biodegradability; in other words, it increases the time required for digestion and hence for biogas production. Figure 1 shows that cooked waste has very high moisture content (44.99%) compared to tea waste (12.06%) and cow dung (9.88%). So instead of an ordinary mesophilic biogas digester, an efficient thermophilic biogas digester is needed, which will decrease the time required for the digestion of cooked waste and tea waste.

Figure 1. Moisture content

3.2. Fixed carbon content
Fixed carbon is the solid combustible residue that remains after a biomass particle is heated and the volatile matter is expelled. The fixed-carbon content of a biomass is determined by subtracting the percentages of moisture, volatile matter and ash from a sample. Since gas-solid combustion reactions are slower than gas-gas reactions, a high fixed-carbon content indicates that the biomass will require a long time to produce combustible biogas, while at the same time producing a larger amount of it. For this reason cow dung produces combustible biogas faster than cooked waste, which requires a long time for digestion. The fixed carbon content is higher in tea waste (18.31%) and cooked waste (15.08%) than in cow dung (11.57%), as depicted in Figure 2. So an efficient thermophilic digester needs to be designed and developed to produce combustible biogas from cooked waste and tea waste in less time: thermophilic digestion requires less time to produce combustible biogas than the mesophilic digestion process. Hence for faster digestion of cooked waste, thermophilic digestion is preferred over mesophilic digestion.

Figure 2. Fixed carbon content



3.3. Lignin content
The effect of lignin content on biodegradability appears to be linear: for every one percent increase in lignin, the biodegradability drops by about 3% [17]. It is observed from the lignin content analysis that an increase in the lignin content of a biomass increases the time required for digestion. The lignin content in cooked waste and tea waste is nearly identical (28.56%), which is higher than in cow dung (19.8%), as shown in Figure 3. From the lignin content analysis it is quite explicit that here also a digester in which thermophilic digestion takes place needs to be developed for proper and quicker digestion of cooked waste.

Figure 3. Lignin content

3.4. SEM (scanning electron microscopy) analysis
Lignocellulosic materials have two different types of surface area, external and internal. The external surface area is related to the size and shape of the particles, while the internal surface area depends on the capillary structure of the cellulosic fibers. There is a good correlation between the accessible surface area and the enzymatic digestibility of lignocellulosic materials [18]. Figure 4 shows the external surface morphology of the feedstock. It is observed that the surface morphology of cow dung is a highly porous mass with easy accessibility for digestion. The morphology of cooked waste looks like a compact globular mass of multiple units with hollow grooves and minute pores, which increases the difficulty of digestion. The morphology of tea waste is observed as a mass of compact fibers with tunnels, intermittent grooves and minute pores. From the SEM analysis it is observed that cow dung is easily digestible, but for proper and quicker digestion of cooked waste a thermophilic biogas digester has to be designed and developed.




Figure 4. SEM images of (a) tea waste, (b) cow dung, and (c) cooked waste

3.5. TGA (thermogravimetric analysis)
TGA of the feedstock was performed to determine the temperature points and ranges where devolatilization of the biomass occurs, which provides qualitative and quantitative information regarding the organic content of the sample [19]. The higher the temperature at which weight loss occurs, the more resistant is the organic fraction that is burning [20]. The TGA curves are presented in terms of the percentage weight loss experienced by the sample in Figure 5. The first weight loss, registered at low temperatures, is associated with dehydration of the samples. A broad curve is observed for cooked waste, followed by tea waste, due to its high moisture content as mentioned earlier. Thus, to increase the enzymatic digestibility of cooked waste, its moisture content would have to be reduced by an appreciable amount, and this reduction is possible only with an efficient thermophilic-type biogas digester.

Figure 5. TGA profile

3.6. Heating value
The heating value of a biomass depends on the amount of organic content; it is higher where lower resistance is offered by the organic fraction. As discussed earlier, resistance arises from the moisture, hemicellulose, cellulose and lignin content and the inaccessible internal surface area. The heating value is higher for cooked waste (15.265 kJ/g) and tea waste (27.62 kJ/g) than for cow dung (8.922 kJ/g), as shown in Figure 6. From the above heating value analysis we can conclude that for the complete and quicker digestion of cooked waste, thermophilic digestion has to be preferred. For thermophilic digestion an innovative thermophilic digester is chosen.




Figure 6. Heating value

3.7. COD value
It is found that cooked waste has the highest COD value, followed by tea waste and cow dung, as indicated in Figure 7. Cow dung consists of simple monomers, which are easily biodegradable compared to cooked waste; cooked waste and tea waste consist of hemicellulose, cellulose and lignin. Hence, the high COD of cooked waste indicates its potential for biogas production, and for its digestion an efficient thermophilic biogas digester needs to be designed and developed.

Figure 7. COD content

3.8. Necessity of novel design of thermophilic digester
From the above analyses of moisture content, fixed carbon content, heating value, TGA, COD, lignin content and scanning electron microscopy (SEM), it is very clear that cooked waste and tea waste require a long time to produce combustible biogas and are not easily digestible compared to cow dung. For efficient biogas production from tea waste and cooked waste, thermophilic digestion must be maintained. The design parameters for the thermophilic biogas digester are: active slurry, volume of digester, gas production rate, amount of feed material fed into the digester, total initial feeding, and slurry displacement volume. With the above-mentioned design parameters a thermophilic biogas digester is developed. The digestion of manure occurs in four basic stages:
First step: the organic matter (carbohydrates, proteins, lipids) is hydrolysed to soluble compounds (amino acids and sugars) with the aid of cellulolytic, proteolytic and lipolytic bacteria.
Second step: the soluble compounds (amino acids and sugars) are fermented into volatile fatty acids in the presence of fermentative bacteria.
Third step: acetogenesis forms hydrogen, carbon dioxide and acetate from the fatty acids with the help of hydrogen-producing bacteria.
Fourth step: methanogenic bacteria produce biogas (consisting of methane and carbon dioxide) from acetate and hydrogen by the methanogenesis process.
Mesophilic and thermophilic digestion are compared with cow dung, tea waste and cooked waste as feed (cow dung : cooked waste = 1:1 in quantity) [21]. Thermophilic digestion operates at a higher temperature than mesophilic digestion [21]. The production of combustible gas ranges from 0.005904 m3 to 0.006232 m3 for mesophilic digestion and from 0.0072 m3 to 0.0076 m3 for thermophilic digestion, so an increase of almost 22% in gas production is observed for thermophilic digestion. Although mesophilic digestion gives biogas with a higher methane content than thermophilic digestion, the lower digestion temperature produces a smaller amount of biogas. For the accomplishment of the hydrolysis process (the first step of digestion) thermophilic digestion takes more time, so it takes more time to produce methane and carbon dioxide. That is why combustible gas production starts from the 11th day onwards for mesophilic digestion, whereas for thermophilic digestion it starts from the 14th day onwards. The comparison of per-day gas production for mesophilic and thermophilic digestion, with cow dung and cooked waste in the ratio 1:1 as feed material, is shown in Figure 8(a); the corresponding comparison with cow dung and tea waste in the ratio 1:1 is shown in Figure 8(b). For the tea waste mixture, combustible gas production starts from the 8th day onwards for mesophilic digestion, whereas for thermophilic digestion it starts from the 10th day onwards. In conclusion, although mesophilic digestion produces a smaller amount of biogas, the gas becomes combustible sooner than in thermophilic digestion due to its higher methane content. The effect of varying the quantities of cooked waste and tea waste on thermophilic digestion is examined next [21].
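The roughly 22% increase quoted above follows directly from the reported production ranges; a quick check in Python (range endpoints as given in the text):

```python
# Combustible gas production ranges (m^3) reported for the 1:1 mixture.
meso = (0.005904, 0.006232)
thermo = (0.0072, 0.0076)

# Relative gain of thermophilic over mesophilic at each end of the range.
low_gain = (thermo[0] - meso[0]) / meso[0] * 100
high_gain = (thermo[1] - meso[1]) / meso[1] * 100
print(f"gain: {low_gain:.1f}% to {high_gain:.1f}%")  # both close to 22%
```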

Figure 8. Comparison between mesophilic and thermophilic digestion with (a) cow dung and cooked waste (cow dung : cooked waste = 1:1 in quantity), (b) cow dung and tea waste (cow dung : tea waste = 1:1 in quantity)

The performance of thermophilic digestion is inspected by varying the amount of cooked waste and tea waste in the feed material, observing the effect of a gradual increment in cooked waste. For the sake of simplicity, various cases have been configured. In the first case (case 1) the cooked waste quantity is zero. In case 2 the same amount of cooked waste and cow dung is taken for the mixture, i.e. the proportion of cooked waste to cow dung is 1:1. In case 3 the cooked waste quantity is 2 times that of cow dung, i.e. the proportion is 2:1. In case 4 the proportion of cooked waste to cow dung is 2.5:0.5, as shown in Figure 9(a). For case 4(a) the tea waste quantity is zero; case 4(a) is identical to case 1 in respect of waste quantity. In case 5 the same amount of tea waste and cow dung is taken for the mixture, i.e. the proportion of tea waste to cow dung is 1:1. In case 6 the tea waste quantity is 2 times that of cow dung, i.e. the proportion is 2:1. In case 7 the proportion of tea waste to cow dung is 2.5:0.5, as shown in Figure 9(b).



For case 1 the biogas production range is from 0.0042 m3 to 0.00455 m3. For case 2 it is 0.0072 m3 to 0.0076 m3, while for case 3 it is 0.0082 m3 to 0.0086 m3, and for case 4 it is 0.008750 m3 to 0.009110 m3. For case 4(a) the biogas production range is from 0.0042 m3 to 0.00455 m3. For case 5 it is 0.0062 m3 to 0.0065 m3, while for case 6 it is 0.0072 m3 to 0.0075 m3, and for case 7 it is 0.0077 m3 to 0.0080 m3. Comparing biogas production between cases 1 and 2, an increment of nearly 70% is noticed for case 2, while an increment of nearly 13% is noticed for case 3 compared to case 2; comparing cases 3 and 4, an increment of almost 6% in gas production is noticed for case 4. Due to the oil and high moisture content of cooked waste, the time for accomplishment of the hydrolysis process (the first step of digestion) is higher for thermophilic digestion, so thermophilic digestion takes more time to produce combustible biogas, as shown in Figure 10. As the quantity of cooked waste in the feed material is increased, the time required for combustible biogas production increases; less time is required where a smaller amount of cooked waste is present in the feedstock. For case 1, where thermophilic digestion takes place in the absence of cooked waste, combustible gas production starts from the 12th day onwards; for case 2 (cooked waste : cow dung = 1:1 in quantity) from the 14th day onwards; for case 3 (cooked waste : cow dung = 2:1) from the 15th day onwards; and for case 4 (cooked waste : cow dung = 2.5:0.5) from the 16th day onwards. For case 4(a), where thermophilic digestion takes place in the absence of tea waste, combustible gas production starts from the 12th day onwards; for case 5 (tea waste : cow dung = 1:1) from the 10th day onwards; for case 6 (tea waste : cow dung = 2:1) from the 11th day onwards; and for case 7 (tea waste : cow dung = 2.5:0.5) from the 12th day onwards. The comparison of per-day gas production due to changes in feed material (cooked waste and cow dung) for thermophilic digestion is shown in Figure 9(a), and the corresponding comparison for tea waste and cow dung in Figure 9(b); the time required for combustible biogas production in thermophilic digestion, for the variation in cooked waste and tea waste quantity, is shown in Figure 10.
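The case-to-case increments quoted above can be reproduced from the reported ranges; a sketch using the midpoint of each range (so the numbers land near, not exactly on, the rounded figures in the text):

```python
# Biogas production ranges (m^3) for cooked-waste cases 1-4, from the text.
ranges = {
    1: (0.0042, 0.00455),
    2: (0.0072, 0.0076),
    3: (0.0082, 0.0086),
    4: (0.008750, 0.009110),
}
mid = {case: sum(r) / 2 for case, r in ranges.items()}

def gain_pct(a, b):
    """Percentage increase of case b over case a, midpoint basis."""
    return (mid[b] - mid[a]) / mid[a] * 100

print(f"case 1 -> 2: {gain_pct(1, 2):.1f}%")  # ~69%, text: "nearly 70%"
print(f"case 2 -> 3: {gain_pct(2, 3):.1f}%")  # ~13.5%, text: "nearly 13%"
print(f"case 3 -> 4: {gain_pct(3, 4):.1f}%")  # ~6%
```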

Figure 9. Comparison of feed material in thermophilic digestion: (a) cooked waste (CW) and cow dung (CD), (b) tea waste (TW) and cow dung (CD)




Figure 10. Time requirement for combustible biogas production in thermophilic digestion for the variation in cooked waste, tea waste, cow dung proportion

Simulation has been done with the help of ANSYS: the mesh is created in ICEM CFD and the simulation is performed in FLUENT adopting the k-omega SST turbulence model. An unsteady simulation is carried out considering the same geometry as explained in [21]. Unsteady simulated data (instantaneous and time-averaged quantities) are taken after reaching a dynamic steady state, as explained in Figure 11 [22]. Various instantaneous and time-averaged quantities are obtained within one cycle time period at four time instants (see Figures 13-20). Through fast Fourier transformation the Strouhal number, taken here as the reciprocal of the cycle time period, is obtained, as depicted in Figure 12. The instantaneous x velocity, y velocity, pressure and Q criterion are plotted in Figures 13, 14, 15 and 16 respectively; the time-averaged x velocity, y velocity, pressure and Q criterion are plotted in Figures 17, 18, 19 and 20 respectively. From the mean y-velocity and pressure distributions it is quite clear that the maximum y velocity and maximum pressure exist well above the outlet, inside the digester. The Q criterion with the vector distribution shows that the generated biogas has a swirling motion inside the digester. To break this swirling motion, a paddle-type arrangement rotating inside the digester is needed.
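The frequency-extraction step described above can be sketched with NumPy; a synthetic sine signal stands in for the FLUENT probe data, and the 50 Hz frequency, the characteristic length L and the velocity U are arbitrary illustration values, not quantities from the paper:

```python
import numpy as np

# Synthetic probe signal standing in for the monitored x velocity at (1, 1, 0).
dt = 1e-3                                 # sampling interval, s
t = np.arange(0.0, 2.0, dt)
signal = np.sin(2 * np.pi * 50.0 * t)     # assume a 50 Hz shedding cycle

# FFT of the fluctuating part; the dominant peak gives the cycle frequency.
amp = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(len(signal), d=dt)
f_dominant = freqs[np.argmax(amp)]        # dominant frequency, Hz
period = 1.0 / f_dominant                 # cycle time period, s

# Nondimensionalization St = f * L / U for a characteristic length L and
# velocity U (both hypothetical; the paper does not list its values).
L, U = 0.1, 1.0
strouhal = f_dominant * L / U
```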

Figure 11. Signal of x velocity at the location (1, 1, 0)

Figure 12. Fast Fourier transform




Figure 13. Instantaneous x velocity at four time instants: (a) t = ta, (b) t = ta+1, (c) t = ta+2, (d) t = ta+3

Figure 14. Instantaneous y velocity at four time instants: (a) t = ta, (b) t = ta+1, (c) t = ta+2, (d) t = ta+3

Figure 15. Instantaneous pressure at four time instants: (a) t = ta, (b) t = ta+1, (c) t = ta+2, (d) t = ta+3

Figure 16. Instantaneous Q criterion at four time instants: (a) t = ta, (b) t = ta+1, (c) t = ta+2, (d) t = ta+3



Figure 17. Time-averaged x velocity

Figure 18. Time-averaged y velocity

Figure 19. Time-averaged pressure

Figure 20. Time-averaged Q-criterion with velocity vectors


4. CONCLUSION
Characterization of cooked waste and tea waste was carried out, including moisture content, fixed carbon content and lignin content, and it was observed that these wastes (cooked waste and tea waste) require a long time to produce combustible biogas and are not as easily digestible as cow dung. Hence, for faster digestion, a new thermophilic digester design was proposed and built. A performance analysis was also carried out on this multi-feed thermophilic digester using cooked waste, tea waste and cow dung as feedstock. It was observed that both cooked waste and tea waste produce more biogas than cow dung.

REFERENCES
[1] B. Singh, "Assam wants tea waste sale through auction centre," ET Bureau, 2009.
[2] K. L. Wasewar, A. Mohammad, B. Prasad, and I. M. Mishra, "Batch adsorption of zinc on tea factory waste," vol. 244, pp. 66-71, 2009.
[3] J. J. H. Huang and J. C. H. Shih, "The potential of biological methane generation from chicken manure," Biotechnol. Bioeng., vol. 23, pp. 2307-2314, 1981.
[4] Ch. Aubart and F. Bully, "Anaerobic digestion of rabbit wastes and pig manure mixed with rabbit wastes in various experimental conditions," Agric. Wastes, vol. 10, pp. 1-13, 1984.
[5] D. J. Hills and P. Ravishanker, "Methane gas from high solids digestion of poultry manure and wheat straw," Poultry Sci., vol. 63, pp. 1338-1345, 1984.
[6] A. Demeyer, F. Jacob, M. Jay, G. Menguy, and J. Perrier, "La Conversión Bioenergética de la Radiación Solar y las Biotecnologías," Alhambra, 1985.
[7] A. G. Lane, "Methane production and waste management by anaerobic digestion," ASEAN Food J., vol. 1, pp. 55-61, 1985.
[8] Sung et al., "Ammonia inhibition on thermophilic anaerobic digestion," Chemosphere, vol. 53, pp. 43-52, 2003.
[9] Gavala et al., "Mesophilic and thermophilic anaerobic digestion of primary and secondary sludge: effect of pretreatment at elevated temperature," Water Research, vol. 37, pp. 4561-4572, 2003.




[10] Demirel et al., "Ammonia inhibition in anaerobic digestion: a review," Process Biochemistry, vol. 48, pp. 901-911, 2013.
[11] Wang et al., "Ammonia inhibition on hydrogen enriched anaerobic digestion of manure under mesophilic and thermophilic conditions," Water Research, vol. 105, pp. 314-319, 2016.
[12] Ahring et al., "Effect of free long chain fatty acids on thermophilic anaerobic digestion," Applied Microbiology and Biotechnology, vol. 37, pp. 808-812, 1992.
[13] Zeshan et al., "Effect of C/N ratio and ammonia-N accumulation in a pilot-scale thermophilic dry anaerobic digester," Bioresource Technology, vol. 113, pp. 294-302, 2012.
[14] Palatsi et al., "Long-chain fatty acids inhibition and adaptation process in anaerobic thermophilic digestion: batch tests, microbial community structure and mathematical modelling," Bioresource Technology, vol. 101, pp. 2243-2251, 2010.
[15] Yadvika, A. K. Yadav, T. R. Sreekrishnan, S. Satya, and S. Kohli, "A modified method for estimation of chemical oxygen demand for samples having high suspended solids," Bioresource Technology, vol. 97, pp. 721-726, 2006.
[16] M. J. Taherzadeh and K. Karimi, "Pretreatment of lignocellulosic wastes to improve ethanol and biogas production: a review," Int. J. Mol. Sci., vol. 9, pp. 1621-1651, 2008.
[17] B. T. Nijaguna, Biogas Technology, New Age Publications, New Delhi, 2002.
[18] Fan et al., "A large terrestrial carbon sink in North America implied by atmospheric and oceanic carbon dioxide data and models," vol. 282, no. 5388, pp. 442-446, 1980.
[19] P. Melis and P. Castaldi, "Thermal analysis for the evaluation of the organic matter evolution during municipal solid waste aerobic composting process," Thermochim. Acta, vol. 413, pp. 209-214, 2004.
[20] M. Otero, L. F. Calvo, B. Estrada, A. I. García, and A. Morán, "Thermogravimetry as a technique for establishing the stabilization progress of sludge from wastewater treatment plants," Thermochim. Acta, vol. 389, pp. 121-132, 2002.
[21] N. Halder, "Thermophilic biogas digester for efficient biogas production from cooked waste and cow dung and some field study," International Journal of Renewable Energy Research, vol. 7, pp. 1062-1073, 2017.
[22] ANSYS FLUENT User's Guide.



International Journal of Advances in Applied Sciences (IJAAS) Vol. 9, No. 3, September 2020, pp. 171~179 ISSN: 2252-8814, DOI: 10.11591/ijaas.v9.i3.pp171-179


Fault analysis in power system using power systems computer aided design

Amanze Chukwuebuka Fortune1, Amanze Destiny Josiah2

1Department of Electrical Engineering, University of Nigeria, Nigeria
2Department of Petroleum Engineering, University of Uyo, Nigeria

Article Info

Article history:
Received Dec 6, 2019
Revised Mar 4, 2020
Accepted Apr 25, 2020

Keywords:
Circuit breaker
Fault current
Fault voltage
Power system
PSCAD

ABSTRACT

This work presents a fault analysis simulation model of an IEEE 30-bus system in a distribution network. It analyses the effect of fault current and fault voltage in a distribution system. A circuit breaker was introduced into the system to neutralize the effect of the fault. The system was simulated in PSCAD software and results were obtained. The system was monitored based on the start and end times of the fault and on how the circuit breaker responds at those times. The fault occurred from 0.100 to 0.300 seconds before it was removed. While the fault was not applied (i.e., from 0.000 to 0.100 and from 0.300 to 0.72 seconds), the circuit breaker was closed; it opened when the fault was applied so as to cut off current flow through the line. The results show the disruption caused by the fault and the quick response of the circuit breaker in neutralizing it. Results are given both for the closed breaker with no fault applied and for the open breaker under fault. This work shows that circuit breakers are essential for system protection and reliability.

This is an open access article under the CC BY-SA license.

Corresponding Author:
Amanze Chukwuebuka Fortune,
Department of Electrical Engineering, University of Nigeria,
Nsukka - Onitsha Rd, Nsukka, Nigeria.
Email: fortuneamanze@gmail.com

1. INTRODUCTION
Fault occurrence in power systems is inevitable. Hence, in every power system design procedure, these faults are considered and analyzed, their likelihood of occurrence is determined, and the best ways to handle them are identified so as to improve system stability, ruggedness, maintenance cost and reliability. Fault analysis of a power system is required in order to provide information for the selection of switchgear and circuit breakers and for the setting of relays to be used in power system protection. A power system is not static but changes during operation (switching on or off of generators and transmission lines) and during planning (addition of generators and transmission lines). Thus, fault studies need to be performed routinely by utility engineers [1-3]. Faults usually occur in a power system due to insulation failure, lightning flashover, physical damage or human error. These faults may either be three-phase in nature, involving all three phases in a symmetrical manner, or asymmetrical, where usually only one or two phases are involved. Faults may also be caused by short circuits to earth or between live conductors, or by broken conductors in one or more phases. Sometimes simultaneous faults may occur, involving both a short-circuit fault and a broken-conductor fault (also known as an open-circuit fault). Balanced three-phase faults may be analysed using



an equivalent single-phase circuit. With asymmetrical three-phase faults, the use of symmetrical components helps to reduce the complexity of the calculations, since transmission lines are by and large symmetrical even though the fault may be asymmetrical (i.e., it does not affect the lines it occurs on in the same way) [4]. During the operation of power systems, it is desirable to switch the various circuits (e.g., transmission lines, distributors, generating plants) on or off under both normal and abnormal conditions. In earlier days, this function was performed by a switch and a fuse placed in series with the circuit [5, 6]. However, such means of control present two disadvantages. Firstly, when a fuse blows, it takes quite some time to replace it and restore power supply to the customers [7]. Secondly, a fuse cannot successfully interrupt the heavy fault currents that result from modern high-voltage, high-capacity circuits [8]. Due to these disadvantages, the use of switches and fuses is limited to low-voltage, small-capacity circuits where frequent operations are not expected, e.g., for the switching and protection of distribution transformers, lighting circuits, and branch circuits of distribution lines [9-17]. With the advancement of technology, devices that handle faults better and also switch faster have been developed. Circuit breakers are designed to manually or automatically switch high-voltage, high-capacity power components. A circuit breaker can make or break a circuit automatically under fault conditions, or may be opened manually or by remote control whenever desired under no-load, full-load or short-circuit conditions. When the contacts of a circuit breaker are separated under fault conditions, an arc is struck between them, and the fault current continues to flow until the discharge ceases.
The production of an arc not only delays current interruption but also generates enormous heat, which may damage the system or the circuit breaker itself. Therefore, the main problem in a circuit breaker is to extinguish the arc within the shortest possible time so that the heat generated by it does not reach a dangerous value [18]. Circuit breakers may vary in the type of mechanism or material used for arc quenching, in the voltage at which they can operate safely, in size, and in the fault current they can withstand. Faults cause unreliability and instability in power systems. Unhandled faults cause breakdown of power lines and of expensive power system devices (such as generators, transformers and transmission lines), which causes economic losses and even loss of life. During power line maintenance, it is essential to be able to isolate the part under maintenance from electrical power for the safety of the maintenance engineer. Isolation of power lines is much like switching: there are bound to be dangerous arcs, especially when the switching is slow and when the arc-handling mechanism is not very effective. These arcs damage the switching devices when they are used over a period of time. The aim of this study is to analyze faults in a power system and determine fault current and voltage relationships using Power Systems Computer Aided Design (PSCAD). The objectives of this work are to:
a. Analyze faults on a standard Institute of Electrical and Electronics Engineers (IEEE) 30-bus test system without protection.
b. Analyze the operation and characteristics of a vacuum circuit breaker using PSCAD.
c. Analyze the effect of applying a vacuum circuit breaker in the protection of a standard IEEE bus system using PSCAD.
The IEEE standard 30-bus system was used in this analysis. The system is a 132 kV - 33 kV standard.
Its load parameters were established in per unit (pu) on a 100 MVA base. The IEEE standard system is applicable to a physical system and depicts a typical system usable in other experimental analyses. This study and analysis are based on already established experimental data and on fault analysis using power system simulation software (PSCAD). The work covers computational analysis of power system devices on a transmission network, fault control algorithms, and circuit breaker operation simulation in software. Fault analysis in power systems helps in the determination of the capacities and voltages of the switches to be employed in the protection scheme of a power system. During faults, very high current magnitudes have to flow through the protective switches even as they attempt to open to isolate the fault. The maximum value of these currents depends on the overall power system parameters, such as the total available power in each component and the voltages. Protection in a power system is as important as the system itself. Over the years, circuit breakers have been employed in the protection of power lines, generators, transformers and every other component of the power system. The major problems in protection lie in the switching on and off of these components. Circuit breakers vary in such parameters as voltage (because in the open state they are like capacitors that must have a dielectric strength high enough to withstand breakdown under the voltage across them) and current (because in the closed state they are like conductors which must be able to carry all the dynamic currents of the system). The following are fault studies and circuit breaker analyses existing in the literature.

Int J Adv Appl Sci, Vol. 9, No. 3, September 2020: 171 – 179



Modeling of circuit breaker for controlled switching applications using PSCAD/EMTDC
This work presents the electrical and mechanical modelling of a circuit breaker in the Power System Computer Aided Design / Electromagnetic Transients including Direct Current (PSCAD/EMTDC) environment. According to [19], for controlled switching applications, which mainly arise during load switching and faults, the characteristics of circuit breakers play an important role in determining the instants of closing and opening. The key parameters that can affect the accuracy of circuit breaker switching instants are:
a. Operating time of the physical contacts of the circuit breaker
b. Rate of decay of dielectric strength (RDDS)
c. Rate of rise of dielectric strength (RRDS)
d. Arcing time

Fault analysis of the 150 kV South Sulawesi transmission system using PSCAD/EMTDC
In the work of [20], the short-circuit fault was considered one of the transient disturbances in electric power systems that must be addressed by protective equipment. It demonstrates that the occurrence of a short circuit generates large electrical currents at a very low voltage. The research addresses the simulation of short-circuit interruption on a 150 kV transmission line. The simulation was performed with the help of the PSCAD/EMTDC and Power World Simulator (PWS) software packages to obtain the current and voltage characteristics of the 150 kV transmission network in South Sulawesi. It examines the changes in current and voltage during a short-circuit fault with and without fault impedance and with varying fault location distance.

Analysis of research and development trends in vacuum circuit breakers
According to [21], at the beginning of the twentieth century several companies, notably General Electric and Westinghouse in the United States, invested significant R&D manpower and finances into building knowledge and expertise in vacuum arcs, vacuum contacts and vacuum interrupters. These efforts resulted in the successful deployment of commercial vacuum circuit breakers for medium-voltage distribution systems. The companies were also very enthusiastic to continue the research by extending the applicability of vacuum to higher-voltage, sub-transmission and transmission systems. They were soon faced with a big disappointment: it turns out that vacuum interrupters are not easily scalable to higher voltages. In particular, the electric field breakdown between two contacts in vacuum is not proportional to the contact distance. Although a 0.5 mm contact gap can withstand approximately 15 kV, a gap of 50 mm will not handle 1500 kV, or 1.5 MV. The scaling law is not linear, and other physical processes stand in the way of using long contact gaps for higher voltages. The initial excitement diminished, and the vacuum switch manufacturers concentrated on distribution products, with additional applications in low-voltage vacuum contactors. Today only a handful of companies maintain production lines of commercial vacuum switching products above 36 kV, primarily by employing a series combination of interrupters each rated in the 15-36 kV range. This situation might be changing. The work states that the last several years of advances have provided more understanding of the practical issues associated with vacuum switchgear. Also, the economics of vacuum interrupter manufacturing has changed dramatically, the devices becoming less expensive, smaller and more efficient. The work also states that the traditional boundary between the applications of sulphur hexafluoride (SF6) and vacuum technologies is shifting, and recent scrutiny of the environmental impact of SF6 also plays a role in this process. All told, vacuum switchgear is a mature technology and still an active industry, with dynamically increasing shares of the world markets, growing production volumes, and more efficient, smaller and better products.

Dynamic simulation of a vacuum switch with PSCAD
The work of [22] shows that the switching behaviour of the vacuum circuit breaker differs from that of other circuit breakers when the effects of current chopping, the dielectric strength of the vacuum gap, the arc voltage, the quenching capability for high-frequency currents, virtual current chopping and prestrikes are considered. The work demonstrated these behaviours in simulation using PSCAD. The vacuum circuit breaker (VCB) model developed in that paper uses the variable-resistor approach, representing the frequently switching inductances and capacitances. It also mentions that the other approach to modelling a VCB is to use an ideal circuit breaker which is switched on/off after different criteria have been checked. The work also investigates the stress on a VCB and the energy in the vacuum tube: as current flows through a VCB from the moment of contact separation until the next zero crossing, together with the arc voltage, the energy conversion is very high, depending on the moment of contact separation.

Fault analysis in power system using power systems computer aided design (Amanze Chukwuebuka Fortune)



2. RESEARCH METHOD
2.1. Methods employed in the analysis simulation
This research uses PSCAD to simulate fault conditions of a standard IEEE 30-bus system using the Bergeron transmission line model of PSCAD, which has a graphical user interface for the EMTDC simulation and control simulation engine [23]. PSCAD enables users to build a circuit schematic, run simulations, analyse results and manage data in a fully integrated graphical environment, with controls and meters so that users can change system parameters during a simulation run and obtain immediate results [24, 25]. PSCAD comes with a library of models that have been programmed and tested, ranging from simple passive elements and control functions to more complex models such as electric machines, FACTS devices, transmission lines, transformers and cables. EMTDC (Electromagnetic Transients including DC) formulates and solves differential equations in the time domain. The solution is calculated with a fixed time step, and the program structure allows for the representation of control systems. The VCB model used in this work is written in FORTRAN77 code. It is possible to create custom models and integrate them into the component library of PSCAD if they are not already available. Parameterization of the VCB is done by filling in the input mask. Other simulation software may have other means of presenting these.

2.2. VCB model
The VCB in this paper is modeled by a variable resistance; this is important for representing the arc voltage. The input parameters are voltage, current and the switching command. The output parameter is the value of the resistance. Furthermore, there are several parameters which can be set through the input mask, as shown in Table 1.

Table 1. Input mask of the VCB
Parameter                              Value
Maximum voltage                        60 kV
Metal vapour arc voltage               25 V
Resistance (closed)                    80e-6 ohm
Resistance (open)                      100e6 ohm
Frequency                              60 Hz
Slope of dielectric recovery           30 kV/ms
Initial value of dielectric strength   0 kV
Slope of dielectric strength           50 kV/µs
Initial value of dielectric strength   0 kV
Current quenching capability           80 A/µs
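The fixed-time-step, time-domain solution used by EMTDC (Section 2.1) can be illustrated with a generic trapezoidal-rule integration of a series RL branch. All numerical values below are assumed for illustration; this is a sketch of the solver style, not PSCAD solver code.

```python
# Illustrative fixed-time-step, time-domain integration in the spirit of EMTDC:
# trapezoidal rule applied to a series RL branch, di/dt = (v(t) - R*i) / L.
# R, L, dt and the source are assumed example values, not from the paper.
import math

R, L = 1.0, 0.1           # ohm, henry
dt = 50e-6                # fixed simulation time step (s)
f = 60.0                  # source frequency (Hz)
v = lambda t: 100.0 * math.sin(2 * math.pi * f * t)   # source voltage

# trapezoidal rule, solved for the new current:
# i_new * (1 + k*R) = i * (1 - k*R) + k * (v(t) + v(t+dt)),  with k = dt/(2L)
k = dt / (2 * L)
i, t, peak = 0.0, 0.0, 0.0
for n in range(20000):    # 1 s of simulated time
    i = ((1 - k * R) * i + k * (v(t) + v(t + dt))) / (1 + k * R)
    t += dt
    if n > 16000:         # after the L/R transient has decayed
        peak = max(peak, abs(i))

# peak approaches the analytic steady-state amplitude 100 / |R + j*2*pi*f*L|
```

The fixed step keeps the per-step system of equations constant, which is what makes EMTDC-style solvers efficient for long switching studies.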

Here, electrical parameters are defined, such as the resistance in the closed and open positions and the maximum voltage. Additionally, the value of the arc voltage, the rise in dielectric strength and the rise in dielectric recovery can be chosen. The arc voltage is modeled as a constant value in order to simplify the model. The corresponding value of resistance is calculated by (1):

r = V_arc / I_VCB    (1)

Both dielectric recovery mechanisms are modeled using (2):

V_contact(t) = 30 kV/ms · t   (effective only at the moment of contact separation)
V_dielectric(t) = 50 kV/µs · t   (effective at current separation)

Generally,

V = a · x    (2)

where:
x = contact distance
a = slope of dielectric strength

These values are limited by the maximum voltage of the VCB (60 kV). The closing of the contacts is modeled by 60 kV - V_contact. A function checks di/dt of the current at every zero crossing to account for the current quenching capability of the vacuum gap. The value is set to a maximum of 80 A/µs. If this value is exceeded, current will go on flowing until the next current zero. This VCB model is employed in the standard IEEE 30-bus system for fault studies. The model block is shown in Figure 1 and the label "BRK" represents the control signal as shown in Figure 2. Here


the control signal is attached to the output of the timed fault logic block so as to trigger the circuit breakers on the line as soon as the fault occurs.

Figure 1. The timed fault logic with its signal output linking “BRK”

Figure 2. The VCB model block
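As a rough illustration of the variable-resistance approach of Section 2.2, the following Python sketch reimplements the switching logic under the Table 1 parameters. The function signature and the small-current threshold are assumptions made for illustration; this is not the actual FORTRAN77 PSCAD component.

```python
# Illustrative sketch of the variable-resistance VCB model of Section 2.2.
# Parameter values follow Table 1; the function and its thresholds are
# assumptions for illustration, not the FORTRAN77 PSCAD component.

R_CLOSED = 80e-6       # ohm, resistance with contacts closed
R_OPEN = 100e6         # ohm, resistance with the gap fully insulating
V_ARC = 25.0           # V, constant metal-vapour arc voltage
V_MAX = 60e3           # V, maximum withstand voltage of the VCB
SLOPE_STRENGTH = 50e9  # V/s (50 kV/us), dielectric strength rise after separation

def vcb_resistance(i_vcb, t_since_open, v_across, opening):
    """Return the model resistance for one simulation step."""
    if not opening:
        return R_CLOSED
    # eq. (2): withstand voltage grows linearly after contact separation
    v_withstand = min(SLOPE_STRENGTH * t_since_open, V_MAX)
    # the gap holds only if it withstands the voltage and current has ceased
    if abs(v_across) < v_withstand and abs(i_vcb) < 1e-3:
        return R_OPEN
    # otherwise an arc conducts: eq. (1), r = V_arc / I_VCB
    return V_ARC / max(abs(i_vcb), 1e-6)
```

In a full model this function would be evaluated at every fixed time step, with the di/dt check at current zero crossings (80 A/µs quenching limit) deciding whether the current actually extinguishes.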

2.3. The fault model
The fault block from the PSCAD library, whose mask is shown in Table 2, is used to create a double line-to-ground fault (shown as AB-G). The fault block can simulate three-line (LLL), double line-to-ground (LLG), double-line (LL), single line-to-ground (LG) and three line-to-ground (LLLG) faults. In this study, an LLG fault was chosen for the comparison between a healthy and a faulty line.

Table 2. The fault mask
General
  Fault type: Three-phase fault
  Is Phase A in fault? Yes
  Is Phase B in fault? Yes
  Is Phase C in fault? No
  Is this fault to neutral? Yes
Fault resistances
  Fault ON resistance: 0.01 ohm
  Fault OFF resistance: 100E6 ohm

The fault model block of Figure 2 is placed on T9_11 (33 kV). The fault was set to occur 0.1 seconds into the simulation and to be removed at 0.3 seconds. The results from the fault trigger are analyzed later, as are the data used to compute the currents and voltages during the fault. Figure 3 shows the standard IEEE bus system with the fault and without protection. It is easy to see that the fault affects every other transmission line, the generators, and every other component attached to the system. Figure 4 shows the same 30-bus system but with a fault-protection circuit breaker placed on the line of occurrence.
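The timing behaviour just described (fault applied at 0.1 s, cleared at 0.3 s, breaker tripped while the fault is on) can be sketched as a simple signal generator. The function names are illustrative, not PSCAD component names.

```python
# Sketch of the timed fault / breaker control logic described above.
# Times come from the paper (fault on at 0.1 s, off at 0.3 s); the
# function names are illustrative, not PSCAD component names.

FAULT_ON, FAULT_OFF = 0.100, 0.300   # seconds

def fault_active(t):
    """1 while the LLG fault is applied, else 0."""
    return 1 if FAULT_ON <= t < FAULT_OFF else 0

def breaker_open(t):
    """The 'BRK' signal: the breaker opens as soon as the fault is on."""
    return fault_active(t) == 1
```

This mirrors wiring the timed-fault logic output to "BRK" so that the breaker trips for exactly the fault interval.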

Figure 3. Fault placed on T9_11 without protective circuit breaker



Figure 4. A bolder view of the transmission line with the fault

Loads are modelled as a constant PQ load with parameters as shown in Table 3.

Table 3. Load characteristics of the IEEE 30-bus system
Bus  P [pu]  Q [pu]    Bus  P [pu]  Q [pu]
2    0.217   0.127     17   0.090   0.058
3    0.024   0.012     18   0.032   0.009
4    0.076   0.016     19   0.095   0.034
5    0.942   0.190     20   0.022   0.007
7    0.228   0.109     21   0.175   0.112
8    0.300   0.300     23   0.032   0.016
10   0.058   0.020     24   0.087   0.067
12   0.112   0.075     26   0.035   0.023
14   0.062   0.016     29   0.024   0.009
15   0.082   0.025     30   0.106   0.019
16   0.035   0.018

The above data were used to model every transmission line and load in the system. They are based on a 100 MVA base.
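Because the loads are given in per unit on a 100 MVA base, actual powers follow by multiplying by the base. A minimal sketch using a few rows of Table 3:

```python
# Convert the per-unit loads of Table 3 to actual MW / MVAr on the 100 MVA base.
S_BASE_MVA = 100.0

loads_pu = {2: (0.217, 0.127), 5: (0.942, 0.190), 8: (0.300, 0.300)}  # subset of Table 3

loads_actual = {bus: (p * S_BASE_MVA, q * S_BASE_MVA)
                for bus, (p, q) in loads_pu.items()}
# e.g. bus 5, the largest load in the table, is about 94.2 MW and 19.0 MVAr
```

The same scaling applies to every bus in the table.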

3. RESULTS AND ANALYSIS
3.1. Software analysis of an IEEE 30-bus system with fault at T9_11 without protection
The graphs give a pictorial idea of the values (bus voltage, bus current, fault current for each phase, RMS voltage, fault signal) expected from the fault studies. From Figure 5, it is seen that the fault occurred on two lines and that, at the instant of fault occurrence, the voltage on the two lines became zero.

Figure 5. Bus voltage at the instant of fault occurrence

Figure 6 depicts the effect of the fault on the RMS voltage of the system. The fault, seen to occur from 0.1 s to 0.3 s, gives more detail on the very short duration of the fault occurrence.

Figure 6. RMS voltage during the period of fault




The effect of the fault on the current in each line, as seen in Figure 7, Figure 8 and Figure 9, shows the impact of the fault current on the lines. The faulted lines, phase A and phase B, have their currents spike to very high values compared to their normal levels. The fault current has no effect on phase C because it had no fault. Figure 10 is the complete waveform of the transmission line with the three phases, indicating the instant of fault occurrence, which is between 0.1 s and 0.3 s. The waveform depicts the effect on the two phases with the fault. The faulted lines (green and blue) had their currents spike during the fault occurrence and became steady, back to normal, after the fault was gone. The red phase had no fault. Figure 11 is an indication of the immediate removal of the fault from the system. A fault caused, for example, by wind can cease when the atmosphere settles, releasing the bridging lines. It can be seen that the two lines with the fault show an irregular curve before becoming steady. Without protection, there may not be any recovery from the fault occurrence.

Figure 7. Fault current on phase A without protection

Figure 8. Fault current on phase B without protection

Figure 9. Fault current on phase C without protection

Figure 10. Transmission line current in the period of fault occurrence

Figure 11. Bus voltage at the instant of fault removal




What is seen in the above graphs is a clear indication of the dangers of an unprotected power system. It may look simple, but the two faulted lines will cause drastic load changes not just on the bus and the remaining line to which the fault is directly attached, but on every other component in the system.
3.2. Software analysis of an IEEE 30-bus system with fault at T9_11 with a protective VCB
Figure 12 shows the effect of the quick removal of the fault from the transmission line due to the action of the circuit breakers. The circuit breaker annuls the effect of the fault instantly, preventing equipment destruction. The current chopping limit set in the model is 0.0 kA. This results in the phenomenon observed in the waveforms of the faulty system with protection. An ideal VCB would have caused no disturbances on the system during the fault. However, there are disturbances even on the healthy line due to the parameters of the VCB, which are not ideal; for instance, the vacuum dielectric strength, the dielectric recovery, and the voltage across the contacts. When the fault occurred, the attempt by the VCB to isolate it was distorted by the arc established at the separation of the contacts, which heats up the region; the distortion interval seen is the dielectric recovery time of the VCB. The same thing happens when the VCB is de-energized after the fault removal. The fault signal is given in Figure 13.

Figure 12. Bus Voltage at the instant of fault and fault removal

Figure 13. The fault signal

4. CONCLUSION
Every device obeys Newton's first law of motion, and the movable contacts of a VCB are no exception. Under heavy loads such as short circuits, the opening and closing of circuit breaker contacts generates an arc around the contact regions, which heats up the contacts. If a fault is not cleared early enough by protective elements such as the circuit breaker, severe damage will be done to the system, which would be not only catastrophic and extremely dangerous, but also costly. It is therefore extremely important to perform fault studies on every power system in order to determine the ratings of the respective protective elements employed. Circuit breakers are extremely important in power systems, and the VCB in particular offers very good fault-clearing capability, ease of installation and environmental friendliness. Concerned industries should pioneer research programs in the development of high-voltage VCBs in our institutions and research centers. Every component of our power system should be protected from fatal and total failures, usually caused by faults and accidents on the transmission and distribution sections. To prevent failures in the national grid, power system protection engineers should be employed to examine and perform adequate fault studies on the system.

REFERENCES
[1] L. Xingping, Distribution transformers health condition monitoring and evaluation methods, Chongqing, 2013.
[2] State Grid, Q-GDW 11190-2014 High overload capacity of rural power distribution transformers technical guideline, China Electric Power Press, Beijing, 2014.
[3] IEEE Standard C57.91-1995, IEEE Guide for Loading Mineral-Oil-Immersed Transformers.
[4] G. Swift and Z. Zhang, "A different approach to transformer thermal modeling," IEEE Transmission and Distribution Conference, New Orleans, Apr. 1999.



[5] IEC (International Electrotechnical Commission) Standard 354, Second Edition, 1991-09, Loading guide for oil-immersed power transformers, pp. 143-145, 1991.
[6] M. Klasen-Memmer and H. Hirschmann, "Liquid crystal materials for devices," in J. Chen, W. Cranton and M. Fihn (eds.), Handbook of Virtual Display Technology, Springer-Verlag Berlin Heidelberg, 2012.
[7] J. J. Grainger and W. D. Stevenson, Power System Analysis, Tata McGraw-Hill, 2005.
[8] E. Acha, V. Agelidis, O. Anaya-Lara, and T. Miller, Power Electronic Control in Electrical Systems, Newnes Power Engineering Series, 2002.
[9] J. Sadoh, "PhD thesis on power system protection: investigation of system protection schemes on the 330 kV of Nigeria transmission network," University of Benin, Benin City, 2006.
[10] L. P. Singh, Advanced Power System Analysis and Dynamics, Wiley, New York, 1983.
[11] P. M. Anderson, Analysis of Faulted Power Systems, Iowa State Press, Ames, 1973.
[12] H. E. Brown and C. E. Person, "Short circuit studies of large systems by the impedance matrix method," Proc. PICA, pp. 335, 1967.
[13] S. Bakanagari, A. Mahesh Kumar, and M. Cheenya, "Three phase fault analysis with auto reset for temporary fault and trip for permanent fault," Int. Journal of Engineering Research and Applications, vol. 3, no. 6, pp. 1082-1086, 2013.
[14] G. Vladimir, Electrical Relays: Principle and Applications, Taylor and Francis Group, 2006.
[15] D. Miroslav, Fault Analysis in Power Systems by Using the Fortescue Method, TESLA Institute, 2009.
[16] J. Zhu, "Analysis of transmission system faults in the phase domain," Master Thesis, Texas A&M University, 2004.
[17] M. Aleksandar, "Analysis of asymmetrical faults in power systems using dynamic phasors," IEEE Transactions on Power Systems, vol. 15, no. 3, pp. 1062-1068, 2000.
[18] V. Gamit, et al., "Fault analysis on three phase system by auto reclosing mechanism," International Journal of Research in Engineering and Technology, vol. 4, no. 5, pp. 292-298, 2015.
[19] T. Mushiri and C. Mbohwa, "Research on the use of MATLAB in the modeling of 3-phase power systems," Proceedings of the World Congress on Engineering (WCE 2015), vol. 1, 2015.
[20] C. Vijaya Tharani, et al., "MATLAB based simulations model for three phases power system network," Int. Journal for Research in Applied Science & Engineering Technology, vol. 4, no. 11, pp. 502-509, 2016.
[21] A. Adly, M. Christopher, G. Fallon, and L. David, "A fault location technique for rural distribution feeders," IEEE Transactions on Industry Applications, vol. 29, no. 6, pp. 1170-1175, 1993.
[22] N. Amjady, "Design and implementation of a fault diagnosis system for transmission and subtransmission networks," IEEE Transmission & Distribution Conf. Exp., vol. 2, pp. 69-704, 2003.
[23] J. A. Halluday and C. H. Shih, "Resonant overvoltage phenomena caused by transmission line faults," IEEE Trans. Power Apparatus & Systems, vol. 104, no. 9, pp. 2531-2539, 1985.
[24] D. C. Yu, D. Chen, S. Ramasamy, and D. G. Flinn, "A windows based graphical package for symmetrical components analysis," IEEE Transactions on Power Systems, vol. 10, no. 4, pp. 1742-1749, 1995.
[25] M. Yuen Chow and S. Leroy, "A novel approach for distribution fault analysis," IEEE Transactions on Power Delivery, vol. 8, no. 4, pp. 1882-1889, 1993.

BIOGRAPHIES OF AUTHORS

Fortune Chukwuebuka Amanze is a graduate researcher at the Department of Electrical Engineering, University of Nigeria, Nsukka, Enugu State, Nigeria. His research interests are in power systems, power electronics, and control systems engineering.

Destiny Josiah Amanze is a graduate researcher with a flair for energy systems and the efficient utilization of energy. His research seeks alternatives to existing energy sources and more efficient use of current sources to increase productivity.

Fault analysis in power system using power systems computer aided design (Amanze Chukwuebuka Fortune)


International Journal of Advances in Applied Sciences (IJAAS) Vol. 9, No. 3, September 2020, pp. 180~185 ISSN: 2252-8814, DOI: 10.11591/ijaas.v9.i3.pp180-185


Two bio-inspired algorithms for solving optimal reactive power problem
Lenin Kanagasabai
Department of EEE, Prasad V. Potluri Siddhartha Institute of Technology, India

Article Info

Article history: Received Jan 3, 2020; Revised Mar 4, 2020; Accepted Apr 25, 2020

Keywords: Optimal reactive power; Sperm motility; Transmission loss; Wolf algorithm

ABSTRACT

In this work, two algorithms, the Sperm Motility (SM) algorithm and the Wolf Optimization (WO) algorithm, are applied to the optimal reactive power problem. The sperm motility approach imitates the spontaneous movement of sperm: guided by a species chemoattractant, the sperm are drawn toward the ovum. The wolf optimization algorithm imitates the hunting behaviour of wolves and employs a flag vector whose length equals the number of variables in the optimization problem. Both algorithms have been tested on the standard IEEE 57-, 118-, and 300-bus test systems. Simulation results show a reduction in real power loss while keeping all variables within their standard limits. Both algorithms solve the problem efficiently, but wolf optimization has a slight edge over sperm motility in reducing real power loss.

This is an open access article under the CC BY-SA license.

Corresponding Author: Lenin Kanagasabai, Department of EEE, Prasad V. Potluri Siddhartha Institute of Technology, Kanuru, Vijayawada, Andhra Pradesh-520007, India. Email: gklenin@gmail.com

1. INTRODUCTION
The optimal reactive power problem is a key problem in power systems, since it plays a major role in their secure and economic operation. Many conventional methods [1-8] have been applied to it, but they exhibit several drawbacks, chiefly difficulty in handling the inequality constraints. Over the last two decades, many evolutionary algorithms [9-18] have been applied to the problem. This paper proposes the Sperm Motility (SM) algorithm, inspired by the human fertilization process [19], for solving the optimal reactive power problem. The Wolf Optimization (WO) algorithm is formulated from the basic behaviour of wolves searching for prey, and the formulation is enhanced with the velocity and movement properties of particle swarm optimization. Both algorithms have been tested on the standard IEEE 57-, 118-, and 300-bus test systems. Simulation results show a reduction in real power loss while keeping all variables within their standard limits; wolf optimization has a slight edge over sperm motility in reducing real power loss.

2. PROBLEM FORMULATION
The key objective of the reactive power problem is to minimize the system real power loss, given as:

Journal homepage: http://ijaas.iaescore.com


P_loss = Σ_{k=(i,j)} g_k (V_i^2 + V_j^2 − 2 V_i V_j cos θ_ij)        (1)

The voltage deviation magnitude (VD) is stated as follows:

Minimize VD = Σ_{k=1}^{nl} |V_k − 1.0|        (2)

Load flow equality constraints:

P_Gi − P_Di − V_i Σ_{j=1}^{nb} V_j [G_ij cos θ_ij + B_ij sin θ_ij] = 0,  i = 1, 2, …, nb        (3)

Q_Gi − Q_Di − V_i Σ_{j=1}^{nb} V_j [G_ij sin θ_ij − B_ij cos θ_ij] = 0,  i = 1, 2, …, nb        (4)

Inequality constraints are:

V_Gi^min ≤ V_Gi ≤ V_Gi^max,  i ∈ ng        (5)

V_Li^min ≤ V_Li ≤ V_Li^max,  i ∈ nl        (6)

Q_Ci^min ≤ Q_Ci ≤ Q_Ci^max,  i ∈ nc        (7)

Q_Gi^min ≤ Q_Gi ≤ Q_Gi^max,  i ∈ ng        (8)

T_i^min ≤ T_i ≤ T_i^max,  i ∈ nt        (9)

S_Li ≤ S_Li^max,  i ∈ nl        (10)
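As a concrete illustration of objectives (1) and (2), the following Python sketch evaluates the real power loss and voltage deviation from given bus voltages. The line list, voltage magnitudes, and angles below are made-up example values, not data from the paper; in practice they would come from a load flow solution.

```python
import math

# Hypothetical example data: each line k connects buses (i, j) with conductance g_k.
lines = [
    # (i, j, g_k)
    (0, 1, 4.0),
    (1, 2, 5.0),
]
V = [1.05, 1.02, 0.98]       # bus voltage magnitudes (p.u.)
theta = [0.0, -0.02, -0.05]  # bus voltage angles (rad)

def real_power_loss(lines, V, theta):
    """Equation (1): P_loss = sum_k g_k (V_i^2 + V_j^2 - 2 V_i V_j cos(theta_ij))."""
    return sum(
        g * (V[i] ** 2 + V[j] ** 2 - 2.0 * V[i] * V[j] * math.cos(theta[i] - theta[j]))
        for i, j, g in lines
    )

def voltage_deviation(V_load):
    """Equation (2): VD = sum over load buses of |V_k - 1.0|."""
    return sum(abs(v - 1.0) for v in V_load)
```

With identical voltages and zero angle difference across a line, the loss contribution of that line is zero, as expected from (1).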

3. SPERM MOTILITY ALGORITHM
The Sperm Motility (SM) algorithm is inspired by the fertilization procedure in human beings. Throughout the search, sperm are attracted toward the ovum by a species chemoattractant, whose concentration increasingly guides a sperm as it moves closer to the ovum. The highest-quality sperm are retained and labelled type A; with probability Pa ∈ [0, 1], low-quality sperm, labelled types B, C, and D, are discarded. More than 220 million sperm swim erratically toward the ovum with velocity v_i at position x_i, and by the Stokes equations their motility can be described as:

Re (∂v/∂t + v·∇v) + ∇p = μ∇²v + f,  ∇·v = 0,  x ∈ Ω        (11)

The simpler form of the Stokes equations is written as:

∇p = μ∇²v + f        (12)

∇·v = 0,  y ∈ Ω        (13)

The singularity velocity solution is as follows:

v_i(t) = (1/(8πμ)) (δ_ij/h + h_i h_j/h³) F_j = (1/(8πμ)) s_ij(y, ξ) F_j,  i, j = 1, 2, 3, …        (14)

For a force F_j concentrated at the point ξ, the flow is described with:

h_i = y − ξ        (15)

h² = h₁² + h₂² + h₃²        (16)

The position is updated as:

y_{i+1}(t) = y_i(t) + (δt/2) (v_{i+1}(t) + v_i(t)) + α (y_i(t) − J*)        (17)

Two Bio-Inspired Algorithms for Solving Optimal Reactive Power Problem… (Lenin Kanagasabai)



The chemoattractant is defined as follows:

ca_i(t) = ca_0(t) + ca_1 (‖J* − x_i(t)‖)^(−b)        (18)

where J* is the best solution found in the current iteration. The SM algorithm for solving the reactive power problem proceeds as follows:

Commence
  Define the objective function of the problem.
  Initialise a population of N sperm.
  Generate the primary concentration c0 for the N sperm, together with primary positions y0 and velocities v0.
  Set the motility parameters.
  While (t < maximum generation)
    For i = 1 : N do
      Compute velocity v_i from equation (14) using the data at t = t_i.
      Update position x_i of sperm i by equation (17).
      Evaluate each sperm according to its position; if the new solution is superior, update the population.
      Compute ca_i from equation (18).
      If ca_i ≤ ca_{i−1}, abandon the poorer sperm with probability Pa.
      Check the constraints with respect to the objective function.
    End for
    Sort the population and retain the outstanding members.
  End while
End
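The loop above can be sketched in Python. This is an illustrative minimal implementation on a generic objective, not the authors' code: the Stokes-flow velocity of equation (14) is replaced by simple random velocities, the term involving J* is written as an attraction toward the best solution (the "ovum"), and all parameter values (pa, alpha, dt, bounds) are assumptions.

```python
import random

def sperm_motility(objective, dim, n=30, pa=0.25, alpha=0.1, dt=1.0,
                   max_gen=200, bounds=(-5.0, 5.0), seed=1):
    """Sketch of the SM loop: positions drift with a velocity term, are pulled
    toward the best solution J*, and the worst fraction pa (type B/C/D sperm)
    is re-initialised each generation. Illustrative assumptions throughout."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    best = min(pos, key=objective)[:]
    for _ in range(max_gen):
        for i in range(n):
            # random velocity stands in for the Stokes-flow velocity of eq. (14)
            new_v = [rng.gauss(0.0, 0.1) for _ in range(dim)]
            # update in the spirit of eq. (17); the sign of the alpha term is
            # chosen here so sperm are attracted toward the best solution J*
            pos[i] = [
                min(hi, max(lo,
                    pos[i][d] + 0.5 * dt * (new_v[d] + vel[i][d])
                    - alpha * (pos[i][d] - best[d])))
                for d in range(dim)
            ]
            vel[i] = new_v
            if objective(pos[i]) < objective(best):
                best = pos[i][:]
        # abandon the worst fraction pa and restart those sperm at random
        pos.sort(key=objective)
        for i in range(int((1.0 - pa) * n), n):
            pos[i] = [rng.uniform(lo, hi) for _ in range(dim)]
    return best, objective(best)
```

On a simple convex test function such as the sphere, this sketch steadily improves the best solution as the population contracts around it.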

4. WOLF OPTIMIZATION
Wolf optimization mimics the communal leadership and hunting behaviour of wolves in nature [20]. In each iteration, the three fittest candidate solutions, denoted α, β, and γ, lead the population toward promising regions of the search space. The remaining wolves, denoted φ, assist α, β, and γ to encircle, hunt, and attack the prey, that is, to find enriched solutions. To replicate the encircling behaviour mathematically, the following equations are proposed:

G = |F · Y_P(t) − Y(t)|,   Y(t + 1) = Y_P(t) − H · G        (19)

where t indicates the current iteration, H = 2b·r1 − b, F = 2·r2, Y_P is the position vector of the prey, Y is the position vector of a wolf, b is linearly decreased from 2.0 to 0, and r1 and r2 are arbitrary vectors in [0, 1]. The hunting behaviour of the wolves is simulated by the following equations:

G_α = |F_1 · Y_α − Y|,  G_β = |F_2 · Y_β − Y|,  G_γ = |F_3 · Y_γ − Y|        (20)

Y_1 = Y_α − H_1 · G_α,  Y_2 = Y_β − H_2 · G_β,  Y_3 = Y_γ − H_3 · G_γ        (21)

Y(t + 1) = (Y_1 + Y_2 + Y_3) / 3        (22)

The position of each wolf is updated by (19), and the following equation is used to discretise the position:

flag_{i,j} = 1 if Y_{i,j} > 0.50, otherwise 0        (23)

Int J Adv Appl Sci, Vol. 9, No. 3, September 2020: 180 – 185



where flag_{i,j} is the jth feature of the ith wolf. To enhance the search, the velocity and position updating equations of particle swarm optimization have been incorporated into this approach:

v_{t+1}^i = ω_t · v_t^i + cg1 · Rm1 · (m_t^i − y_t^i) + cg2 · Rm2 · (m_t^g − y_t^i)        (24)

y_{t+1}^i = y_t^i + v_{t+1}^i        (25)

The current position of a particle is y_t^i and its search velocity is v_t^i; m_t^i is the best position found by particle i so far and m_t^g is the global best-found position. Rm1 and Rm2 are random numbers uniformly distributed in (0, 1), cg1 and cg2 are scaling parameters, and ω_t is the particle inertia, updated as:

ω_t = (ω_max − ω_min) · (t_max − t) / t_max + ω_min        (26)

The maximum and minimum of ω_t are represented by ω_max and ω_min, and the maximum number of iterations by t_max. This process is repeated until the termination conditions are met. In this approach the wolves categorized as α, β, and γ determine the position of the prey. The term H = 2b·r1 − b directs the exploration and exploitation process as b is reduced from 2 to 0: when |H| < 1 the search converges toward the prey, and when |H| > 1 it diverges away. The first-best minimum loss and variables are stored as the α position and score, and likewise the second and third best as the β and γ positions and scores.

Commence
  Initialize the parameters b, H, and F; generate the initial positions of the wolves.
  For i = 1 : population size
    For j = 1 : n
      If position(i, j) > 0.500 then flag(i, j) = 1; else flag(i, j) = 0.
    End for
  End for
  Work out the fitness of the wolves:
    the fittest wolf is designated α, the second fittest β, and the third fittest γ.
  While k < maximum iteration
    For i = 1 : population size
      Revise the location of the existing wolf.
    End for
    For i = 1 : population size
      For j = 1 : n
        If position(i, j) > 0.496 then flag(i, j) = 1; else flag(i, j) = 0.
      End for
    End for
    Periodically revise the values of b, H, and F.
    Calculate the fitness of the wolves.
    Revise the assessments of wolves α, β, and γ.
    k = k + 1
  End while
  Re-examine the value of α as the optimal characteristic division.
End
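The wolf optimization loop of equations (19)-(23) can be sketched as follows. This is an illustrative implementation on a generic objective under assumed parameter values, not the paper's code; the PSO velocity hybridisation of equations (24)-(26) is omitted for brevity, and the flag vector of equation (23) is computed for the final best position.

```python
import random

def wolf_optimization(objective, dim, n_wolves=20, max_iter=150,
                      bounds=(-5.0, 5.0), seed=3):
    """Sketch of eqs. (19)-(23): the three fittest wolves (alpha, beta, gamma)
    guide the pack, b decays linearly from 2 to 0, and the final position is
    discretised into a flag vector. Parameter values are assumptions."""
    rng = random.Random(seed)
    lo, hi = bounds
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    for t in range(max_iter):
        wolves.sort(key=objective)
        leaders = [w[:] for w in wolves[:3]]      # alpha, beta, gamma
        b = 2.0 * (1.0 - t / max_iter)            # b decays linearly 2 -> 0
        for i in range(n_wolves):
            new_pos = []
            for d in range(dim):
                cand = []
                for leader in leaders:
                    H = 2.0 * b * rng.random() - b          # H = 2b r1 - b
                    F = 2.0 * rng.random()                  # F = 2 r2
                    G = abs(F * leader[d] - wolves[i][d])   # eq. (20)
                    cand.append(leader[d] - H * G)          # eq. (21)
                new_pos.append(min(hi, max(lo, sum(cand) / 3.0)))  # eq. (22)
            wolves[i] = new_pos
    best = min(wolves, key=objective)
    flag = [1 if y > 0.5 else 0 for y in best]    # eq. (23) discretisation
    return best, flag, objective(best)
```

Because the exploration radius shrinks with b, the pack contracts around the three leaders, giving the convergence behaviour described above.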

5. SIMULATED OUTCOME
First, the IEEE 57-bus system [21] is used as the test system to validate the performance of the proposed algorithms. The total active and reactive power demands in the system are 1240.68 MW and 330.82 MVAR, respectively. The generator data of the system are given in Table 1, and the optimum loss comparison is presented in Table 2. The number of iterations taken is 32, and the time taken is 11.24 s.

Table 1. Generator data
Generator No   Pgi minimum   Pgi maximum   Qgi minimum   Qgi maximum
1              25.00         50.00         0.00          0.00
2              15.00         90.00         -17.00        50.00
3              10.00         500.00        -10.00        60.00
4              10.00         50.00         -8.00         25.00
5              12.00         50.00         -140.00       200.00
6              10.00         360.00        -3.00         9.00
7              50.00         550.00        -50.00        155.00

Table 2. Comparison of losses
Method        Ploss (MW)
CLPSO [22]    24.5152
DE [23]       16.7857
GSA [23]      23.4611
OGSA [24]     23.43
SOA [22]      24.2654
QODE [23]     15.8473
CSA [25]      15.5149
SM            12.1482
WO            12.0064

Secondly, the IEEE 118-bus system [26] is used as the test system to validate the performance of the proposed algorithms. Table 3 shows the limit values and Table 4 shows the comparison of results.

Table 3. Limitation of reactive power sources
Bus number   Maximum value of QC   Minimum value of QC
5            0.000                 -40.000
34           14.000                0.000
37           0.000                 -25.000
44           10.000                0.000
45           10.000                0.000
46           10.000                0.000
48           15.000                0.000
74           12.000                0.000
79           20.000                0.000
82           20.000                0.000
83           10.000                0.000
105          20.000                0.000
107          6.000                 0.000
110          6.000                 0.000

Table 4. Evaluation of results: active power loss (MW), minimum, maximum, and average values
Methodology               Minimum value   Maximum value   Average value
BBO [27]                  128.770         132.640         130.210
ILSBBO/strategy1 [27]     126.980         137.340         130.370
ILSBBO/strategy1 [27]     124.780         132.390         129.220
SM                        126.540         132.860         129.120
WO                        125.172         131.248         128.864

Finally, the IEEE 300-bus system [21] is used as the test system to validate the performance of the proposed algorithms. Table 5 shows the comparison of the real power loss obtained after optimization.

Table 5. Comparison of real power loss
Method      Ploss (MW)
EGA [28]    646.2998
EEA [28]    650.6027
CSA [25]    635.8942
SM          630.1898
WO          629.2824

6. CONCLUSION
In this paper both the Sperm Motility (SM) algorithm and the Wolf Optimization (WO) algorithm solved the optimal reactive power problem successfully, with outstanding performance. Both algorithms were tested on the standard IEEE 57-, 118-, and 300-bus test systems. Simulation results show a reduction in real power loss while keeping all variables within their standard limits. Both algorithms solve the problem efficiently, but wolf optimization has a slight edge over sperm motility in reducing real power loss.

REFERENCES
[1] K. Y. Lee, "Fuel-cost minimisation for both real and reactive-power dispatches," Proceedings Generation, Transmission and Distribution Conference, vol. 131, no. 3, pp. 85-93, 1984.



[2] N. I. Deeb, "An efficient technique for reactive power dispatch using a revised linear programming approach," Electric Power System Research, vol. 15, no. 2, pp. 121-134, 1988.
[3] M. R. Bjelogrlic, M. S. Calovic, B. S. Babic, et al., "Application of Newton's optimal power flow in voltage/reactive power control," IEEE Trans. Power System, vol. 5, no. 4, pp. 1447-1454, 1990.
[4] S. Granville, "Optimal reactive dispatch through interior point methods," IEEE Transactions on Power System, vol. 9, no. 1, pp. 136-146, 1994.
[5] N. Grudinin, "Reactive power optimization using successive quadratic programming method," IEEE Transactions on Power System, vol. 13, no. 4, pp. 1219-1225, 1998.
[6] W. Yan, J. Yu, D. C. Yu, and K. Bhattarai, "A new optimal reactive power flow model in rectangular form and its solution by predictor corrector primal dual interior point method," IEEE Trans. Power Syst., vol. 21, no. 1, pp. 61-67, 2006.
[7] A. Mukherjee and V. Mukherjee, "Solution of optimal reactive power dispatch by chaotic krill herd algorithm," IET Gener. Transm. Distrib., vol. 9, no. 15, pp. 2351-2362, 2015.
[8] Z. Hu, X. Wang, and G. Taylor, "Stochastic optimal reactive power dispatch: Formulation and solution method," Electr. Power Energy Syst., vol. 32, pp. 615-621, 2010.
[9] M. A/P Morgan, N. R. H. Abdullah, M. H. Sulaiman, M. Mustafa, and R. Samad, "Computational intelligence technique for static VAR compensator (SVC) installation considering multi-contingencies (N-m)," ARPN Journal of Engineering and Applied Sciences, vol. 10, no. 22, 2015.
[10] M. H. Sulaiman, Z. Mustaffa, H. Daniyal, M. R. Mohamed, and O. Aliman, "Solving optimal reactive power planning problem utilizing nature inspired computing techniques," ARPN Journal of Engineering and Applied Sciences, vol. 10, no. 21, pp. 9779-9785, 2015.
[11] M. H. Sulaiman, W. L. Ing, Z. Mustaffa, and M. R. Mohamed, "Grey wolf optimizer for solving economic dispatch problem with valve-loading effects," ARPN Journal of Engineering and Applied Sciences, vol. 10, no. 21, pp. 9796-9801, 2015.
[12] K. Pandiarajan and C. K. Babulal, "Fuzzy harmony search algorithm based optimal power flow for power system security enhancement," International Journal Electric Power Energy Syst., vol. 78, pp. 72-79, 2016.
[13] Z. Mustaffa, M. H. Sulaiman, Y. Yusof, and S. F. Kamarulzaman, "A novel hybrid metaheuristic algorithm for short term load forecasting," International Journal of Simulation: Systems, Science and Technology, vol. 17, no. 41, pp. 6.1-6.6, 2017.
[14] M. H. Sulaiman, Z. Mustaffa, M. R. Mohamed, and O. Aliman, "An application of multi-verse optimizer for optimal reactive power dispatch problems," International Journal of Simulation: Systems, Science and Technology, vol. 17, no. 41, pp. 5.1-5.5, 2017.
[15] M. A/P Morgan, N. R. H. Abdullah, M. H. Sulaiman, M. Mustafa, and R. Samad, "Multi-objective evolutionary programming (MOEP) using mutation based on adaptive mutation operator (AMO) applied for optimal reactive power dispatch," ARPN Journal of Engineering and Applied Sciences, vol. 11, no. 14, 2016.
[16] R. Ng Shin Mei, M. H. Sulaiman, and Z. Mustaffa, "Ant lion optimizer for optimal reactive power dispatch solution," Journal of Electrical Systems, Special Issue AMPE2015, pp. 68-74, 2016.
[17] M. Morgan, N. R. H. Abdullah, M. H. Sulaiman, M. Mustafa, and R. Samad, "Benchmark studies on optimal reactive power dispatch (ORPD) based multi-objective evolutionary programming (MOEP) using mutation based on adaptive mutation adapter (AMO) and polynomial mutation operator (PMO)," Journal of Electrical Systems, 2016.
[18] R. Ng Shin Mei, M. H. Sulaiman, Z. Mustaffa, and H. Daniyal, "Optimal reactive power dispatch solution by loss minimization using moth-flame optimization technique," Applied Soft Computing, vol. 59, pp. 210-222, 2017.
[19] O. A. R. Ibrahim Hezam, "Sperm motility algorithm: A novel metaheuristic approach for global optimization," Int. J. Oper. Res., vol. 28, no. 2, pp. 143-163, 2017.
[20] A. Kaveh and F. Shokohi, "Application of Grey Wolf Optimizer in design of castellated beams," Asian Journal of Civil Engineering, vol. 17, no. 5, pp. 683-700, 2016.
[21] [Online]. Available: http://www2.ee.washington.edu/research/pstca/
[22] C. Dai, et al., "Seeker optimization algorithm for optimal reactive power dispatch," IEEE Trans. Power Systems, vol. 24, no. 3, pp. 1218-1231, 2009.
[23] M. Basu, "Quasi-oppositional differential evolution for optimal reactive power dispatch," Electrical Power and Energy Systems, vol. 78, pp. 29-40, 2016.
[24] B. Shaw, et al., "Solution of reactive power dispatch of power systems by an opposition-based gravitational search algorithm," International Journal of Electrical Power Energy Systems, vol. 55, pp. 29-40, 2014.
[25] S. Surender Reddy, "Optimal reactive power scheduling using cuckoo search algorithm," International Journal of Electrical and Computer Engineering, vol. 7, no. 5, pp. 2349-2356, 2017.
[26] IEEE, "The IEEE 30-bus test system and the IEEE 118-test system," 1993. [Online]. Available: http://www.ee.washington.edu/trsearch/pstca/
[27] J. Cao, F. Wang, and P. Li, "An improved biogeography-based optimization algorithm for optimal reactive power flow," International Journal of Control and Automation, vol. 7, no. 3, pp. 161-176, 2014.
[28] S. S. Reddy, et al., "Faster evolutionary algorithm based optimal power flow using incremental variables," Electrical Power and Energy Systems, vol. 54, pp. 198-210, 2014.



International Journal of Advances in Applied Sciences (IJAAS) Vol. 9, No. 3, September 2020, pp. 186~191 ISSN: 2252-8814, DOI: 10.11591/ijaas.v9.i3.pp186-191


Real power loss reduction by hyena optimizer algorithm
Lenin Kanagasabai
Department of EEE, Prasad V. Potluri Siddhartha Institute of Technology, India

Article Info

Article history: Received Jan 3, 2020; Revised Mar 4, 2020; Accepted Mar 23, 2020

Keywords: Hyena optimization; Reactive power; Real

ABSTRACT

To solve the optimal reactive power problem, this paper projects the Hyena Optimizer (HO) algorithm, inspired by the behaviour of hyenas. Collaborative behaviour and the social relationships between hyenas are the key concepts of this algorithm. Hyenas are carnivoran mammals whose behaviour is analogous to that of canines in several elements of convergent evolution: hyenas catch prey with their teeth rather than their claws, and their hardened foot pads with large, blunt, non-retractable claws are adapted for running and making sharp turns. However, the hyenas' grooming, scent marking, defecating habits, mating, and parental behaviour are consistent with the behaviour of other feliforms. The basic attributes of the hyena are modelled mathematically. The standard IEEE 14- and 300-bus test systems are used to analyze the performance of the HO algorithm. The loss is reduced while the control variables remain within their limits.

This is an open access article under the CC BY-SA license.

Corresponding Author: Lenin Kanagasabai, Department of EEE, Prasad V. Potluri Siddhartha Institute of Technology, Kanuru, Vijayawada, Andhra Pradesh-520007, India. Email: gklenin@gmail.com

1. INTRODUCTION
The reactive power problem is key to power system operation and control, since it plays a major role in the secure and economic operation of the power system. Many conventional techniques [1-6] have been used to solve the problem, but they exhibit several drawbacks, chiefly difficulty in handling the inequality constraints. Over the last two decades, many evolutionary algorithms [7-18] have been applied to the problem. This paper projects the Hyena Optimizer (HO), in which hyena behaviour is imitated to solve the problem. Collaborative behaviour and the social relationships between hyenas are the key concepts of this algorithm [19]. Hyenas are carnivoran mammals whose behaviour is analogous to that of canines in several elements of convergent evolution: hyenas catch prey with their teeth rather than their claws, and their hardened foot pads with large, blunt, non-retractable claws are adapted for running and making sharp turns. However, the hyenas' grooming, scent marking, defecating habits, mating, and parental behaviour are consistent with the behaviour of other feliforms. Hyenas clean themselves habitually, sitting on the lower back with the legs spread and one leg pointing vertically upward; unlike other feliforms, however, they do not clean their faces. Territories are marked using their anal glands. Hyenas defend themselves ferociously from any form of attack and can produce a number of different sounds, consisting of whoops, grunts, groans, lows, giggles, yells, growls, laughs, and whines. The basic attributes of the hyena are modelled mathematically. The standard IEEE 14- and 300-bus test systems are used to analyze the performance of the HO algorithm. The loss is reduced while the control variables remain within their limits.



2. PROBLEM FORMULATION
The main aim is to minimize the system real power loss, given as:

P_loss = Σ_{k=(i,j)} g_k (V_i^2 + V_j^2 − 2 V_i V_j cos θ_ij)        (1)

The voltage deviation magnitude (VD) is:

Minimize VD = Σ_{k=1}^{nl} |V_k − 1.00|        (2)

Load flow equality constraints:

P_Gi − P_Di − V_i Σ_{j=1}^{nb} V_j [G_ij cos θ_ij + B_ij sin θ_ij] = 0,  i = 1, 2, …, nb        (3)

Q_Gi − Q_Di − V_i Σ_{j=1}^{nb} V_j [G_ij sin θ_ij − B_ij cos θ_ij] = 0,  i = 1, 2, …, nb        (4)

Inequality constraints are:

V_Gi^min ≤ V_Gi ≤ V_Gi^max,  i ∈ ng        (5)

V_Li^min ≤ V_Li ≤ V_Li^max,  i ∈ nl        (6)

Q_Ci^min ≤ Q_Ci ≤ Q_Ci^max,  i ∈ nc        (7)

Q_Gi^min ≤ Q_Gi ≤ Q_Gi^max,  i ∈ ng        (8)

T_i^min ≤ T_i ≤ T_i^max,  i ∈ nt        (9)

S_Li ≤ S_Li^max,  i ∈ nl        (10)

3. HYENA OPTIMIZER ALGORITHM
The Hyena Optimizer (HO) algorithm is imitated from the hyena, a complicated, intelligent animal [19]. When a new food source is found, a hyena produces a typical sound, similar to human laughter, to communicate the finding. Mating between hyenas involves a number of short bouts of intercourse at short intervals, and hyena cubs are born almost fully developed, with adult markings, closed eyes, and small ears. Hyenas do not regurgitate food for their young, and male hyenas play no part in raising the cubs. For recognition they use multiple sensory procedures, which are also used during social decision making, including relationships. The HO algorithm is modelled mathematically as follows.

Encircling prey. The mathematical model [22] of encircling the prey is:

H_c = |l · S_Q(y) − S(y)|        (11)

S(y + 1) = S_Q(y) − B · H_c        (12)

where H_c is the distance between the hyena and the prey, S_Q is the position of the prey, and S is the position of the hyena. The vectors l and B are designed as follows:

l = 2 · rd1        (13)

B = 2c · rd2 − c        (14)

c = 5.0 − (iteration · (5.0 / max_iteration))        (15)
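A single encircling update of equations (11)-(15) can be sketched as follows, with scalars standing in for the vectors of the text. The function name and arguments are illustrative choices, not notation from the paper.

```python
import random

def hyena_encircle_step(s, s_prey, iteration, max_iteration, rng=random):
    """One encircling move of a hyena at position s toward prey at s_prey."""
    c = 5.0 - iteration * (5.0 / max_iteration)  # eq. (15): c decays from 5 to 0
    l = 2.0 * rng.random()                       # eq. (13)
    B = 2.0 * c * rng.random() - c               # eq. (14)
    H = abs(l * s_prey - s)                      # eq. (11): hyena-prey distance
    return s_prey - B * H                        # eq. (12): updated position
```

Because c decays to zero, the step size B·H shrinks over the iterations, so late updates place the hyena essentially on the prey.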

Real power loss reduction by hyena optimizer algorithm … (Lenin Kanagasabai)


where iteration = 1, 2, 3, …, max_iteration; the constant "c" lies in the range 5.00-6.00, and rd1 and rd2 are arbitrary vectors in [0, 1].

Hunting. The following equations articulate the hunting procedure:

H_c = |l · S_h − S_k|        (16)

S_k = S_h − B · H_c        (17)

G_c = S_k + S_k+1 + … + S_k+N        (18)

where S_h defines the position of the best hyena, S_k the positions of the other hyenas, and the count N is calculated as:

N = countNumbers(S_h, S_h+1, …, (S_h + E))        (19)

where E is an arbitrary vector in [0.5, 1].

Attacking the prey. The attack has been modelled mathematically, and through it the vector L_h is specified. The controlling term Z varies from 5.00 to 0.00 over the course of the iterations; values with |Z| < 1 oblige the hyenas to attack, written as:

S(y + 1) = L_h / N        (20)

S(y + 1) updates the positions of the search agents according to the position of the best search agent, and the search for prey is the exploration phase of the algorithm. The position of a hyena in the group indicates its place in the vector L_h. In this phase the hyenas use Z with |Z| > 1 (random values greater than 1 or less than −1), which compels the search agents to move away from the prey.

Step a: Initialize the hyena population.
Step b: Choose the primary parameters.
Step c: Calculate the fitness values of the agents.
Step d: Find the premium search agent in the exploration space.
Step e: Define the group of optimal solutions sequentially.
Step f: Update the positions of the search agents.
Step g: Check whether any search agent goes beyond the boundary of the exploration space, and confine it to regulate it.
Step h: Based on the updated search agents' fitness values, revise the solution.
Step i: Update the group of hyenas based on the search agents' fitness values.
Step j: Stop if the criterion is met, or else move back to Step e.

Initialize the hyena population and parameters
Calculate the fitness of each agent
While (x < maximum number of iterations) do
  Update the agent positions
  Update the parameters
  Initiate corrective action if a violation of the search space is found
  Calculate the agents' fitness
  Update the solution
  Update the group values
  x = x + 1
End while
Output
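Steps a-j can be sketched as a minimal Python loop on a generic objective. This is an illustrative sketch, not the author's implementation: the group of optimal solutions is taken as the current best `cluster_size` hyenas, its mean stands in for the group vector of equation (20), and all parameter values are assumptions.

```python
import random

def hyena_optimizer(objective, dim, n_hyenas=20, max_iter=100,
                    bounds=(-5.0, 5.0), cluster_size=5, seed=7):
    """Sketch of the HO steps a-j: rank by fitness, form a cluster of good
    solutions, and move every hyena toward the cluster, with step sizes
    decaying as c shrinks from 5 to 0. Illustrative assumptions throughout."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_hyenas)]
    for it in range(max_iter):
        pop.sort(key=objective)              # step c/d: fitness and best agent
        cluster = [p[:] for p in pop[:cluster_size]]  # step e: optimal group
        c = 5.0 - it * (5.0 / max_iter)      # eq. (15)
        for i in range(n_hyenas):
            new = []
            for d in range(dim):
                B = 2.0 * c * rng.random() - c          # eq. (14)
                l = 2.0 * rng.random()                  # eq. (13)
                # cluster mean stands in for L_h / N of eq. (20)
                target = sum(s[d] for s in cluster) / cluster_size
                H = abs(l * target - pop[i][d])         # eq. (16)
                # step f/g: move toward the cluster, confined to the bounds
                new.append(min(hi, max(lo, target - B * H)))
            pop[i] = new                     # steps h/i: revise the solutions
    return min(pop, key=objective)
```

Early iterations (large c) produce wide, exploratory jumps, while later iterations collapse the pack onto the cluster mean, mirroring the exploration-then-attack behaviour described above.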

Int J Adv Appl Sci, Vol. 9, No. 3, September 2020: 186 – 191



4. SIMULATION RESULTS
First, the validity of the proposed Hyena Optimizer (HO) algorithm is tested on the standard IEEE 14-bus system; the comparison results are presented in Table 1, and Figure 1 provides the comparison of real power loss.

Table 1. Comparison results
Control variables   ABCO [20]   IABCO [20]   Projected HO
V1                  1.0600      1.0500       1.0110
V2                  1.0300      1.0500       1.0120
V3                  0.9800      1.0300       1.0040
V6                  1.0500      1.0500       1.0110
V8                  1.0000      1.0400       0.9000
Q9                  0.1390      0.13200      0.10000
T56                 0.9790      0.96000      0.90000
T47                 0.9500      0.9500       0.90000
T49                 1.0140      1.00700      1.00000
Ploss (MW)          5.92892     5.50031      4.70108

Figure 1. Comparison of real power loss

Then, the IEEE 300-bus system [18] is used as the test system to validate the performance of the Hyena Optimizer (HO) algorithm. Table 2 shows the comparison of the real power loss obtained after optimization, and Figure 2 gives the comparison of the real power values. The real power loss is considerably reduced compared with the other standard reported algorithms.

Table 2. Comparison of real power loss
Method         Ploss (MW)
EGA [21]       646.2998
EEA [21]       650.6027
CSA [22]       635.8942
Projected HO   617.8926

Figure 2. Real power loss comparison




5. CONCLUSION
In this work the Hyena Optimizer (HO) efficiently solved the reactive power problem. The behaviour of the hyena has been mathematically modelled and employed to solve the problem; the attack on the prey is modelled through the vector L_h, and the exploration and exploitation capabilities of the search are thereby improved. The standard IEEE 14- and 300-bus test systems were used to analyze the performance of the HO algorithm. The loss is reduced while the control variables remain within their limits.


Int J Adv Appl Sci, Vol. 9, No. 3, September 2020: 186 – 191




Real power loss reduction by hyena optimizer algorithm … (Lenin Kanagasabai)


International Journal of Advances in Applied Sciences (IJAAS) Vol. 9, No. 3, September 2020, pp. 192~200 ISSN: 2252-8814, DOI: 10.11591/ijaas.v9.i3.pp192-200


Effect of heating temperature on quality of bio-briquette empty fruit bunch fiber

Nofriady Handra1, Anwar Kasim2, Gunawarman3, Santosa4

1Mechanical Engineering Department, Institut Teknologi Padang, Indonesia
2,4Faculty of Industrial Agriculture Technology, Andalas University, Indonesia
3Mechanical Engineering Department, Andalas University, Indonesia

Article Info

Article history:
Received Feb 29, 2020
Revised Apr 23, 2020
Accepted May 2, 2020

Keywords:
Bio-briquette
Calorific value
Empty fruit bunch
Fiber
Temperature

ABSTRACT

Empty Fruit Bunches (EFB) are one of the palm oil industry's wastes; they are quite plentiful and currently not used optimally. Biomass is one of the renewable energy resources that plays an important role in the world. Bio-briquettes are manufactured through densification of waste biomass by applying certain processes. This research investigated the effect of mold temperatures of 150 ºC, 200 ºC, and 250 ºC on the calorific value and toughness of the briquette material. The toughness was tested using the ASTM D 440-86 R02 standard. An Arduino program was used to set the heating time of the mold, which was 20 minutes, and a thermal controller was used to adjust the temperature variation. The average mold pressure was 58 psi. The highest heating value, 5256 cal/g, was obtained at a mold temperature of 250 ºC, and the lowest, 4117 cal/g, at 150 ºC. Meanwhile, the briquette toughness test at a mold temperature of 200 ºC showed good results: the average loss of fiber particles was only 4.17 %, because the adhesion between particles provided by the lignin and cellulose in the fiber functions optimally at this temperature, so the briquettes suffered only minor damage.

This is an open access article under the CC BY-SA license.

Corresponding Author: Anwar Kasim, Faculty of Industrial Agriculture Technology, Andalas University, Padang West Sumatera, Indonesia Email: anwar_ks@yahoo.com

1. INTRODUCTION

Energy has recently emerged as an important issue in the world, affecting economic development along with population growth, fuel emission problems, and the depletion of oil reserves. The relentlessly increasing consumption of fossil energy has triggered a fuel crisis. The idea of limiting oil and gas procurement encourages the development of renewable fuels. Indonesia, as an agricultural country, has an abundant stock of agricultural products that can be used as renewable energy sources, for example biomass. Of about 50,000 megawatts of potential, only 320 megawatts of biomass energy have been used, just about 0.64 % of the total biomass that Indonesia has [1]. Energy from biomass is considered an ideal renewable energy source since it offers benefits such as lower sulfur content, CO2-neutral emissions, and abundant availability in the form of agricultural product wastes. Therefore, this renewable energy has become well known as a potential alternative energy. Bioenergy produced from biomass is a promising, limitless, and sustainable energy source. In addition, it may help reduce the growing environmental, economic, and technological problems caused by fossil fuel depletion [2]. The motivation for utilizing bioenergy as a substitute for fossil energy, either in

Journal homepage: http://ijaas.iaescore.com



heating or electricity generation, is driven by increasing environmental concerns and energy dependency. The development of biomass has been a crucial issue for several decades, and it will remain of interest in the future since this energy is clean, renewable, and carbon-neutral. Solutions are needed to tackle this issue; one of them is utilizing alternative energy, especially renewable energy. The move from oil to renewable energy sources, which are plentifully available in Indonesia, can lower the level of dependence on petroleum. Biomass serves as one of the sources of renewable energy. Biomass is produced from the photosynthesis of plants and from plant derivatives. Biomass promises more value since it is a genuinely green energy: its consumption does not disrupt the environment, because it is renewable and the CO2 released by combustion can be reabsorbed by plants, so it is, in other words, effectively zero-emission. In general, biomass waste can be obtained from the plantation industry, agricultural products, or industries that use raw materials coming from forests [2]. Biomass briquettes can be used as a biofuel substitute for coal and charcoal. They are commonly used in developing countries, where cooking fuels are not easily found, and to heat industrial boilers for generating electricity from steam; to produce the heat for the boiler, briquettes are burned together with coal. Interestingly, briquettes have been used since before the recorded era. They are made of agricultural waste, can replace fossil fuels such as oil and coal, and can be utilized to heat boilers in manufacturing plants. Briquettes serve as a renewable energy source since they do not add fossil carbon to the atmosphere [3]. Several previous studies have examined the use of briquettes in various ways to alleviate the aforementioned problem.
One of the technologies used is densification, as it improves the handling characteristics of raw materials and enhances the volumetric calorific value of the biomass. Previous studies on briquette technology have been carried out [4, 5]; they reviewed the effects of raw materials, temperature, binders, and pressure. Some also reviewed the combustion properties of densified palm biomass, e.g. moisture and ash content and calorific value; burning rate was discussed in certain sections [6]. One of the agricultural products traded in the domestic market as well as exported is palm oil. The production of palm oil leaves palm Empty Fruit Bunches (EFB), which are considered waste and are not re-processed. This waste leads to space and transportation issues for disposal, which eventually bear extra cost for the industry. For disposal, empty fruit bunches are commonly burned, a process that leads to other environmental problems such as air pollution and odor. Uncontrolled disposal of empty fruit bunches to land leads to large piles of biomass in palm groves, which eventually undergo anaerobic decomposition. Therefore, considering potential issues such as pollution, empty fruit bunch waste should be managed properly to enable the use of biomass as renewable energy and to produce heat energy for the public. Factors affecting the process of making briquettes are the adhesive level and the optimization of the waste biomass briquette. The former increases the briquette calorific value, as seen in the added carbon content. However, an excessive adhesive level will hinder the burning of the briquette, since the pores are filled with adhesive and the briquette becomes too dense. Moreover, the adhesive level is affected by the type of waste biomass. To be safely used at home, the smoke of the burning briquette must be reduced. Therefore, the adhesive level of the waste biomass briquette to be used as fuel needs to be optimized.
The idea of this research comes from the amount of agro-industrial waste that is not utilized properly in the community, among which is empty fruit bunches. Empty fruit bunch waste can actually be processed into solid fuel in the form of briquettes. Biomass waste can be used directly as fuel, converted first into charcoal, or pressed into briquettes. The purpose of pressing is to obtain better combustion quality and ease of use and handling. Biomass often cannot be used directly as fuel because of its poor physical properties, such as low energy density, and problems with handling, storage, and transportation, so diversification into products such as briquettes or pellets is needed [7]. Converting biomass into better forms can improve its quality as a fuel: increased fuel power, combustion efficiency, a more uniform shape, drier products, and greater mass density. Many previous researchers have discussed making briquettes from various materials such as charcoal briquettes, bark wood, wood chips, paddy straw, sawdust briquettes, palm husk, etc. Some of these briquettes use adhesives as reinforcement in the mixture. Meanwhile, in this study biomass briquettes were made without adhesives by regulating the heating temperature in the mold, so that an ideal temperature is able to produce briquettes with maximum heating value. The use of EFB as an energy source in the form of briquettes, in addition to providing financial benefits, will also help in environmental preservation. EFB can be made into charcoal by a relatively simple process; for utilization as charcoal, according to Guritno, it needs to be further processed into charcoal briquettes to increase its density and give it a regular shape. In addition, the heat energy of EFB, 18.79 MJ/kg, makes it a very promising alternative energy source.

The use of briquettes as fuel can save time and costs because briquettes have a relatively high calorific value [8]. The novel aspect of this research is the manufacture of briquettes without adhesives by determining appropriate variations of the heating temperature so as to obtain good briquette properties. Some studies that are



relevant to and supportive of this research include Hasan et al. [9], who found that binderless briquettes can be formed when continuous heating is applied within the temperature range of 150 °C to 210 °C; this means that lignin, which serves as a natural binder for the briquette, has been released by the heating process. Briquette making can be done by the hot-mold method, which uses biomass raw material that has not yet been carbonized. The purpose of heating is to activate the natural adhesives (lignin and hemicellulose) found in the raw material. The natural adhesives contained in biomass can be activated by increasing the temperature. Lignin has amorphous thermoplastic properties and can be activated at low compacting pressures and temperatures of around 60 °C. Activation of natural adhesives with high compacting pressure and increased temperature can produce briquettes and pellets with high durability [10]. This research aimed to determine the effect of mold temperatures of 150 ºC, 200 ºC, and 250 ºC on the calorific value and toughness of binderless briquettes. In this study, briquettes were made using the hot-mold method without binder, in the hope of eliminating water-based binders in briquette manufacture. The purpose of heating is to heat the fiber raw material so that the lignin content (one of the substances contained in the fiber) melts and then hardens again at room temperature, so that the thermoplastic properties of the raw material serve as the binder in making the briquettes.

2. RESEARCH METHOD
2.1. Raw materials
The empty fruit bunch fibers and bunches were acquired from TAL Ltd at Kuansing Taluak Kuantan, Riau, Indonesia. Fresh fiber and empty fruit bunches were used. The empty fruit bunch waste used in this study is shown in Figure 1.

Figure 1. Waste of Empty Fruit Bunch (EFB)

Figure 2a shows how the empty fruit bunches were dried directly under the sun. The fiber was chopped manually to obtain a fiber fineness of 1-3 mm; chopping produced both fiber and powder in various lengths, as can be seen in Figure 2b. An Oxygen Bomb Calorimeter (OBC) Type Ignition Unit 2901EE was used to determine the heating values of both the empty fruit bunch fiber and the briquettes. Meanwhile, a KW 0600378 digital scale (500 g × 0.01 g) was used to weigh the fiber and adhesive in each sample.

Figure 2. (a) Empty fruit bunches dried under the open sun; (b) empty fruit bunch fiber after chopping




2.2. Mold
The process in this research used one unit of briquette molding equipment with a heating system on the mold cylinder, as shown in Figure 3a. Heating temperatures of 150 ºC, 200 ºC, and 250 ºC were applied with a holding time of 20 minutes at each temperature. During the holding time, the heated fiber undergoes a compaction process, both physically and chemically. Densification temperature is an important factor that affects the combustion properties, especially the calorific value: when the operating temperature is relatively high, the calorific value of the densified products increases compared to that of the raw materials. The samples were tested for toughness, heating value, ash content, and charcoal content. Cattaneo described briquetting as compressing the materials into small logs with a diameter between 30 mm and 100 mm and of any length depending on the technology used, either screw or piston compression [11]. To produce high-quality briquettes, characteristics such as strength and durability must be considered [12]. In this study, a briquette mold was used as the support molding to make briquettes of a predetermined classification; the mold was pressed with a plunger, resulting in optimal density. Figure 3b shows the dimensions of the cylindrical mold, 100 mm length and 40 mm diameter, producing briquettes 5 cm high, with a heating system using an Arduino UNO program and a thermal controller. Briquette volume is determined by the formula [2, 13]:

V = π r² t

(1)
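The paper states only that an Arduino UNO with a thermal controller held the mold at the set temperature for a 20-minute hold; the actual firmware is not given. Below is a hypothetical Python simulation of such a setup as a simple on/off (bang-bang) thermostat acting on an assumed first-order thermal model; all rates, time constants, and names are illustrative assumptions, not measurements from the rig.

```python
def simulate_mold_heating(setpoint_c=200.0, hold_minutes=20, dt_s=1.0,
                          ambient_c=27.0, heat_rate=0.9, cool_tau=600.0):
    """Bang-bang (on/off) thermostat on a first-order thermal plant.
    heat_rate: heater contribution in C/s; cool_tau: cooling time constant in s.
    All numeric parameters are illustrative assumptions."""
    temp = ambient_c
    hold_s = hold_minutes * 60.0
    t, hold_start = 0.0, None
    history = []
    while hold_start is None or t - hold_start < hold_s:
        heater_on = temp < setpoint_c                      # thermostat decision
        dT = (heat_rate if heater_on else 0.0) - (temp - ambient_c) / cool_tau
        temp += dT * dt_s
        if hold_start is None and temp >= setpoint_c:
            hold_start = t                                 # 20-minute hold begins
        history.append(temp)
        t += dt_s
    return history

hist = simulate_mold_heating(setpoint_c=200.0, hold_minutes=20)
```

The same decision logic (switch the heater on below the setpoint, off above it, and time the hold) is what a minimal Arduino sketch with a thermocouple reading would implement.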

Figure 3. (a) Heater system using the Arduino program and thermal controller, and (b) mold of briquettes with heater

Most bio-briquettes produced have a density above 1000 kg/m³, verified by sinking the briquette into water as a quality test. Lignocellulosic materials have a physical upper density limit of 1500 kg/m³. Processes using high pressure, e.g. pellet presses, mechanical pistons, or some screw extruders, make compact briquettes with densities in the range of 1200 to 1400 kg/m³. Meanwhile, briquettes pressed with a hydraulic piston are less dense, i.e. below 1000 kg/m³; producing very dense briquettes was considered ineffective, as combustion problems may arise [14]. In this study, the density was calculated using (2):

ρ = B / V

(2)

where ρ = density (g/cm³), B = initial mass of the briquette (g), and V = briquette volume (cm³). In this case, the density was set at 0.8 g/cm³ while the briquette mass (B) was set at 50.3 g.
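As a quick check of equations (1) and (2) with the values stated above (40 mm mold diameter, i.e. r = 2 cm; 5 cm briquette height; B = 50.3 g), a short Python computation reproduces the 0.8 g/cm³ design density:

```python
import math

# Mold radius r = 2.0 cm (40 mm diameter), briquette height t = 5 cm,
# briquette mass B = 50.3 g, all taken from the text above.
r_cm, t_cm, mass_g = 2.0, 5.0, 50.3

volume_cm3 = math.pi * r_cm ** 2 * t_cm   # equation (1): V = pi * r^2 * t
density = mass_g / volume_cm3             # equation (2): rho = B / V
```

This gives V ≈ 62.8 cm³ and ρ ≈ 0.80 g/cm³, consistent with the set density.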

3. RESULTS AND DISCUSSION

Empty fruit bunches as an organic material have basic physical and chemical properties. The physical and chemical properties of empty fruit bunches [3] can be seen in Table 1.



Table 1. Chemical composition of empty fruit bunch fiber

No  Chemical component  Composition (wt%)
1   Lignin              15-17
2   Cellulose           36-42
3   Hemicellulose       25-27
4   Ash                 0.7-6

This research concerns the conversion of agricultural waste into biomass briquettes. The main constituents of empty fruit bunches are lignin, cellulose, and hemicellulose. Lignin is a natural polymer whose main function is as an adhesive between plant layers. Lignin has functional groups such as hydroxyl, carbonyl, and methoxy groups and has low solubility in water, so it has potential as a binder. Every material that contains cellulose and lignin is considered suitable for densification. This research was mainly about converting empty fruit bunch waste into bio-briquettes and determining their calorific value.

3.1. Calorific value
Calorific value is one of the key fuel characteristics (Table 2). It can be defined as the energy released per kilogram when the fuel is burnt; therefore, it can be used to measure the competitiveness of a processed fuel in a specific market situation. Other factors may influence market value, such as burning characteristics and ease of handling, but the calorific value remains the most critical factor and should be considered when choosing the input raw materials [12, 14].

Table 2. Calorific values of the raw materials

Material               Calorific value (kcal/kg)
Bagasse                4380
Sawdust briquette      3860
Cotton stalks/chips    4252
Bamboo dust            4160
Coffee husk            4045
Tobacco waste          2910
Tea waste              4237
Paddy straw            3469
Mustard straw          4200
Wheat straw            4300
Sunflower stalk        4300
Jute waste             4428
Palm husk              3900
Soya bean husk         4170
Barks wood             1270
Forestry waste         3000
Coir pitch             4146
Rice husk              3200
Wood chips             4785
Groundnut shell        4524

The moisture content of the empty fruit bunch fiber largely determines the quality of the bio-briquettes produced. A bio-briquette with low water content will produce a high heating value, as it is made from fiber with low moisture content. Conversely, the higher the water content of a briquette, the lower its calorific value, because the heat generated is first used to evaporate the water in the fiber before useful combustion heat is produced. In other words, the water content is directly related to the heating value. The calorific value determines the quality of briquettes: the higher the heating value, the better the quality of the briquettes produced. Low water content, ash content, and volatile matter increase the heating value, as does high carbon content. The calorific value test aims to determine the combustion heat produced by the briquettes. Densification temperature is an important factor that affects the combustion properties, especially the calorific value; when the operating temperature is relatively high, the calorific value of the densified products increases compared to that of the raw materials [15]. The calorific value test results are shown in Figure 4. The highest calorific value was produced at a temperature of 250 ºC with a value of 5256 cal/g, while the lowest was produced at 150 ºC with a value of 4117.8 cal/g. One of the factors is that at a temperature of 250 ºC,



the briquette has a low moisture content and a high charcoal content, so the briquette produces an optimal heating value at this temperature. Based on the experimental work performed on the binderless empty fruit bunch briquettes, it can be concluded that the physical appearance of a binderless palm biomass briquette is best when the smallest particle size is used. This is mainly due to the increase in contact surface area when a smaller size is used, which stimulates the release of lignin and in turn improves its effectiveness as a natural binder.
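For comparison with coal on a conventional basis, the reported heating values in cal/g can be converted to MJ/kg (1 cal = 4.184 J). A short Python check, using the two values reported above:

```python
def cal_per_g_to_mj_per_kg(hv_cal_g):
    """Convert a heating value from cal/g to MJ/kg (1 cal = 4.184 J)."""
    return hv_cal_g * 4.184 / 1000.0  # cal/g -> kJ/kg, then -> MJ/kg

hv_250 = cal_per_g_to_mj_per_kg(5256.0)    # mold temperature 250 C
hv_150 = cal_per_g_to_mj_per_kg(4117.8)    # mold temperature 150 C
```

This gives roughly 22.0 MJ/kg at 250 ºC and 17.2 MJ/kg at 150 ºC, so the best briquette is broadly in the range of low-rank coal briquettes.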

Figure 4. Graph of calorific value vs mold temperature

Figure 5 shows an analysis of the water content: at a mold temperature of 150 ºC the water content was 6.2 %, higher than at mold temperatures of 200 ºC and 250 ºC. Generally, this value is still below the maximum of 8 % set by SNI 01-6235-2000 on the quality standards of charcoal briquettes. The water content decreased with mold temperature: the higher the mold temperature applied to the briquettes, the lower the water content, because the water in the fiber evaporates during the molding process. The lower the water content in the briquettes, the higher the heating value produced. The cellulose content in the fiber affects the amount of charcoal bound in the briquette: the greater the cellulose content, the greater the level of bound charcoal, because the main constituent of cellulose is carbon [16]. The greater the bound carbon content of the raw material, the higher the calorific value. As shown in the graph of Figure 5, the highest level of charcoal, 49.4 %, was produced at a mold temperature of 250 ºC, while the lowest, 16.2 %, was produced at 150 ºC; a higher temperature produces higher levels of charcoal, and high levels of charcoal produce higher heating values. Ash is the part remaining after combustion that contains no carbon. Its main element is silica, which has an adverse effect on the heating value: the higher the ash content, the lower the quality of the briquettes, since high ash content reduces the heating value. In the ash content test, the briquettes produced the highest ash content of 12.2 % at a mold temperature of 250 ºC, and the lowest of 5.7 % at 150 ºC.
Increasing the mold temperature therefore increases the amount of ash in the briquette, which affects briquette quality, especially the calorific value.

Figure 5. Graph of the relation between water, ash, and charcoal contents vs mold temperature



3.2. Bio-briquette combustion test
One of the main variables of a bio-briquette is the maximum burning time achievable in a combustion process, since economically this value is beneficial for the user. Figure 6 indicates that the fire lasted longest in the bio-briquette sample molded at 150 ºC, with a time of 11.3 minutes, while the fastest burnout occurred in the sample molded at 250 ºC, with a time of 6.3 minutes. The higher the heating temperature applied, the faster the bio-briquette burns out.

Figure 6. Graph of the relation between ignition time vs mold temperature

The combustion test results indicate that a bio-briquette sample takes about 15 to 20 seconds from lighting until it catches fire and burns completely. Bio-briquettes molded at 250 ºC went through a faster combustion process compared to those heated at 200 ºC and 150 ºC. This indicates that at a mold temperature of 250 ºC the fiber has already dried at such a high temperature that it is slightly charred around the outer side. The result of the combustion of the bio-briquette samples is shown in Figure 7.

Figure 7. Burning the samples at each temperature: (a) 150 ºC, (b) 200 ºC, and (c) 250 ºC

In the briquette ignition test, the samples on average ignited well and produced a large flame. Furthermore, a blue flame appeared in all three samples, as indicated by the arrows, although no significant blue flame was produced. The briquetting process is also highly affected by the fraction size: a coarser fraction requires higher compacting power. A bigger fraction size decreases the binding force in the materials; as a result, the briquette decays faster when burned, which is considered a disadvantage. A bigger fraction size increases the required compacting pressure and lowers the quality of the briquette. On the other hand, a smaller fraction size brings an advantage in the drying process [17]: drying ends faster yet achieves better quality. Hence, the materials must be set at a suitable fraction size and dried to a certain moisture content before briquetting [18].




4. CONCLUSION

The use of waste as an alternative fuel is beneficial in reducing environmental pollution while producing bio-briquette products for use by the community. The results show that the heating temperature applied in molding empty fruit bunch fiber briquettes affected the resulting heating value and the toughness of the bio-briquettes. The highest heating value was obtained at a mold temperature of 250 ºC with a value of 5256 cal/g, and the lowest at 150 ºC with 4117 cal/g. Manufacturing and testing binderless empty fruit bunch bio-briquettes with this heated-mold method at suitable mold temperatures was capable of producing briquettes with heating values equivalent to those of coal briquettes.

ACKNOWLEDGMENTS The authors would like to express their gratitude to the supervisors, Agroindustry Laboratory of Andalas University, and Laboratory of Mechanical Engineering Department of Institut Teknologi Padang (ITP).

REFERENCES
[1] Z. Helwani, et al., "Effect of process variables on the calorific value and compressive strength of the briquettes made from high moisture EFB," IOP Conf. Series: Materials Science and Engineering, vol. 345, p. 012020, 2018.
[2] H. Nofriady, et al., "Effect of binders on EFB bio-briquettes of fuel calorific value," IJASEIT Journal, vol. 8, no. 4, pp. 1071-1076, 2018.
[3] M. Kumar, et al., "Biomass briquette production: A propagation of non-convention technology and future of pollution free thermal energy sources," American Journal of Engineering Research (AJER), vol. 4, pp. 44-50, 2015.
[4] H. Nofriady, et al., "Effect of particles size on EFB bio-briquettes of calorific value," JTM, vol. 7, no. 1, pp. 56-62, 2017.
[5] P. Wilaipon, "The effects of briquetting pressure on banana-peel briquette and the banana waste in Northern Thailand," American Journal of Applied Sciences, vol. 6, no. 1, pp. 167-171, 2009.
[6] H. M. Faizal, et al., "Review on densification of palm residues as a technique for biomass energy utilization," Jurnal Teknologi, vol. 78, no. 9-2, pp. 9-18, 2016.
[7] H. Djeni, "Engineering of wood pellet making and testing results," Forest Products Research Journal, vol. 30, no. 2, pp. 144-154, 2012.
[8] R. Yuli, et al., "Effect of temperature and concentration of adhesives on the characteristics of bioarang briquettes made from palm oil empty fruit bunches by pyrolysis process (in Bahasa)," Konversi, vol. 4, no. 2, pp. 16-22, 2015.
[9] F. Hasan Mohd., et al., "Characteristics of binderless palm biomass briquettes with various particle sizes," Jurnal Teknologi, vol. 77, no. 8, pp. 1-5, 2015.
[10] S. Danang, et al., "Characteristics of briquettes from sengon wood processing waste by hot print method," Proceedings of the National Seminar on Science & Technology Applications III, pp. A-394-A-400, 2012.
[11] A. B. Nasrin, et al., "Oil palm biomass as potential substitution raw materials for commercial biomass briquettes production," American Journal of Applied Sciences, vol. 5, no. 3, pp. 179-183, 2008.
[12] D. Cattaneo, "Briquetting - a forgotten opportunity," Wood Energy, The University of Brescia, 2003.
[13] H. M. Faizal, et al., "Review on densification of palm residues as a technique for biomass energy utilization," Jurnal Teknologi, vol. 78, no. 9-2, pp. 9-18, 2016.
[14] K. Basar, et al., "Frequency and temperature-dependent conductivity of superionic conducting glass (AgI)x(AgPO3)1-x," Proceedings of the International Conference on Mathematics and Natural Sciences, p. 881, 2006.
[15] A. Kurniawan, "Analysis of combustion characteristics of palm oil industrial waste briquettes with varying adhesive temperatures and wall furnaces of 300 °C, 400 °C and 500 °C using the constant heat flux (HFC) method," 2015.
[16] M. Ervando, "Effect of mold temperature variations on the characteristics of sengon wood briquettes on compression pressure of 600 psig (in Bahasa)," 2013. [Online]. Available: https://lib.unnes.ac.id/17992/1/5201408077.pdf
[17] K. Jaan, et al., "Determination of physical, mechanical and burning characteristics of polymeric waste material briquettes," Estonian Journal of Engineering, vol. 16, no. 4, pp. 307-316, 2010.
[18] C. Antwi-Boasiako and B. B. Acheampong, "Strength properties and calorific values of sawdust-briquettes as wood-residue energy generation source from tropical hardwoods of different densities," Biomass and Bioenergy, vol. 85, pp. 144-152, 2016.

Effect of heating temperature on quality of bio-briquette empty fruit bunch fiber … (Anwar Kasim)



BIOGRAPHIES OF AUTHORS

Currently, the researcher is a lecturer in the mechanical engineering study program at the Institut Teknologi Padang (ITP). He completed his master's program in mechanical and materials engineering at the University of Kebangsaan Malaysia in 2010. He is currently finishing his doctoral program at Universitas Andalas, Padang, on processing Empty Fruit Bunch (EFB) biomass waste into briquette products as a source of fuel energy.

Prof. Dr. rer. nat. Anwar Kasim is a lecturer in the Faculty of Agricultural Technology at Universitas Andalas, Padang, Indonesia. He earned his Bachelor's degree in Agricultural Technology and his Ph.D. in Forestry and Wood Technology from the Technische Universitaet Hamburg, Germany, in 1990. His recent work concerns food technology and engineering, he has conducted research on agroforestry since 1995, and he holds three national patents on Uncaria Gambier.

Prof. Dr. Eng. Gunawarman is a member of the teaching staff in the Mechanical Engineering Department at Andalas University, Padang. He received his Master of Engineering from the Magister Program in Materials Engineering, Graduate School of Institut Teknologi Bandung, Bandung, Indonesia, in 1995, and his Doctor of Engineering from Toyohashi University of Technology (TUT), Toyohashi, Aichi-ken, Japan, in 2002.

Prof. Dr. Santosa is a lecturer in Agricultural Engineering at the Faculty of Agricultural Technology, Andalas University, Padang. His field of research is agricultural mechanization, with a specialization in regional machine management systems. He completed his master's program in 1993 and his doctorate in 2002, both at Institut Pertanian Bogor. His research fields include agricultural machinery management systems, agricultural equipment and machinery design, the techno-economics of agricultural tools and machines, and system simulation and computer programming.

Int J Adv Appl Sci, Vol. 9, No. 3, September 2020: 192 – 200


International Journal of Advances in Applied Sciences (IJAAS) Vol. 9, No. 3, September 2020, pp. 201~210 ISSN: 2252-8814, DOI: 10.11591/ijaas.v9.i3.pp201-210

201

Evidential reasoning based decision system to select health care location
Md. Mahashin Mia, Atiqur Rahman, Mohammad Shahadat Hossain
Department of Computer Science and Engineering, University of Chittagong, Bangladesh

Article Info

ABSTRACT

Article history:

The Bangladeshi public's demand for safe health care is rising rapidly with the improvement of living standards. However, the allocation of limited and unevenly distributed medical resources is undermining the assurance of safe health care. Therefore, new hospital construction with a rational allocation of resources is imminent and significant. Site selection for a hospital is one of the crucial policy decisions taken by planners and policy makers. The process of hospital site selection is inherently complicated because it involves many factors to be measured and evaluated. These factors are expressed in both objective and subjective ways, and a hierarchical relationship exists among them. In addition, it is difficult to measure qualitative factors quantitatively, resulting in incomplete data and hence uncertainty. It is therefore essential to address this uncertainty with an apt methodology; otherwise, the decision to choose a suitable site will be unsound. This paper demonstrates the application of a novel method, the belief rule-based inference methodology RIMER, in an intelligent decision system (IDS) that can identify a suitable hospital site by taking account of a large number of criteria of both subjective and objective nature.

Received Mar 20, 2020
Revised Apr 24, 2020
Accepted May 11, 2020

Keywords:
Belief rule base
Evidential reasoning
Intelligent decision system
Multiple criteria decision analysis
Uncertainty

This is an open access article under the CC BY-SA license.

Corresponding Author: Atiqur Rahman, Department of Computer Science & Engineering, University of Chittagong, Chittagong-4331, Bangladesh. Email: bulbul.cse.cu@gmail.com

1. INTRODUCTION

Selecting a suitable site for a hospital involves multiple criteria, such as location, safety, environment, parking space, land cost, risk, transportation cost and utility cost, which are quantitative and qualitative in nature [1, 2]. Numerical data, which uses numbers, is considered quantitative and can be measured with 100% certainty [2]. Qualitative data, on the contrary, is descriptive in nature, defining concepts or imprecise characteristics or qualities of things [3]. Such data cannot describe a thing with certainty, since it lacks precision and carries ambiguity, ignorance and vagueness. Consequently, it can be argued that qualitative data involves uncertainty, since it is difficult to measure the concepts, characteristics or quality of a thing with 100% certainty. "Quality of location" is an example of an equivocal, linguistic term, so it is difficult to extract its correct semantics (meaning). However, it can be evaluated using referential values such as excellent, good, average and bad. The qualitative criteria considered in selecting a hospital location therefore involve many uncertainties, and they should be treated with an appropriate methodology.

Journal homepage: http://ijaas.iaescore.com



RIMER is connected to evidential reasoning (ER), a multi-criteria decision analysis (MCDA) method [4]. ER deals with problems consisting of both quantitative and qualitative criteria under various uncertainties such as incomplete information, vagueness and ambiguity [4]. The ER approach was developed on the basis of decision theory, in particular utility theory [5], and artificial intelligence, in particular the theory of evidence [5, 6]. It uses a belief structure to model a judgment with uncertainty. A qualitative attribute such as location or safety needs to be evaluated using linguistic referential values such as excellent, good, average and bad [5, 6]. This requires human judgment to evaluate the attributes against the mentioned referential values. In this way, the issue of uncertainty can be addressed and a more accurate and robust decision can be made. The belief rule-based inference methodology RIMER [7] addresses this issue by proposing a belief structure that assigns degrees of belief to the various referential values of the attributes. Road map: Section 2 briefly presents the belief rule base inference methodology RIMER. Section 3 demonstrates the application of BRB to the hospital site selection assessment problem. Section 4 presents the results and achievements. Finally, Section 5 concludes the research.

2. RIMER TO DEVELOP IDS

In RIMER, a belief rule base (BRB) can capture complicated nonlinear causal relationships between antecedent attributes and consequents, which is not possible with traditional IF-THEN rules. A BRB is used to model domain-specific knowledge under uncertainty, and the ER approach is employed to facilitate inference. This section introduces BRB as a knowledge representation scheme under uncertainty as well as the inference procedures of RIMER.

2.1. Modeling domain knowledge using BRB

Belief rules are the key constituents of a BRB and include belief degrees; they are an extended form of traditional IF-THEN rules. In a belief rule, each antecedent attribute takes referential values and each possible consequent is associated with belief degrees [8]. The knowledge representation parameters are rule weights, attribute weights and the belief degrees of the consequent attribute, which are not available in traditional IF-THEN rules. A belief rule can be defined in the following way:

R_k: IF (P1 is A1^k) AND (P2 is A2^k) AND ... AND (P_Tk is A_Tk^k)
     THEN {(C1, β1k), (C2, β2k), ..., (CN, βNk)},  (βjk ≥ 0, Σ_{j=1}^{N} βjk ≤ 1),
     with rule weight θk and attribute weights δk1, δk2, δk3, ..., δkTk, k ∈ {1, ..., L}  (1)

where P1, P2, ..., PTk represent the antecedent attributes of the kth rule; Ai^k (i = 1, ..., Tk; k = 1, ..., L) represents one of the referential values of the ith antecedent attribute Pi in the kth rule; Cj is one of the consequent reference values of the belief rule; and βjk (j = 1, ..., N; k = 1, ..., L) is the belief degree to which the consequent reference value Cj is believed to be true. If Σ_{j=1}^{N} βjk = 1 the kth rule is said to be complete; otherwise, it is incomplete. Tk is the number of antecedent attributes used in the kth rule, L is the number of belief rules in the rule base, and N is the number of possible consequents in the rule base. For example, a belief rule to assess the accessibility of a hospital can be written in the following way:

R_k: IF Transportation_cost is good AND Traffic_access is good AND Neutral_Location is good
     THEN Accessibility is {(Excellent, 0.00), (Good, 1.00), (Average, 0.00)}  (2)
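As an illustrative sketch (class and attribute names are assumed here, not taken from the authors' implementation), a belief rule of the form (1), instantiated with the accessibility example, can be represented as:

```python
from dataclasses import dataclass, field

@dataclass
class BeliefRule:
    antecedents: dict            # attribute name -> required referential value A_i^k
    consequent: dict             # consequent grade C_j -> belief degree beta_jk
    rule_weight: float = 1.0     # theta_k
    attribute_weights: dict = field(default_factory=dict)  # delta_ki

    def is_complete(self) -> bool:
        # A rule is complete when its consequent belief degrees sum to 1.
        return abs(sum(self.consequent.values()) - 1.0) < 1e-9

# The accessibility rule of equation (2):
rule = BeliefRule(
    antecedents={"Transportation cost": "Good",
                 "Traffic access": "Good",
                 "Neutral location": "Good"},
    consequent={"Excellent": 0.0, "Good": 1.0, "Average": 0.0},
)
print(rule.is_complete())  # True: 0 + 1 + 0 = 1
```

The `is_complete` check encodes the completeness condition stated below equation (1).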

where {(Excellent, 0.00), (Good, 1.00), (Average, 0.00)} is a belief distribution for the accessibility consequent, stating that the degree of belief associated with Excellent is 0%, with Good 100% and with Average 0%. In this belief rule the total degree of belief is (0 + 1 + 0) = 1, hence the assessment is complete.

2.2. BRB inference using ER

The ER approach [9] was developed to handle multiple attribute decision analysis (MADA) problems having both qualitative and quantitative attributes. Different from traditional MADA approaches, ER represents a MADA problem by a decision matrix, or belief expression matrix, in which each attribute of an alternative is described by a distributed assessment using a belief structure. The inference procedure of a BRB inference system consists of several components: input transformation, rule activation weight



calculation, rule update mechanism, and aggregation of the rules of a BRB using ER [10-13]. The input transformation of a value of an antecedent attribute Pi consists of distributing the value into belief degrees of the different referential values of that antecedent. This is equivalent to transforming an input into a distribution on the referential values of an antecedent attribute by using their corresponding belief degrees [14]. The input value of Pi, the ith antecedent attribute of a rule, along with its belief degree εi, is shown in (3); the belief degree εi of the input value is assigned by the expert in this research:

H(Pi, εi) = {(Aij, aij), j = 1, ..., ji}, i = 1, ..., Tk  (3)

Here H denotes the assessment of the belief degree assigned to the input value of the antecedent attribute. In (3), Aij is the jth referential value of the input Pi; aij is the belief degree of the referential value Aij, with aij ≥ 0 and Σ_{j=1}^{ji} aij ≤ 1 (i = 1, ..., Tk); and ji is the number of referential values. For example, the input 0.82 for Accessibility is equivalently transformed to {(Excellent, 0.81), (Good, 0.19), (Average, 0.00)}. The input value of an antecedent attribute is collected from the expert in terms of linguistic values such as 'Excellent', 'Good', 'Average' and 'Bad'. This linguistic value is then assigned a degree of belief εi based on expert judgment, which is distributed over the belief degrees aij of the different referential values Aij [Excellent, Good, Average, Bad] of the antecedent attribute. This input transformation procedure is elaborated by (4) and (5) below. When a hospital is located, say, 1.1 km from the place, it can be both excellent and average; it is then important to know with what degree of belief it is excellent and with what degree of belief it is average. This can be calculated with the following formulas.

β_{n,i} = (h_{n+1} − h) / (h_{n+1} − h_n),  if h_n ≤ h ≤ h_{n+1}  (4)

β_{n+1,i} = 1 − β_{n,i}  (5)

Here the degree of belief β_{n,i} is associated with the evaluation grade whose referential value is h_n, and β_{n+1,i} with the adjacent grade whose referential value is h_{n+1}. Taking h_n = 1 km for 'excellent' (the location of the hospital) and h_{n+1} = 1.5 km for 'average', the distribution of the degrees of belief for a hospital located 1.3 km away is obtained from (4) and (5) as: {(Excellent, 0.4), (Average, 0.6), (Good, 0), (Bad, 0)}. When the kth rule is activated, its activation weight ωk is calculated by using the following formula [15].

ωk = θk·ak / Σ_{j=1}^{L} θj·aj,  with ak = Π_{i=1}^{Tk} (a_i^k)^{δ̄ki}  and  δ̄ki = δki / max_{i=1,...,Tk} {δki}  (6)

where δ̄ki is the relative weight of Pi used in the kth rule, calculated by dividing the weight of Pi by the maximum weight of all the antecedent attributes of the kth rule. By doing so, the value of δ̄ki becomes normalized, meaning that it lies between 0 and 1. ak = Π_{i=1}^{Tk} (a_i^k)^{δ̄ki} is the combined matching degree, calculated with a multiplicative aggregation function. When the kth rule as given in (1) is activated, the incompleteness of the consequent of a rule can also result from its antecedents due to lack of data: an incomplete input for an attribute leads to an incomplete output in each of the rules in which the attribute is used. The original belief degree β̄ik in the ith consequent Ci of the kth rule is therefore updated based on the actual input information as [10-13].

Evidential reasoning based decision system to select health care location (Md. Mahashin Mia)


204

ISSN: 2252-8814

βik = β̄ik × ( Σ_{t=1}^{Tk} τ(t,k) Σ_{j=1}^{Jt} a_tj ) / ( Σ_{t=1}^{Tk} τ(t,k) )  (7)

where τ(t,k) = 1 if Pt is used in defining Rk (t = 1, ..., Tk) and 0 otherwise; β̄ik is the original belief degree and βik is the updated belief degree.
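As a sketch under the definitions above (illustrative names, not the authors' implementation), the activation-weight calculation of (6) might look as follows:

```python
def activation_weights(rules):
    """Equation (6). `rules` is a list of (theta_k, alphas_k, deltas_k), where
    alphas_k are the matching degrees a_i^k of the rule's antecedents and
    deltas_k their attribute weights. Returns omega_k for every rule."""
    def matching_degree(alphas, deltas):
        d_max = max(deltas)
        ak = 1.0
        for a, d in zip(alphas, deltas):
            ak *= a ** (d / d_max)   # exponent is the normalized weight delta_bar_ki
        return ak

    weighted = [theta * matching_degree(al, de) for theta, al, de in rules]
    total = sum(weighted)
    return [w / total if total else 0.0 for w in weighted]

# Two rules with equal rule and attribute weights but different matching degrees:
w = activation_weights([(1.0, [0.4], [1.0]), (1.0, [0.6], [1.0])])
print(w)  # ~[0.4, 0.6]; activation weights sum to 1
```

Because the weights are normalized by the denominator of (6), the ωk of all rules always sum to one.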

Due to the incomplete input for 'Accessibility', the belief degrees of the connected rules need to be modified to reflect the incompleteness by using (7):

βik = β̄ik × (1.6 / 2) = β̄ik × 0.8,  i = 1, 2, 3;  k = 1, ..., 9  (8)
Therefore 0 < Σ_{i=1}^{3} βik < 1 for all rules associated with 'Cost'. Using the sub-rule base, the assessment result for 'Accessibility' obtained with the IDS is: {(Excellent, 0.66), (Good, 0.23), (Average, 0.02), (Bad, 0.00), (Unknown, 0.09)}, where 'Unknown' means that the output is also incomplete because the input is incomplete. The ER approach is used to aggregate all the packet antecedents of the L rules to obtain the degree of belief of each referential value of the consequent attribute, taking account of the given input values Pi of the antecedent attributes. This aggregation can be carried out using either a recursive or an analytical approach. In this research the analytical approach [14] has been adopted, since it is computationally more efficient than the recursive approach [10-12]: it handles all parameters (rule weight, attribute weight, belief degree, utility, etc.) at once, so no parameter is left out. The conclusion O(Y), consisting of the referential values of the consequent attribute, is generated as illustrated in (9):

O(Y) = S(Pi) = {(Cj, βj), j = 1, ..., N}  (9)

where βj denotes the belief degree associated with the consequent reference value Cj. βj is calculated by the analytical format of the ER algorithm [3], as illustrated in (10):

βj = μ × [ Π_{k=1}^{L} (ωk βjk + 1 − ωk Σ_{j=1}^{N} βjk) − Π_{k=1}^{L} (1 − ωk Σ_{j=1}^{N} βjk) ] / ( 1 − μ × Π_{k=1}^{L} (1 − ωk) )  (10)

with μ = [ Σ_{j=1}^{N} Π_{k=1}^{L} (ωk βjk + 1 − ωk Σ_{j=1}^{N} βjk) − (N − 1) × Π_{k=1}^{L} (1 − ωk Σ_{j=1}^{N} βjk) ]^{-1}
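A minimal sketch of the analytical ER aggregation in (10) (a direct transcription of the standard formula, not the authors' code):

```python
from math import prod

def er_aggregate(omegas, betas):
    """Analytical ER aggregation, equation (10).
    omegas[k]: activation weight of rule k; betas[k][j]: belief degree of
    consequent grade j in rule k. Returns the combined belief degrees beta_j."""
    L, N = len(omegas), len(betas[0])
    s = [sum(bk) for bk in betas]                     # sum_j beta_jk per rule k
    base = prod(1 - omegas[k] * s[k] for k in range(L))
    mu = 1.0 / (sum(
        prod(omegas[k] * betas[k][j] + 1 - omegas[k] * s[k] for k in range(L))
        for j in range(N)) - (N - 1) * base)
    denom = 1 - mu * prod(1 - w for w in omegas)
    return [
        mu * (prod(omegas[k] * betas[k][j] + 1 - omegas[k] * s[k]
                   for k in range(L)) - base) / denom
        for j in range(N)
    ]

# Two equally weighted, fully conflicting rules split the belief evenly:
print(er_aggregate([0.5, 0.5], [[1.0, 0.0], [0.0, 1.0]]))  # ~[0.5, 0.5]
```

A single fully activated rule is returned unchanged, which is a quick sanity check of the formula.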

The final combined result or output generated by ER is represented as {(C1, β1), (C2, β2), (C3, β3), ..., (CN, βN)}, where βj is the final belief degree attached to the jth referential value Cj of the consequent attribute, obtained after combining all activated rules in the BRB by using ER.

2.3. Output of the BRB system

The output of the BRB system is not a crisp/numerical value. It can be converted into one by assigning a utility score to each referential value of the consequent attribute [16]:

H(A*) = Σ_{j=1}^{N} u(Cj) βj  (11)

where H(A*) is the expected score expressed as a numerical value and u(Cj) is the utility score of each referential value. For example, in this paper the overall assessment result {(Excellent, 0.55), (Good, 0.25), (Average, 0.20), (Bad, 0.00)} for a candidate hospital site gives an expected utility score of 0.675, or 68%, which represents good suitability for the hospital location. The RIMER methodology addresses various types of uncertainty, such as incompleteness, ignorance and impreciseness, by using (7) and (12). The incompleteness mentioned occurs due to ignorance, meaning that a belief degree has not been assigned to any specific evaluation grade; this can be represented using (12).


βH = 1 − Σ_{n=1}^{N} βn  (12)

where βH is the belief degree not assigned to any specific grade. If the value of βH is zero, it can be argued that there is no ignorance or incompleteness; if it is greater than zero, there exists ignorance or incompleteness in the assessment.
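Equations (11) and (12) can be sketched together as follows. The utility values below are illustrative assumptions (the paper's calibrated utilities, which yield its 0.675 example, are not given), so the numeric score here differs from the paper's:

```python
def expected_utility(beliefs, utilities):
    # Equation (11): H(A*) = sum_j u(C_j) * beta_j.
    return sum(utilities[grade] * b for grade, b in beliefs.items())

def unassigned_belief(beliefs):
    # Equation (12): belief degree left unassigned to any grade (ignorance).
    return 1.0 - sum(beliefs.values())

result = {"Excellent": 0.55, "Good": 0.25, "Average": 0.20, "Bad": 0.00}
u = {"Excellent": 1.0, "Good": 0.75, "Average": 0.50, "Bad": 0.0}  # assumed utilities
score = expected_utility(result, u)
print(score)                      # ~0.8375 with these assumed utilities
print(unassigned_belief(result))  # ~0.0 -> no ignorance; assessment is complete
```

When the belief degrees sum to less than one, `unassigned_belief` reports the "Unknown" mass seen in the accessibility example above.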

3. BRB IDS ARCHITECTURE

Architectural design represents the structure of the data and program components that are required to build a computer-based system. It also considers the pattern of the system organization, known as the architectural style. The BRB IDS adopts a three-layer architecture [15, 17], consisting of a presentation layer, an application layer and a data processing layer, as shown in Figure 1.

Figure 1. BRB IDS architecture

3.1. System components

The input clarifications of the input antecedents W11 (security ward around), W12 (vandal proof), W13 (open location), W21 (expansion capacity), W22 (parking space), W23 (storey number), W31 (neutral location), W32 (traffic access), W33 (public transport link), W41 (construction cost), W42 (land cost), W51 (land risk), W52 (construction risk) and W53 (time frame and delivery speed) are transformed to referential values by equations (4) and (5) on behalf of the expert. The transformed input clarifications of this BRB system are shown in Table 1.

Table 1. The inputs transformed into referential values

SI No.  Input antecedent  Expert belief  Excellent  Good  Average  Bad
0       W11               0.2            0.05       0.1   0.3      0.55
1       W12               1              1          0     0        0
2       W13               0.8            0.5        0.5   0        0
3       W21               0.5            0.1        0.8   0.1      0
4       W22               1              0.8        0.2   0        0
5       W23               0.9            0.86       0.14  0        0
6       W31               0.5            0.1        0.4   0.5      0
7       W32               1              0.8        0.2   0        0
8       W33               1              0.8        0.2   0        0
9       W41               0.4            0.1        0.5   0.4      0
10      W42               0.5            0.5        0.4   0.1      0
11      W51               1              0.8        0.2   0        0
12      W52               0.6            0.5        0.3   0.1      0.1
13      W53               0.7            0.65       0.2   0.1      0.05
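The numeric-to-referential transformation of (4) and (5) that underlies distributions like those in Table 1 can be sketched as follows (function and label names are illustrative, not the authors' code):

```python
def transform_input(h, grades):
    """Distribute a numeric input h over the two adjacent referential values
    that bracket it, per equations (4)-(5). `grades` is a list of
    (label, reference_value) pairs sorted by reference value."""
    beliefs = {label: 0.0 for label, _ in grades}
    for (lab_n, h_n), (lab_n1, h_n1) in zip(grades, grades[1:]):
        if h_n <= h <= h_n1:
            beliefs[lab_n] = (h_n1 - h) / (h_n1 - h_n)   # equation (4)
            beliefs[lab_n1] = 1.0 - beliefs[lab_n]       # equation (5)
            break
    return beliefs

# Hospital-distance example from section 2.2: 1.0 km -> Excellent, 1.5 km -> Average.
dist = transform_input(1.3, [("Excellent", 1.0), ("Average", 1.5)])
print(dist)  # ~{'Excellent': 0.4, 'Average': 0.6}
```

Only the two grades bracketing the input receive nonzero belief, and the two belief degrees always sum to one.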

3.2. Knowledge base constructed using BRB

In the present paper we worked on the assessment process to select a suitable location for hospital establishment. In order to construct the BRB knowledge base of this system, we designed a BRB framework for site assessment according to the domain expert. The BRB framework for suitable location assessment is illustrated in Figure 2; from the framework, the input factors that determine a suitable hospital location can be observed. The BRB knowledge base contains traditional rules for the assessment, which need to be converted into belief rules.


Figure 2. Hierarchical relationship among location evaluation variables

In such situations, belief rules may provide an alternative way to accommodate different types and degrees of uncertainty in representing domain knowledge. A BRB can be established in the following four ways [16]: (1) extracting belief rules from expert knowledge; (2) extracting belief rules by examining historical data; (3) using previous rule bases if available; and (4) random rules without any prior knowledge. In this paper we constructed the initial BRB from domain expert knowledge. This BRB consists of six sub-rule bases, namely environment and safety (W1), size (W2), accessibility (W3), cost effectiveness (W4), risk (W5) and location of the healthcare center (S). The W4 (cost effectiveness) sub-rule base has two antecedent attributes, each with four referential values; hence this sub-rule base consists of 16 rules. The entire BRB (which consists of six sub-rule bases) comprises (64 + 64 + 64 + 16 + 64 + 1024) = 1296 belief rules. It is assumed that all belief rules have equal rule weight and all antecedents equal weight; the initial belief degree assigned to each possible consequent was set by two experts from the accumulated data. To better handle uncertainties, each belief rule considers the four referential values Excellent (E), Good (G), Average (A) and Bad (B).

Table 2. Initial belief rules of the sub-rule base (cost effectiveness)

Rule No.  Rule weight  IF W41  IF W42  THEN: Excellent  Good  Average  Bad
0         1            E       E       1                0     0        0
1         1            E       G       0.4              0.5   0.1      0
2         1            E       A       0.5              0     0.5      0
3         1            E       B       0.6              0.1   0.1      0.2
4         1            G       E       0                0.8   0.3      0
5         1            G       G       0                0.6   0        0
6         1            G       A       0.33             0.66  0        0
7         1            G       B       0                0.93  0.1      0
8         1            A       E       0                0.8   0.2      0
...
14        1            B       A       0.2              0     0.8      0
15        1            B       B       0                0.06  0.93     0

An example of a belief rule taken from Table 2:

R1: IF W41 is 'E' AND W42 is 'E' THEN cost effectiveness (W4) is {E (1.00), G (0.00), A (0.00), B (0.00)}



3.3. Inference engine using ER

The BRB IDS is designed using the ER approach [15, 17] described in section 2.2, which is similar to traditional forward chaining. Inference with a BRB using the ER approach also involves assigning values to attributes, evaluating conditions and checking whether all of the conditions in a rule are satisfied. The BRB inference process using the ER approach comprises the following steps: input transformation, calculation of the activation weights, calculation of the combined belief degrees of all consequents, belief degree update, and aggregation of the multiple activated belief rules. The input data are of two types, objective and subjective. The input transformation and input clarification of this system were deduced in the previous section and Table 1 by using (4) and (5). After the value assignment for the antecedents and the calculation of the combined matching degrees between the inputs and each rule's antecedents, the next step is to calculate the activation weight of each packet antecedent in the rule base using (6). The belief degrees in the possible consequents of the activated rules are updated using (7). All activated rules are then aggregated using the ER approach to generate combined belief degrees in the possible consequents using (9) and (10). The expected result of the suitable location assessment is then calculated from the different consequent factors. Finally, the inference result for the suitable location consequent, which is not a crisp/numerical value, is converted into a crisp/numerical value for recommendation using (11).

3.4. BRB IDS interface

The system interface is the intermediary that represents the interaction between user and system. Figure 3 shows the BRB system interface of this paper.

Figure 3. Graphical user interface of the IDS

4. RESULTS AND DISCUSSION

The previous sections discussed the RIMER method and how to implement it; this section looks at the results of applying the method to the different alternatives. Figure 4 shows the assessment distribution, which must be produced first by employing the transformation equations. Any measurement of quality can be translated to the same set of grades as the top attribute, which eases further analysis. The assessments given by the decision maker (DM) are fed into the IDS and the aggregated results are yielded at the main criteria level (Figure 5).




Figure 4. Assessment scores of suitable location based on sub criteria (E-excellent, G-good, A-average, B-bad)

Figure 5. The overall assessment (alternatives) (DoB-degree of belief)

Figure 6. Overall assessment for suitable location

The assessment outcomes for the three alternatives (locations), based on a simulated data set, are presented in Figure 6, which shows the overall assessment outcome derived from the location information. The result of this system is measured as a percentage for recommendation, and the output was generated using the output utility (11). In this paper, a utility score of (90-100)% is assigned to 'Excellent', (85-89)% to 'Good', (80-84)% to 'Average' and (0-79)% to 'Bad'. In the case study, the location assessment of the three alternatives using this system, the manual system and the benchmark is shown in Figure 6; the historical results were considered the benchmark. It can be observed that the IDS-generated result deviates less from the benchmark than the manual system does. Hence, it can be argued that the IDS output is more reliable than the manual system, and it can be concluded that carrying out suitable location evaluation with the IDS will play an important role in decision making by addressing the uncertainty issue. The possible expected utilities of each alternative generated by the IDS (Figure 6) are based on the utility values given for each grade above, and the alternatives are ranked by expected utility as follows: Highway Road > Kandipar > Racecourse
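The ranking step can be sketched in a few lines. The paper reports only the ordering of the alternatives; the expected-utility numbers below are placeholders for illustration:

```python
# Placeholder expected utilities per alternative (not the paper's figures).
alternatives = {"Highway Road": 0.82, "Kandipar": 0.74, "Racecourse": 0.61}

# Rank alternatives by descending expected utility, as in section 4.
ranking = sorted(alternatives, key=alternatives.get, reverse=True)
print(" > ".join(ranking))  # Highway Road > Kandipar > Racecourse
```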

5. CONCLUSION

The development and application of a belief rule based IDS to choose a suitable place using the attributes of different types of alternatives has been presented. The prototype IDS embeds a novel methodology known as RIMER, which allows the handling of various types of uncertainty; hence it can be considered a robust tool for selecting a suitable hospital location. Consequently, the prototype IDS



can handle the various types of uncertainty found in suitable-area assessment domain knowledge as well as in the attributes/criteria of an alternative. The system also provides a percentage recommendation, which is more reliable and informative than a traditional expert opinion. The prototype IDS can, however, only be used to select a good location from the attributes of an alternative.

REFERENCES
[1] M. Sonmez, G. Graham, J. B. Yang, and G. D. Holt, "Applying evidential reasoning to prequalifying construction contractors," Journal of Management in Engineering, vol. 18, no. 3, pp. 111-119, 2002.
[2] L. M. Given, The Sage Encyclopedia of Qualitative Research Methods, Los Angeles, CA: Sage Publications, 2008. [Online]. Available: http://www.pearson.ch/1449/9780273722595/An-Introduction-to-Geographical.aspx
[3] D. L. Xu and J. B. Yang, "Introduction to multi-criteria decision making and the evidential reasoning approach," Working Paper Series, no. 0106, Manchester School of Management, UMIST, pp. 1-21, 2001. [Online]. Available: http://www.umist.ac.uk/management
[4] L. Zadeh, "A simple view of the Dempster-Shafer theory of evidence and its implication for the rule of combination," The AI Magazine, vol. 7, no. 2, pp. 85-90, 1986.
[5] K. Sentz and S. Ferson, Combination of Evidence in Dempster-Shafer Theory, Sandia National Laboratories, SAND 2002-0835, 2002.
[6] J. Bragge, et al., "Bibliometric analysis of multiple criteria decision making/multiattribute utility theory," in Multiple Criteria Decision Making for Sustainable Energy and Transportation Systems, pp. 259-268, 2010.
[7] J. B. Yang and P. Sen, "Multiple attribute design evaluation of large engineering products using the evidential reasoning approach," Journal of Engineering Design, vol. 8, no. 3, pp. 211-230, 1997.
[8] T. Mahmud and M. S. Hossain, "An evidential reasoning-based decision support system to support house hunting," International Journal of Computer Applications, vol. 57, no. 21, pp. 51-58, 2012.
[9] J. B. Yang, J. Liu, J. Wang, H. S. Sii, and H. W. Wang, "Belief rule-base inference methodology using the evidential reasoning approach - RIMER," IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 36, no. 2, pp. 266-285, 2006.
[10] D. L. Xu, et al., "Inference and learning methodology of belief-rule-based expert system for pipeline leak detection," Expert Systems with Applications, vol. 32, no. 7, pp. 103-113, 2007.
[11] J. B. Yang, et al., "Optimization models for training belief-rule-based systems," IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 37, no. 4, pp. 569-585, 2007.
[12] M. J. A. Patwary, S. Akter, and T. Mahmud, "An expert system to detect uterine cancer under uncertainty," IOSR Journal of Computer Engineering (IOSR-JCE), vol. 16, no. 5, pp. 36-47, 2014.
[13] A. M. Norwich and I. B. Turksen, "A model for the measurement of membership and the consequences of its empirical implementation," Fuzzy Sets and Systems, vol. 12, no. 1, pp. 1-25, 1984.
[14] R. S. Pressman, Software Engineering: A Practitioner's Approach, 6th ed., McGraw-Hill, pp. 373-374, 2005.
[15] T. Mahmud, K. N. Rahman, and M. S. Hossain, "Evaluation of job offers using the evidential reasoning approach," Global Journal of Computer Science and Technology, vol. 13, no. 2, 2013.
[16] T. Mahmud and M. Mia, "Intelligent decision system for evaluation of job offers," in Proc. 1st National Conference on Intelligent Computing and Information Technology (NCICIT), 2013.

BIOGRAPHIES OF AUTHORS

Mr. Md. Mahashin Mia received his Bachelor of Science (B.Sc.) and Master of Engineering (M.Engg.) degrees from the Department of Computer Science and Engineering at the University of Chittagong, Chittagong, Bangladesh. By profession, he is an assistant registrar at Chittagong Veterinary and Animal Sciences University, Chittagong, Bangladesh. He is currently conducting his M.Phil. research at the Department of Computer Science and Engineering, University of Chittagong. His current research interest lies in the field of machine learning.




Mr. Atiqur Rahman received his Bachelor of Science (B.Sc.) and Master of Engineering (M.Engg.) degrees from the Department of Computer Science and Engineering at the University of Chittagong, Chittagong, Bangladesh. He has worked in that department as an Assistant Professor since April 2016, having previously been a lecturer there. He is currently conducting his Ph.D. research under the Chinese Government Scholarship (CGS) Program at Chongqing University of Posts and Telecommunications, Chongqing, China. His current research interest lies in the field of edge-computing-based IoT systems.

Dr. Mohammad Shahadat Hossain is a Professor of Computer Science and Engineering at Chittagong University, Bangladesh. He earned his M.Phil. and Ph.D. from the University of Manchester Institute of Science and Technology (UMIST). He has published several scholarly articles in refereed journals, and was awarded the prestigious Commonwealth Academic Staff Fellowship in 2009 and the European Commission-sponsored Erasmus Mundus Fellowship in 2011. He has successfully completed a number of research projects as a co-investigator. His current research areas include the modeling of risk and uncertainty using evolutionary computing techniques, as well as the investigation of pragmatic software development tools and methods for information systems in general and for GIS in particular. He continues his research at the intersection of computing and real-world issues such as economics, business, engineering and the environment. He is the innovator of the SDA (Spatial Domain Analysis) approach used to facilitate socio-economic research, and has earned a reputation as a Tawhidi scientist who uses this method to develop pragmatic computer models of reality. His jointly authored book, "Computing Reality", published by Aoishima Research Institute (Blue Ocean Press), Tokyo, Japan, has contributed significantly to the knowledge of computer science.

Int J Adv Appl Sci, Vol. 9, No. 3, September 2020: 201 – 210


International Journal of Advances in Applied Sciences (IJAAS) Vol. 9, No. 3, September 2020, pp. 211~219 ISSN: 2252-8814, DOI: 10.11591/ijaas.v9.i3.pp211-219


Spectroscopic properties of lithium borate glass containing Sm3+ and Nd3+ ions

I. Kashif1, A. Ratep2, S. Ahmed3
1Faculty of Science, Department of Physics, Al-Azhar University, Egypt
2,3Faculty of Women for Arts, Science & Education, Department of Physics, Ain Shams University, Egypt

Article Info

Article history:
Received Jan 24, 2020
Revised Apr 23, 2020
Accepted May 11, 2020

Keywords:
Borate glass containing Sm3+ and Nd3+ ions
DTA
FTIR
Optical properties
XRD

ABSTRACT

Lithium borate glass samples mixed with different concentrations of Sm3+ and Nd3+ ions were prepared by the melt-quenching technique. The structure, vibrational groups and spectral properties of the glass samples were investigated using X-ray diffraction, FTIR, UV/Vis/NIR and photoluminescence spectroscopy. X-ray diffraction confirmed that the lithium borate glass samples containing Sm3+ and Nd3+ ions are amorphous. Luminescence spectra of the glass samples excited at 400 nm were recorded; three luminescence bands were observed in the visible region, attributed to the Sm3+ and Nd3+ ions. These results indicate that the glass samples produce orange emission and can be used in the development of materials for LEDs and optical devices. The functional vibrational groups of the glass matrix were studied using FTIR spectroscopy.

This is an open access article under the CC BY-SA license.

Corresponding Author: I. Kashif, Department of Physics, Al-Azhar University, Nasr City, Cairo, Egypt. Email: ismailkashif52@yahoo.com

1. INTRODUCTION

Borate glasses act as host substances for studying the character and structure of luminescence and for useful practical applications. Specifically, borate glass, both free of and containing rare earth or transition elements, is a promising substance for nonlinear optics, quantum electronics, laser generation, scintillators, thermoluminescent dosimeters, detectors, transformers of ionizing radiation, and many other applications [1-11]. Borate glasses are vital glass formers and perform a major function in diverse applications. The BO3 groups' vibrations and the number of non-bridging oxygens (NBOs) increase in the borate glass structure when the B2O3 content increases from 10 mol% to 30 mol% [12-14]. Silicate glasses are a host material for the luminescence of rare-earth and transition metal ions because of their good optical and mechanical properties in addition to excellent chemical durability [15].

The physical and spectroscopic properties of lithium borate glasses containing Sm3+ have been studied. The rise of Sm3+ content in the glass samples increases the glass density due to the formation of BO4 units. A number of transition peaks were identified in the absorption spectra of glass containing Sm3+ compared to samarium-free glass. These glass samples emitted a strong peak at 598 nm, which corresponds to the 4G5/2→6H7/2 transition, indicating that these glasses are suitable for LED applications [16, 17]. Among the rare-earth ions, the Sm3+ ion is notable: demand for Sm3+ is growing in various fluorescent devices, high-density optical storage, color displays, undersea communication and visible solid-state lasers because of its vivid emission in the orange-red region [18]. The 4G5/2 level of Sm3+ possesses relatively high quantum efficiency and exhibits numerous populating as well as quenching emission channels [19]. Some authors have studied the optical properties of Sm3+ doped into various host glass networks [20-22]. Neodymium is one of the most studied rare-earth ions and has been found to have vast applications in photonic devices [23, 24].

Beyond the studies mentioned above, many other studies of the synthesis and the optical and physical properties of different glass systems containing Nd3+ or Sm3+ have been made, but few studies consider their presence together in glass samples. The effect of changing the ratio of one of them while holding the ratio of the second element constant has been studied: the emission intensity decreased with increasing Nd3+ at constant Sm3+ content, while the emission intensity increased with increasing Sm3+ at constant Nd3+ content [25, 26]. In this study, we examine the effect of replacing Sm3+ by Nd3+ on the structural, thermal, optical and spectroscopic properties of these glasses. Judd-Ofelt parameters were calculated from the observed absorption spectra for Sm3+ and Nd3+ ions, as well as the emission intensity.

Journal homepage: http://ijaas.iaescore.com

2. EXPERIMENTAL WORK

Sm3+- and Nd3+-doped glasses were synthesized in the borate glass system by the conventional melt-quenching method. The starting chemicals were reagent-grade H3BO3, Li2CO3, Sm2O3, and Nd2O3 with 99.99% purity. The chemical compositions of the prepared glasses are shown in Table 1.

Table 1. The code and composition of the glass samples (mol %)
Sample no. | Li2O | B2O3 | Nd2O3 | Sm2O3
1 | 33 | 66 | 1 | -
2 | 33 | 66 | 0.75 | 0.25
3 | 33 | 66 | 0.5 | 0.5
4 | 33 | 66 | 0.25 | 0.75
5 | 33 | 66 | - | 1

The mixtures were melted in porcelain crucibles at 1100 °C for 2 h. The amorphous structure of each sample was confirmed by X-ray diffraction with a Philips PW3700 diffractometer using CuKα1 radiation. The density was measured using the Archimedes method. Optical absorption spectra of the samples were recorded using a UV-Vis spectrometer (JASCO V570). The IR spectra of the glasses were recorded using a JASCO FTIR 4100 spectrophotometer (Michelson interferometer type) in the wavenumber region from 400 to 2000 cm-1. Differential thermal analysis of the glass samples was carried out using a Shimadzu DTA-50 analyzer. Emission spectra were measured using a JASCO FP-6300 spectrofluorometer.
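The Archimedes density measurement mentioned above follows the standard buoyancy relation; a minimal sketch with hypothetical weights and immersion-liquid density (xylene is assumed here purely for illustration — the paper does not state which liquid was used):

```python
# Archimedes method: density from a sample's weight in air and its apparent
# weight while immersed in a liquid of known density.
# All numerical values below are hypothetical, for illustration only.

def archimedes_density(w_air: float, w_liquid: float, rho_liquid: float) -> float:
    """Return sample density (g/cm^3) from weight in air, apparent weight
    in the immersion liquid, and the liquid's density."""
    return w_air / (w_air - w_liquid) * rho_liquid

# Made-up weights (grams), with xylene (~0.865 g/cm^3) as the liquid:
rho = archimedes_density(w_air=2.500, w_liquid=1.600, rho_liquid=0.865)
print(round(rho, 3))  # → 2.403
```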

3. RESULTS AND DISCUSSION

Figure 1 demonstrates the XRD patterns of the prepared glass samples containing different Nd and Sm oxide contents, which indicate the amorphous nature of the samples.

Figure 1. The XRD patterns of glass samples containing different concentrations of Nd and Sm oxides

The glass density tends to increase with increasing Sm2O3 content, as shown in Figure 2. This is due to the change in the structural atom arrangement when Sm2O3 substitutes for Nd2O3 in the Li2O-B2O3 glass network,



and the density of Sm2O3 (8.347 g/cm3) is greater than that of Nd2O3 (7.24 g/cm3). The increase in sample density also reflects the molecular weight of samarium being higher than that of any other component in the glass samples.

Figure 2. The relation between the density and samarium oxide content

Figure 3 shows the DTA curves obtained for the Sm2O3-Nd2O3 doped lithium borate glasses. The figure indicates the presence of an endothermic peak Tg (glass transition temperature), an exothermic peak Tc (crystallization temperature) and an endothermic peak Tm (melting temperature), which are tabulated in Table 2. Tg represents the strength or rigidity of the glassy structure [27].

Figure 3. The DTA curve of glass samples.

The difference Δx = Tx − Tg between the onset of crystallization Tx and the glass transition temperature Tg is employed as a measure of glass-forming ability [28].

Table 2. Thermal stability, glass transition, onset of crystallization, and melting temperatures
Sample | Tg (°C) | Tx (°C) | Tm (°C) | Δx (°C)
1 | 519 | 689 | 839 | 170
2 | 533 | 671 | 828 | 138
3 | 529 | 697 | 832 | 168
4 | 530 | 676 | 822 | 146
5 | 530 | 646 | 835 | 116
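The Δx stability criterion can be checked numerically against Table 2; a minimal sketch (Tg and Tx values transcribed from the table, the 100 °C threshold taken from the text):

```python
# Glass-forming ability criterion: Δx = Tx - Tg, with Δx > 100 °C read
# as good thermal stability. Tg/Tx values are transcribed from Table 2.

samples = {
    1: {"Tg": 519, "Tx": 689},
    2: {"Tg": 533, "Tx": 671},
    3: {"Tg": 529, "Tx": 697},
    4: {"Tg": 530, "Tx": 676},
    5: {"Tg": 530, "Tx": 646},
}

delta_x = {n: t["Tx"] - t["Tg"] for n, t in samples.items()}
for n, dx in delta_x.items():
    print(f"sample {n}: Δx = {dx} °C, stable: {dx > 100}")
# → all five samples give Δx between 116 and 170 °C, i.e. stable
```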

According to the DTA curves, the values of Δx were calculated, showing the impact of substituting Sm for Nd on the glass-forming ability. From Table 2, the value of Δx for all samples is > 100 °C, which means that all glass samples have good glass-forming ability and thermal stability.

Figure 4 shows the FTIR spectra of glasses doped with Nd3+ and Sm3+ ions at different concentrations. Three regions define the borate glass transmission spectra: the primary region is the band at 1200-1600 cm-1, the second region spans 800 to 1200 cm-1 and the last 600 to 800 cm-1. The primary bands arise from the stretching relaxation of the B-O bond of trigonal BO3 units, the second region is attributed to BO4 units, and the third is due to the bending vibrations of B-O-B linkages inside the borate network [29-31]. Doping borate glass with rare earth oxides results in the conversion of BO3 units into tetrahedral BO4 units and creates non-bridging oxygens. Every BO4 unit is connected to two other units; the band at 485 cm-1, due to O-Sm or O-Nd, shifted to higher wavenumber with growing Sm concentration.

Figure 5 shows the Vis-NIR absorption spectra acquired from the lithium borate glasses doped with Nd3+ and Sm3+ at different concentrations, revealing the electronic f-f transition bands of Nd3+ listed in Table 3. This result is compared with previously reported values [32]. Figure 6 shows the optical absorption spectra of the lithium borate glass doped with 1 mol % Sm2O3 or Nd2O3. The observed absorption bands are assigned to the appropriate electronic f-f transitions within the Sm3+ ion, as shown in Table 4.

Figure 4. The IR spectra of glasses doped with Nd3+ and Sm3+ ions at different concentrations

Figure 5. The Vis-NIR absorption spectra obtained from the lithium borate glasses doped with Nd3+ and Sm3+ at different concentrations

Table 3. The 4f transition levels of Nd3+ doped in lithium borate glasses compared with reported values [32]
Transition 4I9/2 → | Wavelength (nm) | Wavenumber (cm-1) | Reported wavenumber (cm-1)
2P1/2 | 428 | 23365 | 23140
2G9/2 | 472 | 21186 | 21171
4G9/2 | 510 | 19607 | 19544
4G7/2 | 524 | 19084 | 19018
4G5/2 | 582 | 17182 | 17167
2H11/2 | 624 | 16026 | 16026
4F9/2 | 680 | 14706 | 14854
4S3/2 | 746 | 13405 | 13460
4F5/2 | 802 | 12469 | 12573
4F3/2 | 868 | 11521 | 11527
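As a sanity check, the wavelength and wavenumber columns of Table 3 can be cross-verified with the standard conversion wavenumber (cm-1) = 10^7 / wavelength (nm); a small sketch using the tabulated values:

```python
# Consistency check for Table 3: wavenumber (cm^-1) = 1e7 / wavelength (nm).
# Both lists are transcribed directly from the table.

wavelengths_nm = [428, 472, 510, 524, 582, 624, 680, 746, 802, 868]
tabulated_cm1 = [23365, 21186, 19607, 19084, 17182, 16026, 14706, 13405, 12469, 11521]

for lam, ref in zip(wavelengths_nm, tabulated_cm1):
    calc = 1e7 / lam
    # The computed wavenumbers agree with the tabulated ones to ~1 cm^-1.
    assert abs(calc - ref) < 2, (lam, calc, ref)

print("all Table 3 wavenumbers are internally consistent")
```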



Figure 6. The optical absorption spectra of the lithium borate glass doped with 1 mol % Sm2O3 or Nd2O3

Table 4. The 4f transition levels of Sm3+ doped in lithium borate glasses
Transition | Wavelength (nm) | Wavenumber (cm-1)
6H5/2 → 4F7/2 | 400 | 25000
6H5/2 → 6F9/2 | 1066 | 9380
6H5/2 → 6F7/2 | 1214 | 8237
6H5/2 → 6F5/2 | 1358 | 7363
6H5/2 → 6F3/2 | 1458 | 6858

The optical spectra of the glass containing both Nd2O3 and Sm2O3 show fifteen distinct absorption bands at 346, 428, 472, 510, 524, 582, 680, 746, 802, 868, 400, 1066, 1214, 1358, and 1458 nm, due to the transitions 4I9/2 → 4D1/2, 2P1/2, 2G9/2, 4G9/2, 4G7/2, 4G5/2, 4F9/2, 4S3/2, 4F5/2 and 4F3/2 for the 4f levels of Nd3+ and 6H5/2 → 4F7/2, 6F9/2, 6F7/2, 6F5/2 and 6F3/2 for the 4f levels of Sm3+, respectively. From Figure 5, the intensity of the absorption band at 582 nm is found to decrease as the content of Sm3+ increases. The combined doping does not alter the level positions of the Nd3+ and Sm3+ ions. Moreover, increasing the Nd2O3 content in the glass causes the absorption bands to become sharper. The optical band gap Eopt was determined using the relation αhν = A(hν − Eopt)^n, where A is a constant and the power n indicates the transition type, with n = 2 corresponding to an indirect transition. Figure 7 shows the indirect transition obtained by plotting (αhν)^1/2 vs. hν; extrapolating the straight line to the hν axis gives the indirect band gaps of the studied samples. The values of the indirect band gaps were 3.45, 3.4, 3.43, 3.19 and 3.41 eV. The lowest optical band gap energy (Eopt) is noted in the sample containing 0.75 mol % samarium oxide. The addition of rare earth oxide into the glasses increases the number of non-bridging oxygens and hence generates different oxidation states because of the mixed ions within the bridging oxygen.

Figure 7. The indirect transition obtained by plotting (αhν)^1/2 vs. hν
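The band-gap extraction described above (extrapolating the linear part of the Tauc plot to the hν axis) can be sketched as follows; note the absorption data here are synthetic, generated from an assumed gap, not the measured spectra of the paper:

```python
# Indirect band gap from a Tauc plot: fit the linear region of
# (αhν)^(1/2) vs hν and extrapolate the straight line to the hν axis.
# The data below are synthetic, generated from an assumed gap of 3.40 eV.

import numpy as np

E_gap_true = 3.40                  # assumed indirect gap (eV)
hv = np.linspace(3.5, 4.2, 30)     # photon energies in the linear region (eV)
tauc = 2.0 * (hv - E_gap_true)     # ideal linear (αhν)^(1/2) behaviour

slope, intercept = np.polyfit(hv, tauc, 1)
E_gap = -intercept / slope         # x-intercept of the fitted straight line
print(round(E_gap, 2))             # → 3.4
```

On real spectra the fit window would be restricted to the visually linear portion of the curve before extrapolating.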

Table 5 shows the calculated (fcal) and experimental (fexp) oscillator strengths of the glass system containing Sm3+, together with the RMS deviation. The oscillator strengths of the various transitions (experimental and theoretical) were calculated, and from these the Judd-Ofelt parameters were obtained [33, 34]. The RMS deviation δrms was calculated using the following relation [35-37]:


δrms = √[ Σ(fcal − fexp)² / (N − 3) ]   (1)

where N is the total number of energy levels. Table 6 shows the measured (fexp) and theoretical (fcal) oscillator strengths of the glass system containing Nd3+, with the RMS deviation. From Table 6, the value of δrms is very low (< 1), which indicates that the J-O theory is valid [38, 39]. The RMS values imply a good fit between the measured fexp and the theoretical fcal oscillator strengths; the samples show only a slight difference between fexp and fcal. Three Judd-Ofelt parameters of the Sm3+- and Nd3+-doped glass samples were obtained. The Ω2 parameter describes the asymmetry of the environment, or the Sm3+-O2- ligand covalence, because the samarium ions are found in different coordination environments. Sometimes the samarium has the same coordination; however, there may be a change in the crystal field due to a deviation in the samarium position.
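Equation (1) is straightforward to evaluate; a minimal sketch with illustrative oscillator strengths (not the values of Tables 5-6), where the denominator N − 3 reflects the three fitted Judd-Ofelt parameters:

```python
# RMS deviation between calculated and experimental oscillator strengths,
# Eq. (1): δ_rms = sqrt( Σ(f_cal - f_exp)^2 / (N - 3) ), with N the number
# of levels and 3 the number of fitted J-O intensity parameters.
# The f values below are illustrative only, not taken from Tables 5-6.

import math

def rms_deviation(f_cal, f_exp):
    n = len(f_cal)
    sq_sum = sum((c - e) ** 2 for c, e in zip(f_cal, f_exp))
    return math.sqrt(sq_sum / (n - 3))

f_cal = [0.34, 0.84, 1.04, 0.59, 0.091]   # x10^-6, hypothetical
f_exp = [0.34, 0.83, 1.09, 0.54, 0.083]
print(rms_deviation(f_cal, f_exp))        # ≈ 0.0508
```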

Table 5. Experimental energies (Eexp) and experimental (fexp) and calculated (fcal) oscillator strengths (×10-6) for the energy levels of the Sm3+ glasses
6H5/2 → | Eexp (cm-1) | Sample 2 fcal / fexp | Sample 3 fcal / fexp | Sample 4 fcal / fexp | Sample 5 fcal / fexp
6F3/2 | 6858 | 0.295 / 0.535 | 0.337 / 0.343 | 0.237 / 0.400 | 0.972 / 0.979
6F5/2 | 7363 | 0.582 / 0.959 | 0.841 / 0.832 | 0.511 / 0.957 | 1.80 / 1.76
6F7/2 | 8237 | 1.02 / 1.56 | 1.04 / 1.09 | 0.822 / 0.974 | 2.61 / 2.70
6F9/2 | 9380 | 0.681 / 0.869 | 0.591 / 0.537 | 0.532 / 1.01 | 1.63 / 1.51
6F11/2 | 10683 | 0.111 / 0.110 | 0.0914 / 0.0830 | 0.0855 / 0.0952 | 0.259 / 0.138
RMS ×10-6 | | 0.124 | 0.882 | 0.015 | 0.952

Table 6. Experimental energies (Eexp) and experimental (fexp) and calculated (fcal) oscillator strengths (×10-6) for the energy levels of the Nd3+ glasses
4I9/2 → | Eexp (cm-1) | Sample 1 fcal / fexp | Sample 2 fcal / fexp | Sample 3 fcal / fexp | Sample 4 fcal / fexp
4F3/2 | 11520.74 | 0.928 / 0.623 | - / - | - / - | - / -
4F5/2 | 12468.83 | 2.49 / 3.21 | 1.59 / 1.64 | 0.871 / 0.911 | 1.07 / 1.08
4S3/2 | 13404.83 | 2.36 / 1.72 | 1.60 / 1.45 | 0.953 / 0.864 | 1.08 / 0.971
4F9/2 | 14705.88 | 0.192 / 0.130 | 0.126 / 0.0598 | 0.0732 / 0.0438 | - / -
4G5/2 | 17182.13 | 4.21 / 3.06 | 1.92 / 1.39 | 2.30 / 1.67 | 2.54 / 1.82
4G7/2 | 19083.97 | 1.35 / 0.400 | 0.776 / 0.373 | 0.471 / 0.214 | 0.605 / 0.322
4G9/2 | 19607.84 | 0.550 / 0.332 | 0.331 / 0.295 | 0.167 / 0.114 | 0.227 / 0.0577
2G9/2 | 21186.44 | 0.390 / 0.105 | 0.237 / 0.112 | 0.117 / 0.115 | 0.159 / 0.292
2P1/2 | 23364.49 | 0.264 / 0.0467 | 0.150 / 0.0453 | 0.0606 / 0.0285 | 0.0986 / 0.0276
RMS ×10-6 | | 1.26 | 0.609 | 0.47 | 0.606

These distortions may contribute effectively to covalent or asymmetric environments. The parameters Ω4 and Ω6 indicate bulk properties of the glass such as hardness and viscosity. The J-O parameter values of the current glass systems are presented in Table 7 and Table 8 and follow the trend Ω4 > Ω6 > Ω2; the same trend has been observed in other glass systems [38-41]. According to Jorgensen and Reisfeld [42], Ωλ is mainly affected by the crystal-field asymmetry and by changes in the energy difference between the 4fN and 4fN-1 5d configurations. In other words, Ω2 increases because of the nephelauxetic effect, which occurs due to the deformation of the electronic orbitals within the 4f configuration. Increasing the overlap between the 4f orbitals of the Nd3+ ion and the oxygen orbitals causes the Nd3+ energy levels to contract and the transitions to shift in wavelength; furthermore, the shift of all transitions to higher wavelength indicates the presence of Nd-O linkages in the glass system. The transition 4I9/2 → 2G9/2 is observed to be more intense than the other transitions, as seen from the calculated oscillator strength, whose intensity increases empirically and relates to structural changes in the location of the rare-earth ions. Ω2 rises significantly as the symmetry of the rare-earth site is reduced and as its chemical bond with the ligand field becomes more covalent. As a whole, Ω2 increases as the covalence between the rare earth ion and the ligand field increases, as the symmetry lowers, and as the electric field gradient between the rare earth ion and the ligand field increases. The high value of Ω4 in the current glasses indicates the high hardness of the glass network and the high covalence around the Sm3+ ions. The ratio Ω4/Ω6 is greater than 1 for all the samples containing Sm. These analyses verify that the glass can be used as a laser generator.
Figure 8 shows the variation of the emission intensity for the transitions of the Sm3+-Nd3+ containing glasses excited at 400 nm. Three clear peaks appear at 561, 599 and 647 nm, which are assigned to the 4G5/2 → 6H5/2, 6H7/2 and 6H9/2 transitions of Sm3+ ions. The intensities of the bands gradually increased as the Sm3+ ion concentration in the samples was enhanced, and the glasses emit reddish-orange light. Luminescence spectra give detailed information on the energy level splitting of the doping ions in the Li2O-B2O3-(Nd2O3/Sm2O3) glasses. In the luminescence spectrum of the glass containing neodymium (samarium free), a weak luminescence band is noticed at 599 nm, corresponding to the 4G7/2→4I11/2 and 4G5/2→4I9/2 transitions. The Sm3+-doped glasses (free from neodymium) reveal strong luminescence bands at 562, 599, and 646 nm, attributed to the 4G5/2→6H5/2, 4G5/2→6H7/2 and 4G5/2→6H9/2 transitions of Sm3+ ions in the glass network; the band intensities regularly increased as the Sm3+ ion concentration increased in the mixed rare earth glass network [43, 44].

Table 7. Judd-Ofelt parameters (Ωλ ×10-20 cm2) and trends of the Ωλ parameters for the Nd3+ glasses
Sample | Ω2 | Ω4 | Ω6 | Trend
2 | 0.135 | 0.907 | 0.656 | Ω4>Ω6>Ω2
3 | 0.143 | 1.36 | 0.540 | Ω4>Ω6>Ω2
4 | 0.0312 | 0.772 | 0.485 | Ω4>Ω6>Ω2
5 | 0.575 | 2.49 | 1.37 | Ω4>Ω6>Ω2

Table 8. Judd-Ofelt parameters (Ωλ ×10-20 cm2) and trends of the Ωλ parameters for the Sm3+ glasses
Sample | Ω2 | Ω4 | Ω6 | Trend
1 | 0.316 | 1.64 | 1.39 | Ω4>Ω6>Ω2
2 | 0.0160 | 0.914 | 0.947 | Ω6>Ω4>Ω2
3 | 0.510 | 0.362 | 0.580 | Ω6>Ω2>Ω4
4 | 0.418 | 0.575 | 0.611 | Ω6>Ω4>Ω2

Figure 8. The emission spectra of Sm3+-Nd3+ containing lithium borate glasses excited at 400 nm

As shown in Figure 9, energy can transfer from the Nd3+ 4F3/2 level to the Sm3+ 6F9/2 level. The Sm3+ ion is thereby excited from 6F9/2 to 4G7/2 and subsequently de-excites to 4G5/2 via nonradiative decay, strengthening the emission transitions from the Sm3+ 4G5/2 level. This increases the intensity of the Sm3+ emission lines at the expense of the Nd3+ emission lines. The branching ratio β is found to be highest for the 4G5/2 → 6H7/2 transition (near-orange emission) in these glasses, with values of 60, 57 and 51%, respectively. In many other glass systems, the highest β value reported for Sm3+ ions is for this same transition. Finally, the general analysis of the current results suggests that the combined interaction of the Sm3+ ions with Nd3+ ions significantly improves the orange emission from Sm3+ ions in the studied glass system and makes the glasses suitable for orange-emission devices. Moreover, replacing 0.25 mol% Sm3+ by Nd3+ gives the highest intensity of the emitted radiation.
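The branching ratios quoted above are simply each band's share of the total integrated emission; a sketch with hypothetical integrated intensities chosen only to mimic a ~60% share for the 4G5/2 → 6H7/2 band:

```python
# Luminescence branching ratio: β_i = I_i / Σ_j I_j over the observed
# emission bands. The integrated intensities below are hypothetical,
# chosen to give the 6H7/2 band a 60% share for illustration.

bands = {
    "4G5/2 -> 6H5/2": 25.0,   # arbitrary integrated intensity units
    "4G5/2 -> 6H7/2": 60.0,
    "4G5/2 -> 6H9/2": 15.0,
}

total = sum(bands.values())
beta = {name: intensity / total for name, intensity in bands.items()}
for name, b in beta.items():
    print(f"{name}: {100 * b:.0f}%")
# → the 6H7/2 band dominates (orange emission) with 60%
```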


Figure 9. The energy level transitions of Nd3+ and Sm3+ ions

4. CONCLUSION

Lithium borate glasses doped with samarium and neodymium ions were prepared and studied. The density measurements indicate that the density increases as the samarium content increases, and that the difference between the experimental and calculated densities increases as the samarium content increases. The functional vibrational groups within the glass matrix were studied and indicate that the addition of rare earth ions converts BO3 vibration groups to BO4 and forms non-bridging oxygens. The Judd-Ofelt (J-O) theory was applied to evaluate the J-O intensity parameters. The general analysis of the optical properties (absorption and emission) in the present study indicates that these glass samples are responsible for orange emission. Based on the results obtained from the J-O analysis, it is concluded that the glass under study is a promising luminescent and laser material. The glasses in the current study have the potential to act as an orange emission device, as well as in photovoltaic applications.

REFERENCES
[1] Sun X.-Y., et al., "Luminescent properties of Tb3+-activated B2O3–GeO2–Gd2O3 scintillating glasses," J. Non-Cryst. Solids, vol. 379, pp. 127-130, 2013.
[2] Thomas S., et al., "Spectroscopic and dielectric studies of Sm3+ ions in lithium zinc borate glasses," J. Non-Cryst. Solids, vol. 376, pp. 106-116, 2013.
[3] Babu A. M., Jamalaiah B. C., Sasikala T., Saleem S. A., Moorthy L. R., "Absorption and emission spectral studies of Sm3+ doped lead tungstate glasses," J. Alloys Compd., vol. 509, no. 14, pp. 4743-4747, 2011.
[4] Jamalaiah B. C., Kumar J. S., Babu A. M., Suhasini T., Moorthy L. R., "Photoluminescence properties of Sm3+ in LBTAF glasses," J. Lumin., vol. 129, no. 4, pp. 363-369, 2009.
[5] Lakshminarayana G., Qiu J., "Photoluminescence of Pr3+, Sm3+ and Dy3+-doped SiO2–Al2O3–BaF2–GdF3 glasses," J. Alloys Compd., vol. 476, no. 1-2, pp. 470-476, 2009.
[6] Som T., Karmakar B., "Infrared-to-red upconversion luminescence in samarium-doped antimony glasses," J. Lumin., vol. 128, no. 12, pp. 1989-1996, 2008.
[7] Kindrat I. I., Padlyak B. V., Drzewiecki A., "Luminescence properties of the Sm-doped borate glasses," J. Lumin., vol. 166, pp. 264-275, 2015.
[8] Tripathi G., Rai V. K., Rai S. B., "Optical properties of Sm3+:CaO-Li2O-B2O3-BaO glass and codoped Sm3+:Eu3+," Appl. Phys. B, vol. 84, no. 3, pp. 459-464, 2006.
[9] Biju P. R., Ajithkumar G., Jose G., Unnikrishnan N. V., "Spectroscopic studies of Sm3+ doped phosphate glasses," Bulletin of Materials Science, vol. 21, no. 5, pp. 415-419, 1998.
[10] Lakshminarayana G., Buddhudu S., "Spectral analysis of Sm3+ and Dy3+: B2O3–ZnO–PbO glasses," Physica B, vol. 373, no. 1, pp. 100-106, 2006.
[11] Sudhakar K. S. V., et al., "Influence of modifier oxide on spectroscopic and thermoluminescence characteristics of Sm3+ ion in antimony borate glass system," J. Lumin., vol. 128, no. 11, pp. 1791-1798, 2008.

Int J Adv Appl Sci, Vol. 9, No. 3, September 2020: 211 – 219



[12] Becker, P., “Thermal and optical properties of glasses of the system Bi2O3 – B2O3,” Crystal Research and Technology: Journal of Experimental and Industrial Crystallography, vol. 38, no. 1, pp.74-82, 2003. [13] Bajaj, A., et al, “Structural investigation of bismuth borate glasses and crystalline phases,” J. Non-Cryst. Solids, vol. 355, no. 1, pp. 45-53, 2009. [14] Zhu X., Mai C., Li M., “Effects of B2O3 content variation on the Bi ions in Bi2O3–B2O3–SiO2 glass structure,” J. Non-Cryst. Solids, vol. 388, pp. 55-61, 2014. [15] Chewpraditkul,W., Shen,Y., Chen,D., Yu, B., Prusa, P., Nikl,M., Beitlerova,A., Wanarak,C., “Luminescence and scintillation of Ce3+-doped high silica glass,” Opt. Mater., vol. 34, no. 11, pp. 1762-1766, 2012. [16] Wantana N., et al, “Energy transfer from Gd3+ to Sm3+ and luminescence characteristics of CaO–Gd2O3–SiO2– B2O3 scintillating glasses, J. Lumin., vol. 181, pp. 382–386 , 2017. [17] Ramteke D.D., Ganvir V. Y., Munishwar S. R., Gedam R. S., “Concentration effect of Sm3+ Ions on structural and luminescence properties of lithium borate glasses,” Physics Procedia, vol. 76, pp. 25–30, 2015. [18] Huang L., Jha A., Shen S., “Spectroscopic properties of Sm3+-doped oxide and fluoride glasses for efficient visible lasers (560–660 nm),” Opt. Commun., vol. 281, no. 17, pp. 4370-4373, 2008. [19] Gorller-Walrand, C., Binnemans, K., in: Gschneidner, K.A., Eyring, L. (Eds.), Handbook on the Physics and Chemistry of Rare Earths, pp. 101–264. chapter 167, North-Holland Publishers, Amsterdam, 1998,. [20] Mahato K.K., Rai D.K., Rai S.B., “Optical studies of Sm3+ doped oxyfluoroborate glass,” Solid State Commun., vol. 108, no. 9, pp. 671-676, 1998. [21] Lin H., et al, “Spectral parameters and visible fluorescence of Sm3+ in alkali–barium–bismuth–tellurite glass with high refractive indexm,” J. Lumin., vol. 116, no. 1-2, pp. 139-144, 2006. 
[22] Praveena R., Venkatramu V., Babu P., Jayasankar C.K., “Fluorescence spectroscopy of Sm3+ ions in P2O5–PbO– Nb2O5 glasses,” Physica B: Condensed Matter, vol. 403, no. 19-20, pp. 3527-3534, 2008. [23] Gatterer, K, et al, “Suitability of Nd(III) absorption spectroscopy of probe the structure of glasses from the ternary system Na2O-B2O3-SiO2,” J. Non-Cryst. Solids, vol. 231, no. 1-2, pp. 189-199, 1998. [24] Maumita Das, Annapurna K, Kundu P, Dwivedi RN, Buddhudu S., “Optical spectra of Nd3+:CaO–La2O3–B2O3 glasses,” Materials Letters, vol. 60, no. 2, pp. 222-229, 2006. [25] .Rao, T.G.V.M., et al, “Optical and structural investigation of Sm3+–Nd3+ co-doped in magnesium lead borosilicate glasses,” Journal of Physics and Chemistry of Solids, vol. 74, no. 3, pp. 410–417, 2013. [26] Joshi, J. C., SHI, J. A., Belwalab, R., Joshi, C., Pandey, N. C., “Non-radiative energy transfer from Sm3+→Nd3+ in sodium borate glass,” j. Phys Chem. Solids, vol. 39, no. 5, pp. 581-584, 1978. [27] Wang F., et al, “The influence of TeO2 on thermal stability and 1.53μm spectroscopic properties in Er3+ doped oxyfluorite glasses,” Spectrochim Acta A: Mol. Biomol. Spect., vol. 150, pp. 162–169, 2015. [28] Dahshan, A., “Thermal stability and crystallization kinetics of new As-Ge-Se-Sb glasses,” J. Non-Cryst. Solids, vol. 354, no. 26, pp. 3034-3039, 2008. [29] Tandon R.P., Hotchandani S., “Electrical conductivity of semiconducting tungsten oxide glasses,” Phys. Status Solidi A, vol. 185, no. 2, pp. 453-460, 2001. [30] Qiu H.-H., Mori H., Sakata H., Hirayma T., “Electrical conduction of glasses in the system Fe2O3-Sb2O3-TeO2,” J. Ceram. Soc. Jpn., vol. 103, no. 1193, pp. 32-38, 1995. [31] Khalifa F.A., El Batal H.A., Azooz A., “Infrared absorption spectra of gamma irradiated glasses of the system Li2O-B2O3-Al2O3,” Indian J. Pure Ap. Phy., vol. 36, no. 6, pp. 314-318, 1998. [32] Rai A. and Rai V. 
K., “Optical properties and upconversion in Pr3+ doped in aluminum, barium, calcium fluoride glass—I,” Spectrochimica Acta Part A: Molecular and Biomolecular Spectroscopy, vol. 63, no. 1, pp. 27-31, 2006. [33] Judd B.R., “Optical absorption intensities of rare-earth ions,” Phys. Rev., vol. 127, no. 3, pp. 750-761, 1962. [34] Ofelt G.S., “Intensities of crystal spectra of rareearth ions,” J. Chem. Phys.. vol. 37, no. 3, pp. 511-520, 1962. [35] Padlyak B.V., Kindrat I.I., Protsiuk V.O., Drzewiecki A., “Optical spectroscopy of Li2B4O7, CaB4O7 and LiCaBO3 borate glasses doped with europium,” Ukr. J. Phys. Opt., vol. 15, no.3, pp. 103-117, 2014. [36] Joseph X., George R., Thomas S., Gopinath M., Sajna M.S., Unnikrishnan N.V., “Spectroscopic investigations on Eu3+ ions in Li–K–Zn fluorotellurite glasses,” Opt. Mate., vol. 37, pp. 552-560, 2014. [37] Mohamed E. A., Ratep A., Abdel-Khalek E. K., Kashif I., “Crystallization kinetics and optical properties of titanium–lithium tetraborate glass containing europium oxide,” Appl. Phys. A, vol. 123, no. 3, pp. 479-, 2017. [38] Kumar K. A., Babu S., Prasad R., Damodaraiah S., Ratnakaram Y.C., “Optical response and luminescence characteristics of Sm3+ and Tb3+/sm3+ co-doped potassium-fluoro-phosphate glasses for reddish-orange lighting applications,” Materials Research Bulletin, vol. 90, pp. 31-40, 2017. [39] Babu, S., et al, “Investigations on luminescence performance of Sm3+ ions activated in multi-component fluorophosphates glasses,” Spectrochim. Acta Part A, vol. 122, pp. 639–648 , 2014. [40] Sobczyk M., Szymański D., Guzik M., Legendziewicz J., “Optical behaviour of samarium doped potassium yttrium double phosphates,” J. Lumin., vol. 169, pp. 794–798, 2016. [41] Thomas S., et al., “Optical properties of Sm3+ ions in zinc potassium fluorophosphate glasses,” Opt. Maters, vol. 36, no. 2, pp. 242–250, 2013. [42] Jorgensen C K, Reisfeld R., “Judd-Ofelt parameters and chemical bonding,” J. Less-Common Metals, vol. 93, no. 1, pp. 
107-112, 1983. [43] Herrmann, A., Ehrt, D., “Time-resolved fluorescence measurements on Dy3+ and Sm3+doped glasses,” J. NonCryst. Solids, vol. 354, no. 10-11, pp. 916–926, 2008. [44] Malchukova E., Boizot B., Ghaleb D., “Optical properties and valence state of Sm ions in aluminoborosilicate glass under β-irradiation,” J. Non-Cryst. Solids, vol. 353, no. 24-25, pp. 2397–2402, 2007.



International Journal of Advances in Applied Sciences (IJAAS) Vol. 9, No. 3, September 2020, pp. 220~226 ISSN: 2252-8814, DOI: 10.11591/ijaas.v9.i3.pp220-226


Method for cost-effective trans aortic valve replacement device prototyping

Angelique Oncale1, Charles Taylor2, Erika Louvier3, G. H. Massiha4
1,3,4Department of Industrial Technology, University of Louisiana at Lafayette, United States
2Department of Petroleum Engineering, Louisiana State University, United States

Article Info

Article history:
Received Sep 4, 2019
Revised Oct 6, 2019
Accepted May 14, 2020

Keywords:
Aortic stenosis
Aortic valve
Computer aided design
Computer numerical control
Trans aortic valve replacement

ABSTRACT

Trans Aortic Valve Replacement (TAVR) has offered the cardiology sector a new alternative to open heart surgery for treating aortic stenosis. The technologies used by TAVR manufacturers are kept private. Our research goal was to develop a process that allows college-level laboratories to fabricate their own TAVR stents in order to research new designs and methods of fabrication which may improve current TAVR practices. By creating a solid model of a stent cell design in SolidWorks, we were able to export a cutting pattern that we used with a waterjet. The stent frame was then hand polished to prepare for fabric skirting and leaflet attachment. Synthetic ripstop fabric was cut using a commercial fabric cutting machine and attached to the frame using a waterproof glue. Future research entails welding techniques, improved polishing methods, and implantation into a mechanical system. This prototype could be used for TAVR-related research and surgical training simulations.

This is an open access article under the CC BY-SA license.

Corresponding Author: Angelique Oncale, Department of Industrial Technology, University of Louisiana at Lafayette, 104 East University Avenue, Lafayette, Louisiana 70506, United States Email: angelique.oncale1@louisiana.edu

1. INTRODUCTION

Trans Aortic Valve Replacement (TAVR) is a relatively new medical procedure that has been evolving over nearly two decades as an alternative to open heart surgery for the treatment of Aortic Stenosis (AS). This disease is most prevalent in the elderly, with almost 27,000 patients becoming eligible for a TAVR procedure annually [1]. The only two companies that create and sell FDA-approved TAVR devices in the United States are Edwards Lifesciences and Medtronic. Their revolutionary designs have become an incredibly popular treatment option for high-risk patients, and improved designs may become an option for patients with lower risk profiles or those with more complex combinations of heart diseases who were excluded from earlier trials of TAVR [2].

The treatment of aortic stenosis with valve implants became more widely used after the development of the bi-leaflet mechanical heart valve. This type of valve is still used today; however, its popularity is decreasing with the rise of TAVR valves [3]. The TAVR valve's predecessor, Surgical Aortic Valve Replacement (SAVR) valves, and bi-leaflet mechanical valves both involve invasive surgeries that many high-risk patients are unable to endure. TAVR technology is our safest option for the largest population of patients but still has serious issues that need to be resolved.

The most common and persistent issues with TAVR valves are paravalvular leaks and paravalvular regurgitation. A two-year analysis of post-op TAVR implantations found that regurgitation "remained



unchanged in 46.2% of the 143 patients studied and was worse in 22.4%" [4]. This problem is caused in large part by a mismatch between stent and aortic annulus sizes [5]. The current lifespan of a TAVR valve, 10-15 years, is one of the main reasons the technology is not being used for other medical conditions or younger patients. Valves are also not yet fully retrievable, and valve-in-valve implantations reduce the annulus size, affecting hydrodynamic performance [6]. The largest hindrance to TAVR improvements is the lack of information and technology available to independent researchers. Software techniques to better size devices and predict deployment success have been developed, but researchers are unable to test these methods outside of simulation [7, 8]. Simplified research devices have previously helped groups validate their work. In 2017, a method to create a bioprosthetic semilunar valve helped provide research-grade heart valves to independent groups [9]. This fabrication method was used later in 2017 to simulate a paravalvular leak for repair using cardioscopic imaging [10], and again in 2019 to test autonomous robotic navigation [11]. This research can provide the same means of validation for work involving TAVR devices. A number of issues still surround the TAVR procedure, including paravalvular leaks or regurgitation, tissue and frame durability, and valve longevity [4, 5, 12, 13]. These problems need to be resolved before the procedure can be expanded to a larger population. The techniques used by Edwards and Medtronic to make TAVR valves are kept private, making it difficult for researchers outside these corporations to study alternative fabrication methods and designs that may offer a solution to common TAVR issues. Additionally, TAVR valves such as the Sapien 3 have an acquisition cost of about $32,500 and are sold only to medical professionals for interventional use [14].
A process for fast, cost-efficient prototyping of TAVR valves is needed to reflect the devices being used in the current industry and to subsequently research improvements to the technology. Without an actual device to test on or information on its construction, there is no way to validate new developments. The research presented here fully defines a method by which a collegiate-level lab can produce balloon-expandable TAVR stent prototypes. This prototyping method can be useful for comparison studies or studies involving a functioning TAVR valve.

2. RESEARCH METHODS AND MATERIALS

The prototype built in this study uses easily attainable materials such as stainless steel and nylon fabric, resulting in a lower cost. Fabrication methods described here rely heavily on computer numerical control (CNC) machines, including a waterjet and a consumer fabric cutter. Contract manufacturing for waterjet cutting can be used by those without direct access to this equipment. The process can be used to prototype existing designs or be adapted for new ones. The cell design of the TAVR stent was inspired by online images of the Sapien XT frame; the model variables are listed in Table 1. The Sapien XT comes in sizes of 23 mm, 26 mm, and 29 mm with heights of 14.3 mm, 17.2 mm, and 19.1 mm, respectively [15]. Using the 29 mm diameter and 19.1 mm height, a parametric model of the frame was created in Solidworks, which can be viewed in both its flat pattern form and its 3D cylindrical form. The online images showed a frame that could be divided into three identical sections, separated by thicker struts used to anchor the leaflets. The frame's geometry was specifically chosen to ensure its crimping ability; it includes small spaces at the top and bottom of each cell as well as over 400 fillets to aid in bending. The model was made using a number of variables and simple equations so that the size of the frame and its individual parts can be easily manipulated by changing the value of a single variable. This allows users to study the device in different sizes and define its features exactly to their requirements. The variables and equations may also be applied to different cell designs to make them parametric as well, as shown in Figure 1, Figure 2, and Figure 3.

Method for cost-effective trans aortic valve replacement device prototyping (Angelique Oncale)



Table 1. CAD model variables

Global Variable | Description | Value/Equation | Evaluates to
A | Inner Diameter | 29 mm | 29 mm
B | Overall Length | (((A * 3.1415) / 3) - (2 * C) - D) / 3 | 9.21728 mm
C | Strut Thickness | 0.608 mm | 0.608 mm
D | Overall Height | 1.5 mm | 1.5 mm
E | Large Cell Connector Diameter | 19.1 mm | 19.1 mm
F | Cell Connector Length | E * 0.75 | 14.325 mm
G | Cell Connector Centering | E / 2 | 9.55 mm
H | Cell Distance | B + C | 9.82528 mm
I | Linear Pattern Distance | (B * 3) + D + (C * 2) | 30.3678 mm
J | Cell/Strut Distance | C + 0.5 | 1.108 mm
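For readers who want to manipulate the frame size programmatically, the Table 1 relationships can be reproduced outside Solidworks. The following Python sketch is an illustrative port of the global-variable equations, not code from the original study; the function name is ours, while the variable letters follow the table:

```python
# Illustrative port of the Table 1 Solidworks global-variable equations.
# Driving variables (mm): A inner diameter, C strut thickness, D and E
# per Table 1. Letter names follow the table; the function itself is ours.
def frame_dimensions(A=29.0, C=0.608, D=1.5, E=19.1):
    B = ((A * 3.1415) / 3 - 2 * C - D) / 3  # overall length
    return {
        "B": B,                  # 9.21728 mm
        "F": E * 0.75,           # cell connector length, 14.325 mm
        "G": E / 2,              # cell connector centering, 9.55 mm
        "H": B + C,              # cell distance, 9.82528 mm
        "I": B * 3 + D + 2 * C,  # linear pattern distance, 30.3678 mm
        "J": C + 0.5,            # cell/strut distance, 1.108 mm
    }

dims = frame_dimensions()  # defaults reproduce the 29 mm frame in Table 1
```

Changing a single driving variable (e.g. A for a different device diameter) regenerates every dependent dimension, mirroring the parametric behavior described above.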

Figure 1. Full sketch of the parametric CAD model with variables

Figure 2. Reference photograph of the Sapien XT valve [16]

Figure 3. Parametric model in cylindrical configuration

Figure 4 shows the stent frame, which was cut in its flat pattern form from a 0.015" (0.38 mm) thick strip of 316 stainless steel using an OMAX MAXIEM 1515 abrasive waterjet. The waterjet was chosen as the frame fabrication method for several reasons. Contract waterjet cutting is significantly cheaper and more widely available to the public than 3D printing or laser cutting. Fabrication with a laser may also weaken the material through heat-affected zones; waterjets cut cold, which eliminates this issue [17]. The waterjet was equipped with a MAXJET 5i nozzle and a 0.015" diameter jewel. The abrasive material used was 150 grit garnet, and the stainless steel was backed with hardboard and a 3" thick honeycomb Rhino Board. The jet's properties were set to 316 stainless steel material, 0.015" thickness, and 0.01" tool offset. The flat pattern shows strut thicknesses of 0.608 mm; however, the tool offset allows the jet to cut the tight corners needed to produce the valve's geometry and cuts material away from the struts, reducing the thickness to the desired 0.5 mm. The frame was sanded wet with cushioned sanding pads to remove abrasive material attached to the surface after cutting.

Int J Adv Appl Sci, Vol. 9, No. 3, September 2020: 220 – 226



Figure 4. Stent frame cut using the OMAX waterjet

Figure 5 shows a synthetic leaflet; the leaflets and fabric skirting were both made from the same synthetic material. A nylon ripstop fabric was chosen for its water resistance, tear resistance, flexibility, and low cost. Ripstop is also extremely thin; this type of fabric would not hinder the TAVR's crimping ability. Using Solidworks, a simplified leaflet design was created using a circular geometry and a leaflet graft sizing reference [18]. The sizing reference described the grafts as semicircular with a diameter 10-15% the size of the device's diameter; this basic shape was kept, and small tabs were added near the free edge to aid in attachment. A circular shape was cut and folded into a semicircle to create a sturdier leaflet with a free edge that would not fray.

Figure 5. Synthetic leaflet cut from nylon ripstop using a Cricut fabric cutter

Figure 6 shows the fabric skirting pattern, which was designed so the fabric could be folded over the frame's struts and secured to itself. The design is a simple rectangular shape the same length as the frame, with cuts along the top to allow the fabric to be folded over the struts. Extra length at the bottom of the pattern allows the entire bottom edge to be folded up and secured between the lower cells. Both the leaflets and skirting were cut using a Cricut Maker, a consumer CNC fabric cutting machine. The material type was set to lightweight fabric and the tool used was the rotary blade. The fabric was applied to the adhesive mat supplied with the machine and the patterns were cut within seconds.

Figure 6. Fabric skirting cut from nylon ripstop using a Cricut fabric cutter

Typically, valve construction would consist of 1-2 hours of suturing by hand. To reduce the time and skill needed to complete a valve, Loctite waterproof fabric glue was used as a substitute. The stent frame was carefully aligned on top of the fabric skirting, and each flap was folded over a strut and glued into place using a toothpick. The bottom of the skirting was folded over the bottom struts and glued to itself. The leaflets were glued down in the center of the circular shape to keep the free edge flat and along the circumference to keep them closed. They were then attached to the fabric skirting using the thicker struts as a guide, and the tabs at the edge of each leaflet were glued to the adjacent leaflet. Construction of the device was reduced to approximately 30-40 minutes.




3. RESULTS AND ANALYSIS

This process began with the frame of the stent. Laser cutting and 3D printing were more expensive options compared to the waterjet. It took multiple iterations of cutting to refine the OMAX waterjet settings to cut the frame accurately at size. The first cut was made at 125% of the desired size. The following cuts involved a frame with simplified geometry because of the limitations of the waterjet's cutting abilities. A revision to the stent frame included wider geometries and thicker struts; the waterjet settings were also changed to achieve the final stent frame presented here. The first cutting attempts were welded into shape but left significant burn marks on the surface of the frame, and the rough weld spots showed that a more refined welding method was needed for the delicate frame. These frames were used to determine methods of fabric attachment. Traditionally, fabric attachment involves hundreds of individual stitches done by hand. Attempting this on a frame that was already welded into its cylindrical shape proved extremely difficult and inefficient. Therefore, the easiest assembly of the device used glue instead of stitches and attached the fabric before any welding was done (see Figure 7 and Figure 8). Table 2 shows the cost of supplies.

Table 2. Cost of supplies

Description | Provider | Price Per Unit
Highly Corrosion-Resistant 316 Stainless Steel Sheet, 2" x 5 Feet, 0.015" Thick | McMaster-Carr | $25.98
1.3 Oz MTN XL Hybrid Ripstop Nylon 6.6 | Ripstop By the Roll | $8.95
Cricut Maker Machine | Cricut | $369.99
Loctite 1 fl oz Vinyl Fabric & Plastic Flex Adhesive | Target | $2.99
Cushioned Sanding Pad Assortment, 9 Pieces | McMaster-Carr | $12.18
Total | | $420.09

Figure 7. Final assembly of the device prototype

Figure 8. Second prototype with view of skirting attachment
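As a quick check on cost-effectiveness, the Table 2 line items can be tallied directly; a minimal Python sketch (item names abbreviated by us):

```python
# Bill of materials from Table 2; confirms the quoted $420.09 total.
supplies = {
    "316 stainless steel sheet": 25.98,
    "ripstop nylon":             8.95,
    "Cricut Maker machine":      369.99,
    "Loctite fabric adhesive":   2.99,
    "sanding pad assortment":    12.18,
}
total = round(sum(supplies.values()), 2)  # 420.09
```

Note that the one-time Cricut Maker purchase dominates the total; the consumables alone come to $50.10, far below the roughly $32,500 acquisition cost of a commercial device cited earlier.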

4. CONCLUSION

Further experimentation is needed to define a welding method that will not damage the fabric already attached to the frame. There are no methods for crimping these prototypes other than building a crimper or obtaining one from medical personnel. Implantation of the device into a simulated aortic valve could produce the flow data needed to confirm functionality. This would assess not only the frame's crimping and expansion ability but also the fabric's quality. The fabric was chosen simply for its availability and price; if coaptation of the valve and proper flow dynamics are not achieved, the material could easily be replaced. The fabric type and attachment methods need to be assessed for paravalvular leaking, valvular regurgitation, and proper coaptation. The completion of this research could result in a new bench-top training method for TAVR implantation trainees or provide a means of validation for TAVR research involving computer modeling and simulations.

REFERENCES
[1] R. L. J. Osnabrugge, et al., "Aortic stenosis in the elderly: disease prevalence and number of candidates for transcatheter aortic valve replacement: a meta-analysis and modeling study," Journal of the American College of Cardiology, vol. 62, no. 11, pp. 1002-1012, 2013.
[2] A. Horne, et al., "Transcatheter aortic valve replacement: Historical perspectives, current evidence, and future directions," American Heart Journal, vol. 168, no. 4, pp. 414-423, 2014.
[3] A. Kheradvar, et al., "Emerging trends in heart valve engineering: Part II. Novel and standard technologies for aortic valve replacement," Annals of Biomedical Engineering, vol. 43, no. 4, pp. 844-857, 2014.
[4] S. K. Kodali, et al., "Two-year outcomes after transcatheter or surgical aortic valve replacement," New England Journal of Medicine, vol. 366, no. 18, pp. 1686-1695, 2012.
[5] D. Détaint, L. Lepage, D. Himbert, E. Brochet, D. Messika-Zeitoun, B. Iung, and A. Vahanian, "Determinants of significant paravalvular regurgitation after transcatheter aortic valve implantation," JACC: Cardiovascular Interventions, vol. 2, no. 9, pp. 821-827, 2009.
[6] O. M. Rotman, M. Bianchi, R. P. Ghosh, B. Kovarovic, and D. Bluestein, "Principles of TAVR valve design, modelling, and testing," Expert Review of Medical Devices, vol. 15, no. 11, pp. 771-791, 2018.
[7] A. Hosny, et al., "Pre-procedural fit-testing of TAVR valves using parametric modeling and 3D printing," Journal of Cardiovascular Computed Tomography, vol. 13, no. 1, pp. 21-30, 2019.
[8] Q. Wang, E. Sirois, and W. Sun, "Patient-specific modeling of biomechanical interaction in transcatheter aortic valve deployment," Journal of Biomechanics, vol. 45, no. 11, pp. 1965-1971, 2012.
[9] B. Rosa, et al., "A low-cost bioprosthetic semilunar valve for research, disease modelling and surgical training applications," Interactive CardioVascular and Thoracic Surgery, vol. 25, no. 5, pp. 785-792, 2017.
[10] B. Rosa, Z. Machaidze, M. Mencattelli, S. Manjila, et al., "Cardioscopically guided beating heart surgery: paravalvular leak repair," The Annals of Thoracic Surgery, vol. 104, no. 3, pp. 1074-1079, 2017.
[11] G. Fagogenis, et al., "Autonomous robotic intracardiac catheter navigation using haptic vision," Science Robotics, vol. 4, no. 29, pp. 1-12, 2019.
[12] W. D. Buhr, et al., "Impairment of pericardial leaflet structure from balloon-expanded valved stents," The Journal of Thoracic and Cardiovascular Surgery, vol. 143, no. 6, pp. 1417-1421, 2012.
[13] O. M. Rotman, M. Bianchi, R. P. Ghosh, B. Kovarovic, and D. Bluestein, "Principles of TAVR valve design, modelling, and testing," Expert Review of Medical Devices, vol. 15, no. 11, pp. 771-791, 2018.
[14] B. Janci and M. L. Zoler, "TAVR wallops SAVR in cost-effectiveness for intermediate-risk patients," Vascular Specialist, 04-Dec-2018. [Online]. Available: https://www.mdedge.com/vascularspecialistonline/article/150986/interventional-cardiology-surgery/tavr-wallopssavr-cost.
[15] Edwards Lifesciences, "Edwards SAPIEN XT Transcatheter Heart Valve with the Ascendra Delivery System," accessdata.fda.gov, May-2014. [Online]. Available: https://www.accessdata.fda.gov/cdrh_docs/pdf13/P130009d.pdf.
[16] E. L. Videos, "Edwards SAPIEN XT Video," YouTube, 26-Mar-2015. [Online]. Available: https://www.youtube.com/watch?v=Lcsw6y21b2o.
[17] I. Miraoui, M. Boujelbene, and E. Bayraktar, "Analysis of roughness and heat affected zone of steel plates obtained by laser cutting," Advanced Materials Research, vol. 974, pp. 169-173, 2014.
[18] P. E. Hammer and P. J. D. Nido, "Guidelines for sizing pericardium for aortic valve leaflet grafts," The Annals of Thoracic Surgery, vol. 96, no. 1, pp. e25-e27, 2013.

BIOGRAPHIES OF AUTHORS

Angelique Oncale has her Master of Science in Systems Technology from the University of Louisiana at Lafayette, College of Engineering. Her areas of research interest are Design for Manufacturing, Computer Aided Design, and Medical Technology. She is currently employed as an automation engineer at Noble Plastics in Grand Coteau, Louisiana.

Charles Taylor, Ph.D., is an Assistant Research Professor of Engineering at the College of Engineering of Louisiana State University and A&M College. His areas of research interest are the development of safety, risk and reliability practices for medical devices targeting the cardiovascular system. He has obtained degrees in bioengineering (BS) and biomedical engineering (PhD).



Erika Louvier has her Master of Science in Systems Technology from the University of Louisiana at Lafayette, College of Engineering. Her areas of research interest are Teaching, Computer Aided Design, Metal Technology, and Manufacturing. She is currently coordinator of Industrial Technology at South Louisiana Community College.

G. H. Massiha, Ph.D., is a Louisiana Board of Regents Professor of Engineering and the Systems Technology graduate coordinator at the University of Louisiana at Lafayette, College of Engineering. His areas of research interest are alternative energy, robotics, and automation manufacturing.



International Journal of Advances in Applied Sciences (IJAAS) Vol. 9, No. 3, September 2020, pp. 227~239 ISSN: 2252-8814, DOI: 10.11591/ijaas.v9.i3.pp227-239


A comparison of the carbon footprint of pavement infrastructure and associated materials in Indiana and Oklahoma

Rachel D. Mosier1, Sanjeev Adhikari2, Saurav K. Mohanty3
1,3Construction Engineering Technology, Oklahoma State University, USA
2Department of Construction Management, Kennesaw State University, USA

Article Info

Article history:
Received Nov 21, 2019
Revised Feb 11, 2020
Accepted May 14, 2020

Keywords: Carbon footprint; Carbon sequestration; Construction carbon; Greenhouse gas; Sustainable pavement

ABSTRACT

Although often overlooked, infrastructure plays a significant role in modern society. It provides the necessary means of transportation for the goods and services needed to support commerce. It is this need, and the need for continued economic development, that drives continuous infrastructure construction and its associated greenhouse gas emissions. Infrastructure construction requires energy for raw material processing, transportation, mixing and final construction. Greenhouse gas emissions from pavement sections have previously been identified for pavement preservation techniques. This research further evaluates greenhouse gas emissions for typical pavement sections from Indiana and Oklahoma to determine the carbon footprint per linear foot of pavement. The comparison of the CO2e of two typical roadway sections finds that the difference in carbon footprint stems from variation in their minimum roadway sections. Among typical utility pipes, HDPE produces the minimum CO2e and steel the maximum; among base options, soil remediation produces the minimum CO2e and stabilized aggregate base the maximum. Carbon offsets are determined by choosing vegetative options, soil remediation methods and appropriate pavement. This study is limited to a few pavement sections with a small variety of the typical anticipated carbon offsets that would be seen in roadway construction. The index presented allows users to easily quantify the benefits of the carbon offsets.

This is an open access article under the CC BY-SA license.

Corresponding Author: Rachel D. Mosier, Construction Engineering Technology, Oklahoma State University, 570 Engineering North, Stillwater, OK 74078, USA. Email: rachel.mosier@okstate.edu

1. INTRODUCTION

The challenge of global climate change has inspired change in Greenhouse Gas (GHG) reduction strategies for the construction, maintenance and rehabilitation of transportation infrastructure [1]. The carbon footprint of infrastructure pavement projects is determined from calculations performed using carbon dioxide equivalents (CO2e) of GHG emissions in construction quantities. The primary GHG emissions include life cycle emissions in the raw material acquisition and manufacturing phase, the transportation or hauling phase and the pavement construction phase. Secondary emissions include emissions due to vehicular use and maintenance operations during the service life of the pavements, which are not included in this study.

Journal homepage: http://ijaas.iaescore.com



The typical GHG emissions associated with the construction and maintenance of infrastructure pavement are carbon dioxide (CO2), nitrous oxide (N2O) and methane (CH4) [2]. To compare construction project materials and components, the carbon footprint, a measure of GHG emissions expressed as equivalents of carbon dioxide emissions, is determined. The case studies presented benchmark and estimate footprints to effectively reduce emissions in future projects. The carbon footprint identified is also evaluated using existing sustainability rating systems. Environmental emissions have begun to impact pavement management decisions, partially in response to benchmark tools which identify GHG as a metric [1, 3]. Since the 1980s, transportation infrastructure management has been a topic of importance due to growing government expenditures and user costs [4]. However, little research has monetized environmental emissions [5]. The United States Department of Transportation (USDOT) Federal Highway Administration (FHWA) has provided some direction through technical reports on life cycle assessment of pavement [6]. The FHWA has made a variety of tools available through its website, such as Benefit-Cost Analysis (BCA) and Life Cycle Cost Analysis (LCCA) software [7]. Previous studies are predominantly international, including an examination of the carbon footprint of asphalt and concrete pavements in Ontario, Canada. Brown [8] reviewed the carbon footprint of a 50-year life cycle of asphalt pavement built as a Perpetual Pavement. The carbon footprint of roads in the United Kingdom has previously been measured using the Calculator for Harmonised Assessment and Normalisation of Greenhouse-gas Emissions for Roads (CHANGER), an international assessment tool [9]. Other international research has been published on this topic [10-12].
Melanta et al. [13] proposed the Carbon Footprint Estimation Tool (CFET) for estimating greenhouse gas (GHG) emissions and other air pollutants from construction projects associated with roadways and other components of the transportation infrastructure. Other case studies have examined the carbon footprint of infrastructure in China and South Africa, with a focus on drinking water [14-16]. Mosier et al. [17] previously provided a cost index for various pavement preservation options, proposing a criterion that integrates sustainability with initial cost to justify investing in higher-cost treatments on the basis of enhanced sustainability, using the carbon footprint as a metric. A cost index provides a simple way to enhance pavement sustainability by offering a "shopping list" of sustainable options for the decision-making process, using initial cost, life cycle cost, and carbon footprint. The case studies herein extend the carbon footprint cost index of the previous study. This research has focused on associating many pavement infrastructure materials with their carbon footprint based on the linear foot of pavement in the United States. Other research in this area has performed similar studies in Canada, China, Spain and the United Kingdom [9-12]. This allows a comparison of the current bid price per linear foot of pavement to the carbon footprint per linear foot. Pavement carbon footprint analysis has been performed in the past without making any determinations for subsurface treatment or the larger project [18]. A carbon dioxide equivalency for bridge design has previously been developed [19] and was applied to determine the embodied CO2e and estimate the performance of a bridge deck from a sustainability perspective. A ranking scale was identified by establishing a mathematical relationship between a bridge's CO2e and its structure for parametric estimating of its embodied CO2e to gauge the bridge's sustainability [19].
As an additional note, carbon offsetting is a controversial task. There is specific research on using tree plantations to offset the carbon footprint of construction materials [20, 21]. However, when trying to get a clear understanding of how many trees to plant to offset greenhouse gases as CO2e, the maintenance and longevity of the trees themselves must be a factor [22]. This research highlights the use of trees or alternative materials to reduce the carbon footprint rather than a purchased carbon offset or carbon tax. As illustrated through the existing literature, there is still much to be learned about the carbon footprint of infrastructure projects, more specifically pavement projects. Further, to best understand the actual carbon footprint, it is essential for owners and engineers to consider all carbon offsets on the project. The index method assists owners and engineers in making comparisons between two project elements. Carbon footprint values are utilized by infrastructure sustainability rating systems, as discussed below.

Sustainability rating systems
Green construction responds to rising concerns about pollution, population explosion and environmental degradation. The need for strong economic, social and environmental benefits of green infrastructure has come to the forefront through sustainability benchmarks, and attempts have been made to incorporate green elements into both project design and construction. Sustainability metrics such as the Infrastructure Voluntary Evaluation Sustainability Tool (INVEST), Greenroads [23] and the United States Green Building Council (USGBC) Leadership in Energy and Environmental Design for Neighborhood Development (LEED-ND) are commonly used in highway construction. INVEST focuses on sustainable



practices through state and regional level programs and may not apply to a single municipal project [24]. Greenroads has a group of credits focused on pavement materials and design. LEED-ND applies to pavement through the recycled and reused infrastructure credit [25]. The Envision rating system, produced by the Institute for Sustainable Infrastructure (ISI), is a useful sustainability metric to apply to infrastructure projects. This research uses the Envision rating system because it best supports infrastructure sustainability, specifically carbon footprint and greenhouse gas reduction [26]. The Envision rating system houses 60 sustainability criteria called "credits" organized into five main categories: quality of life, leadership, resource allocation, natural world, and climate and risk. As indicated above, there are a variety of rating systems to choose from; Envision was chosen for evaluation due to its focus on the carbon footprint for infrastructure. This research attempts to utilize some of the credits listed in Envision to quantify a more sustainable approach to choosing construction materials and procedures affecting the carbon footprint of a pavement section, from design to operation. This research considers five Envision credits, namely RA1.1-Reducing Net Embodied Energy, CR1.1-Reducing Greenhouse Gas Emissions, RA1.2-Supporting Sustainable Procurement Practices, RA1.3-Using Recycled Materials and RA1.4-Using Regional Materials [26]. This paper uses CO2e as a proxy for embodied energy and greenhouse gases for simplicity in the calculations. For credits RA1.1-Reducing Net Embodied Energy and CR1.1-Reducing Greenhouse Gas Emissions, the net embodied energy of the infrastructure can be reduced in two ways: reducing the quantity of material or selecting material with lower embodied energy [26].
The case studies review how substituting a different material with a lower embodied energy affects the calculations. Choices of subgrade stabilization methods with a lower footprint, along with carbon offsets such as trees and utility pipes that significantly reduce the carbon footprint and embodied energy, enter directly into the greenhouse gas emission calculation. The calculations for the two roadway section types in Fishers and OKC show a distinct difference of 70-80 kilograms (kg) of CO2e per linear foot of roadway section, which corresponds to a 70-75% reduction in greenhouse gas emissions, as shown in the methodology section. This reduction in greenhouse gas emissions would earn a "Superior" badge for the project under the Envision rating system. Evaluating two case study locations also illustrates how choices in the minimum section affect the CO2e. The primary GHG calculation in this paper considers the raw material acquisition and manufacturing phase and transportation through the pavement construction phase [20]. Transportation is a significant consumer of fossil fuels and a source of greenhouse gas emissions. This paper utilizes the transportation distances identified in Table 1, which shows the distance requirement for each type of material procured. When at least 60% of the construction materials are procured within the specified distances identified in RA1.4-Using Regional Materials [26], the project could earn an "Enhanced" badge under the Envision rating system.

Table 1. Transportation distance estimates Material Soils and mulches Aggregates, Sands Concrete Plants Other materials (excluding equipment)

Distance Requirement 50 miles / 80 km 50 miles / 80 km 100 miles / 160 km 250 miles / 400 km 500 miles / 800 km
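The 60% regional-materials screen described above can be illustrated in code. The Python sketch below is a hypothetical helper, not part of the paper or of Envision itself; it counts line items against the Table 1 distance limits, whereas the actual credit is evaluated against the share of material cost, so this is only a simplification:

```python
# Hypothetical helper illustrating the RA1.4 regional-materials screen
# using the Table 1 distance limits. Envision evaluates the credit by
# material cost share; counting line items here is a simplification.
LIMIT_MILES = {
    "soils and mulches": 50,
    "aggregates, sands": 50,
    "concrete": 100,
    "plants": 250,
    "other materials": 500,
}

def regional_fraction(sourced):
    """sourced: list of (material_category, haul_distance_miles) tuples."""
    within = sum(1 for cat, miles in sourced if miles <= LIMIT_MILES[cat])
    return within / len(sourced)

# Example project: 3 of 4 items are within their limits.
project = [("concrete", 60), ("aggregates, sands", 30),
           ("plants", 300), ("other materials", 120)]
meets_threshold = regional_fraction(project) >= 0.60
```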

Envision credit RA1.3-Using Recycled Materials encourages reducing the use of virgin materials and avoiding sending useful materials to landfills that could otherwise be reused or recycled as building material for a green project [26]. Three different chemical additives, fly ash, cement kiln dust (CKD) and lime, are typically used for subgrade stabilization and provide a good basis for significantly reducing embodied carbon energy along with GHG emissions. The calculations for soil stabilization are included in the methodology section. Choosing any of the stabilization techniques prescribed in this research paper could earn an "Improved" badge under the Envision rating system.

Pavement sustainability
Due to the chemical processes that occur in Portland cement production, approximately 730 kg of carbon dioxide is produced for every 1,000 kg of Portland cement. Heating the aggregate and clay used to produce Portland cement to a temperature of around 1,450°C in the kiln causes the dissociation of the limestone and the production of about 60 percent of the carbon dioxide, which is released to



the atmosphere. Comparing 50-year life-cycle greenhouse gas production, concrete pavement produced about 1610 CO2e tons/km and asphalt pavement about 500 CO2e tons/km [27, 28]. The bulk specific gravity of compacted asphalt ranges from 2.29 to 2.35 [29]. As the specific gravity for Hot Mix Asphalt (HMA) is based on the unit weight or solid density of the compacted mix, the Rice value (Gmm) is used as the basis for the specific gravity. The Asphalt Institute also provides guidance on specific gravities, pointing to 2.5 as a typical value [30]. For this research, an estimated specific gravity of compacted asphalt of 2.32 is used, which, multiplied by the density of water in pounds per cubic foot (62.4 pcf), gives a density of 144.77 pcf, rounded here for simplicity to 145 pcf. Similarly, the density or unit weight of Portland Cement Concrete Pavement (PCCP) is well known; an average value has been identified for this work. The unit weight of concrete is commonly known to be between 140-150 pcf [31]. For this work, a value of 145 pcf is used.

Soil and subbase treatments
Subgrade treatment consists of providing, placing and compacting one or more layers of soil along with chemical additives and water to achieve a stable subgrade; the additives are chosen based on the soil type, ease of effort and efficiency. Chemical additives used to stabilize or modify the subgrade are either cementitious additives (fly ash or cement kiln dust) or lime additives. Aggregate base material may also be used instead of a chemical soil modification. Because the engineering properties of soil are based on natural characteristics and field or site conditions, an average specific gravity of 2.73 and a density of 170 pcf are used for all carbon footprint calculations in this paper. The density of Portland cement is 1860 kilograms per cubic meter (kg/m3) [21], which converts to 115.87 pcf.
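The density figures quoted above all follow from specific gravity multiplied by the 62.4 pcf unit weight of water; a minimal check (the helper name is ours):

```python
# Density in pounds per cubic foot (pcf) from specific gravity.
WATER_PCF = 62.4  # unit weight of water

def density_pcf(specific_gravity):
    return specific_gravity * WATER_PCF

hma_pcf = density_pcf(2.32)   # compacted asphalt: 144.77, rounded to 145
soil_pcf = density_pcf(2.73)  # soil: ~170 pcf
```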
The specific gravity of CKD typically ranges from 2.6-2.8 [32]. Using an average specific gravity of 2.7, its unit weight is approximately the same as soil, or 170 pcf. The Indiana Department of Transportation (InDOT) has provided soil modification specifications for CKD stabilization of sandy soils with suggested mix quantities of 4%-6% by weight [33]; an application rate of 5% by weight is used here. Hammond and Jones simplified the calculations by providing a CKD soil-stabilized base carbon footprint of 0.06 kg/kg, which converts to 0.386 kg/SF/in of stabilization [21]. Fly ash is another frequently utilized additive for stabilizing soil in highway construction. The specific gravity of fly ash varies widely, from 2.0-2.6 [34]. The density of fly ash is taken as 2300 kg/m3 [21], which converts to 143.52 pcf, rounded to 144 pcf for simplicity in calculations. The American Coal Ash Association (ACAA) provides guidelines for stabilization of a soil subbase using fly ash, where the replacement level ranges from 12-15% of the weight of dry soil [33]. The Oklahoma Department of Transportation (OkDOT) soil stabilization mix design states an optimum replacement level of 14% for stabilization of a soil subbase in Oklahoma, which typically applies to all soil types except A7 (organic soil material) under the American Association of State Highway and Transportation Officials (AASHTO) soil classification method [35]. Subgrade stabilization using a lime additive is preferred for soil types categorized as A6 (silt-clay fine soil material) and A7 under the AASHTO M145 soil classification; the density considered for the carbon footprint calculation is 1200 kg/m3 [21], which converts to 74.81 pcf and is rounded to 75 pcf for simpler calculations. A range of application rates for lime has been established between 3%-6% by weight [36]; an application rate of 5% by weight is used here.
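The 0.386 kg/SF/in figure for CKD can be reproduced by applying the 0.06 kg/kg embodied-carbon factor [21] to the weight of a one-square-foot, one-inch-thick layer of 170 pcf stabilized material. A sketch of that conversion (the pound-to-kilogram constant is standard, not from the text):

```python
# Sketch of the unit conversion behind 0.386 kg/SF/in for CKD stabilization:
# 0.06 kg CO2e per kg of stabilized material [21], applied to a 1 SF x 1 in
# layer of soil at 170 pcf. Density and factor are the values from the text.
LB_TO_KG = 0.45359237

def footprint_per_sf_in(density_pcf: float, ec_kg_per_kg: float) -> float:
    """kg CO2e per square foot per inch of layer thickness."""
    layer_weight_lb = density_pcf / 12.0  # 1 SF x 1 in = 1/12 cubic foot
    return layer_weight_lb * LB_TO_KG * ec_kg_per_kg

ckd = footprint_per_sf_in(170.0, 0.06)  # ~0.386 kg/SF/in, as stated in the text
```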
Localities may specify a variety of aggregates for base material. Aggregate base varies in density based on the material and compaction. For the localities included herein, subbase improvements include No. 8 and No. 53 coarse aggregate base material blends as specified by InDOT [36]. An aggregate blend contains a variety of sieve-size materials based on standard U.S. mesh or sieve opening sizes. Densities identified for aggregate base materials range from 100 pcf to 180 pcf. Hammond and Jones [21] provide a density and carbon footprint in their Inventory of Carbon and Energy (ICE). The density provided by ICE is 2,240 kg per cubic meter, which converts to 139.8 pcf, rounded to 140 pcf for simplicity herein.

Potential carbon offsets for infrastructure construction
For a 24' roadway, the statutory right-of-way for most of Oklahoma is 66', identified in the Organic Act of 1890 as 4 rods wide with a rod equal to 16.5 feet [37]. Although this space is "shared" by the property owner and the state, a clear zone [38] is required in the first 7'-10' on either side of the roadway section. Along with highway signs, some low planting occurs in this area, including turf grass. Indigenous plants and xeriscaping would provide the best outcomes with the least carbon emissions associated with installation and care. In OKC and Fishers, xeriscaping is not indigenous and is not considered here; however, there is ample research identifying the carbon sequestration value of native soils and xeriscaping. Bouchard et al. [39] provide some insight into the ditch area on a section with no curb. As the vegetation acts as a filter and swale, it also provides some carbon footprint reduction.

Int J Adv Appl Sci, Vol. 9, No. 3, September 2020: 227 – 239


Potential carbon offsets should be identified, especially those behind the curb or outside the roadbed. Many roadway projects include a variety of landscape elements, and trees may provide a carbon offset averaging 19 kilograms per year at maturity, which is between 12 and 18 inches in trunk diameter and typically over 30 feet in height [40, 41]. Other evidence provides carbon storage in trees and shrubs in grams per square meter based on land use. It is assumed that trees sequester carbon during growth; however, there is also some loss due to lack of maintenance and death. Trees provide benefits in urban areas such as shade and rainwater sequestration; additional benefits include evapotranspiration cooling and wind speed reduction [42]. Turf grass and shrubs can also be used in carbon footprint calculations. Offsets from turf grasses are difficult to calculate because of fertilizer, irrigation and other maintenance such as mowing [43]. In areas where other types of grasses or wildflowers are used, the assumptions would change. Depending on density and life stage, shrubs can provide 0.13-12.93 g/m2 of carbon storage [44]. The vegetative ditch offsets should be compared to an underground utility pipe. Many utilities are outside the traditional project scope of government entities and are self-performed by others. Some utilities may be provided by local government, such as storm sewer, water lines and sanitary sewer lines. An in-depth analysis of these utilities is not provided here, but some discussion is merited. An Inventory of Carbon and Energy (ICE) has been developed by Hammond and Jones [21] specific to construction materials. A comparison of concrete, iron, steel, High Density PolyEthylene (HDPE), PolyVinyl Chloride (PVC) and vitrified clay pipe can also be performed to determine the least carbon footprint.
As with any other comparison, the pipe cannot be considered as a manufactured product alone; the transportation, setting and bedding activities must be analyzed. Reductions can come from other sources as well. Substituting fly ash or slag for a portion of the cement in PCCP can reduce the associated GHG emissions [45]. Warm Mix Asphalt or Recycled Asphalt Paving can also be used to reduce the carbon footprint. This is not an exhaustive list, but it illustrates that there are many alternatives to be considered. The study reviewed a variety of roadway types but is confined to a typical county road section with ditch. As such, there are no roadway lights or sidewalks; however, the framework can be extended to these additional items. Electrical items have continuing costs that are not considered here.

2. RESEARCH METHOD
A review of standard sections was performed for Fishers, IN (Fishers) and Oklahoma City, OK (OKC). Both municipalities publish typical sections online, which is unusual for smaller government entities. A web search was performed for published standards throughout the United States. Departments of Transportation typically rely on design engineers for all of their highway sections; however, it is possible to find county standards, particularly for bridges. Published municipal roadway standards were found for cities in Florida, Indiana, Tennessee, Ohio, Oklahoma, and Washington. Fishers and OKC provided the most information online. An additional reason for focusing on these two locations is the location of the research team; as the team already had knowledge of these locations, they became preferred for the case studies. Starting from the roadway sections, an area per linear foot was determined. Roadways are typically bid per linear foot, so the area per linear foot correlates easily to cost. The area per linear foot also allows the different materials to be indexed for comparison. For HMA sections, tack coat is not included because the pay item for tack coat is frequently in gallons and not in linear feet. A standard for the carbon footprint or greenhouse gas emissions should be determined for roadways that can be compared to bidding for monetization. If GHG is calculated in bidding quantities such as linear foot (LF) or square foot (SF), then the change in cost between options can be compared to the change in GHG. Greenhouse gases are frequently measured in terms of energy used, in British Thermal Units (Btus), Joules or megajoules (MJ). The carbon footprint can be measured through the embodied energy (carbon) of a production cycle, frequently referred to as CO2e. Hammond and Jones propose the common idea of cradle to gate, which captures the production energy prior to leaving the factory [21].
Shipping would be accounted for separately. Chevotis and Galehouse use a similar approach, specifying an expected travel circuit [20]. For this research, the calculations are presented in one set of units. Because the carbon offset due to trees is presented in kg/tree, the appropriate choice of units is the carbon emission of the materials in question, or kg of carbon per unit. Greenhouse gas emissions were calculated for each of the pavement, stabilization and utility materials identified. The GHG for the different materials was converted to an appropriate biddable unit; in most cases the bidding unit is based on linear feet. The options are compared for the least carbon footprint. The two municipalities have similar roadway sections for width and drainage. This is not a comparison of the design of the two sections, but an illustration of choices that could be made.

A comparison of the carbon footprint of pavement infrastructure and associated … (Rachel D. Mosier)

For OKC, a typical 24' HMA section with a ditch consists of 3" Type B HMA over 6" Compacted Subgrade, over 6" Stabilized Aggregate Base or 10" Stabilized Soil. The similar section for Fishers is noted as Main St./Secondary St. and consists of 1.5" Type A HMA Surface over 2.5" Type A HMA Intermediate and 2.5" Type A HMA Base, over 3" Type A HMA Base and 14" Stabilized Subgrade, or 6" Compacted Aggregate Base No. 53 on 14" Stabilized Subgrade. The narrative description is tabulated in Table 3 with the associated carbon footprint.

2.1. Carbon footprint
Itemized lists of carbon footprints have been determined by a variety of groups described in the introduction, including those used for calculations here [20-21], which focus on typical items utilized in construction, although not exhaustively [46]. The carbon footprint of a linear foot of roadway construction has not been previously determined. The carbon footprint per linear foot of construction is necessary for engineers and owners to weigh budget choices against carbon footprint or greenhouse gas emissions. The carbon footprint of each individual layer of material is calculated based on the volume of the overall section, so a carbon footprint in kg/lf can be determined. The GHG or carbon footprint is given in kg/ton. From the densities identified in the Pavement Sustainability section, the kg/ton of material can be found for either HMA or PCCP. Using standard conversions for weight per inch of thickness, the carbon footprint per inch of thickness is determined. This is a useful conversion, as pavement thicknesses vary widely even in standard roadway sections. Adding a stabilized base adds multiple variables to the equation. There are three basic options for chemically stabilizing a soil base: adding fly ash, lime, or CKD. Some methods use a mix of two chemicals, but that is outside the focus of this research.
For simplicity, only one chemical additive is evaluated at a time, based on the application rates given above. Comparing both OKC and Fishers, there are four different depths of soil stabilization: 6", 8", 10" and 14". The carbon footprint for 1" of soil stabilization depends on the technique, with fly ash providing 0.274 kg/SF/in, CKD providing 0.603 kg/SF/in and lime providing 0.812 kg/SF/in. There are many options for reducing the carbon footprint of a roadway. Chevotis and Galehouse [20] have tabulated a variety of carbon footprints associated with roadway maintenance. Although the concrete, asphalt and base materials are considered additive in this paper, utilizing alternative methods such as warm mix asphalt can be considered a potential reduction. Trees are likely second only to soil for carbon sequestration in an urban environment [47]. Calculations for carbon sequestration frequently consider trees as a group, making it difficult to apply a carbon offset to a singular tree. However, some research has focused on individual trees, and more particularly street trees, as a carbon offset [40-48]. From research in the Twin Cities, values on a per-tree basis were determined [39] and are provided in Table 2, adapted from that research. The adapted table uses a street tree lifespan of 50-60 years as provided by Strohbach et al. [22]. A standard tree spacing must also be identified.
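Because the per-inch stabilization factors given above scale linearly with layer depth, the stabilized-soil footprints for any of the four depths follow directly. A minimal sketch, using the three factors from the text and the 10" OKC layer as the worked case:

```python
# Sketch: per-inch soil stabilization footprints from the text, scaled by
# layer depth. A 10 in depth reproduces the OKC stabilized-soil values.
STABILIZATION_KG_PER_SF_IN = {"fly ash": 0.274, "CKD": 0.603, "lime": 0.812}

def stabilization_footprint(technique: str, depth_in: float) -> float:
    """kg CO2e per square foot for a stabilized layer of the given depth."""
    return STABILIZATION_KG_PER_SF_IN[technique] * depth_in

okc_10in = {t: round(stabilization_footprint(t, 10), 2)
            for t in STABILIZATION_KG_PER_SF_IN}
# fly ash 2.74, CKD 6.03, lime 8.12 -- the stabilized-soil rows tabulated for OKC
```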

Table 2. Carbon sequestration of trees (adapted from Akbari [41])

Tree Type                              Carbon (kg)   Tree Type                      Carbon (kg)
Norway maple                           160           Robusta and Siouxland hybrid   745
Sugar maple                            145           Kentucky coffee tree           105
Hackberry                              135           Red maple                      140
American and little-leaved linden      265           White pine                     210
Black walnut                           150           Blackhills (white) spruce      165
Green ash                              180           Blue spruce                    335

Species Average (not including Robusta and Siouxland hybrid)   180
Average Oklahoma and Indiana Species                           153.75

As not all trees are available in all places and some trees exhibit unusually high sequestration rates, two averages were determined for calculations. An average was taken without the Robusta and Siouxland hybrid, which exhibits an exceptionally high sequestration rate. Oklahoma native trees include Black Walnut, which is italicized in Table 2. Indiana native trees include Green Ash, Sugar Maple and Red Maple, which are shown in bold. These trees native to our case studies were also averaged. Spacing may be determined by the designer or engineer for a roadway project. A crown of 50 m2 or 538 SF, approximately 26 feet in diameter [41], is the basis for tree offsets. Using a slight overlap, trees are assumed to be spaced 20' apart, which is a typical street tree spacing. Using the average carbon sequestration and a 50-year life cycle, a carbon
offset per tree can be estimated at somewhere between 150-180 kg over the life of the tree. Based on a 20' spacing, the average carbon offset per linear foot would be 8.25 kg/lf. Adding turf grass through the use of a vegetative drainage channel or ditch, instead of a concrete channel or underground storm sewer, is another carbon offset alternative. Like any other system, there is a carbon footprint to the installation of the system itself, and some additional choices may be made. When using a vegetative "filter strip" or ditch, a value of 36 kg/SF may be used, calculated for a variety of locations in North Carolina. These values may be increased when using a wetland area or an area that is continually wet [38]. Although these results may not be complete enough for extrapolation to all locations, data could be compiled at other locations to obtain a locally appropriate carbon offset. Another option is to reduce the carbon footprint of the associated utilities. Based on the Hammond and Jones inventory [21], the carbon footprint for a variety of pipes can be determined. Using a 12" diameter pipe and weight per linear foot as the basis for consideration, the carbon footprint of typical utility pipe is provided in Figure 1.
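The tree-offset arithmetic above (the Table 2 averages and the 8.25 kg/lf figure) can be sketched in a few lines; all per-tree values are from Table 2, and the 165 kg midpoint of the 150-180 kg range is the one that yields 8.25 kg/lf at a 20' spacing:

```python
# Sketch of the tree-offset arithmetic: averaging the Table 2 sequestration
# values (excluding the high-sequestering Robusta and Siouxland hybrid) and
# spreading a per-tree lifetime offset over a 20 ft street-tree spacing.
TREE_KG = {  # lifetime carbon per tree (kg), from Table 2
    "Norway maple": 160, "Sugar maple": 145, "Hackberry": 135,
    "American and little-leaved linden": 265, "Black walnut": 150,
    "Green ash": 180, "Kentucky coffee tree": 105, "Red maple": 140,
    "White pine": 210, "Blackhills (white) spruce": 165, "Blue spruce": 335,
}
NATIVE = ["Black walnut", "Green ash", "Sugar maple", "Red maple"]

species_avg = sum(TREE_KG.values()) / len(TREE_KG)          # ~180 kg
native_avg = sum(TREE_KG[t] for t in NATIVE) / len(NATIVE)  # 153.75 kg

def offset_per_lf(kg_per_tree: float, spacing_ft: float = 20.0) -> float:
    """Lifetime carbon offset per linear foot of roadway from street trees."""
    return kg_per_tree / spacing_ft

midpoint_offset = offset_per_lf(165.0)  # 8.25 kg/lf, the figure used in the text
```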

Figure 1. Carbon footprint of 12" diameter utility pipe per linear foot [49]

There are material choice limitations. In some areas CKD is required for sandy soils; in other locations, fly ash or lime is more appropriate. A similar requirement is true for water pipe versus storm water pipe: PVC may be required for water, while reinforced concrete pipe is acceptable for stormwater. By itemizing the carbon footprint to include the whole roadway section or the right-of-way limits, the total carbon impact can be determined. Since the right-of-way includes vegetation, the carbon offset will be examined.
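Where material choice is not constrained, the pipe comparison reduces to picking the minimum (or maximum) footprint from the candidate materials. A minimal sketch: only the HDPE and steel values appear in Table 3; a full comparison would also include the concrete, iron, PVC and vitrified clay values from the ICE inventory [21]:

```python
# Sketch: selecting the least- and most-carbon 12 in utility pipe per linear
# foot. The two values below are the HDPE and steel entries from Table 3;
# other materials would be added from the ICE inventory for a full comparison.
PIPE_KG_PER_LF = {"HDPE": 8.3, "steel": 55.39}  # kg CO2e per linear foot

best = min(PIPE_KG_PER_LF, key=PIPE_KG_PER_LF.get)   # "HDPE"
worst = max(PIPE_KG_PER_LF, key=PIPE_KG_PER_LF.get)  # "steel"
```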

3. RESULTS AND ANALYSIS
The purpose of this research was to determine a carbon footprint index. It is preferable both to identify any greenhouse gas emissions and to identify opportunities for carbon sequestration. By providing both types of options, owners, designers and engineers can identify "shopping list" items for their roadway projects. The 24' HMA roadway with no curb is examined in further detail. Soil stabilization options are added, along with a utility pipe section. Trees are considered in the roadway section to reduce the overall carbon footprint. Tabulating all of the options, a maximum and minimum carbon footprint are found, as shown in Table 3. The original pavement sections for both municipalities included options for base material. The soil stabilization methods are optional and may not be applicable in all locations. CKD is typically used in Indiana but may not be used in Oklahoma; however, CKD was the basis for calculating the minimum carbon of both roadway sections, as shown in Figure 2. A subtotal was provided based on the roadway options only. To calculate the maximum including trees and utilities, only the maximum and minimum carbon footprints for pipe were considered, specifically steel and HDPE. Trees were subtracted to further reduce the minimum carbon footprint. The assumption is that the worst case for carbon footprint would be without street trees as an offset.


Figure 2. 24' comparison of CO2e in kg/lf for two typical roadway sections

Table 3. Total carbon footprint for a 24' HMA roadway section

OKC Typ HMA Section 102 - 24'        CO2 (kg/lf)
3" Type "B" Asphalt                  21.67
6" Compacted Subgrade                1.48
*10" Stabilized Soil:
  Fly-Ash (14%)                      2.74
  CKD (5%)                           6.03
  Lime (5%)                          8.12
Or **6" Stabilized Aggregate Base    11.43
No Curb                              0
Subtotal (Max.)                      34.58
Subtotal (Min.)                      25.89
Street Trees @ 20' o.c.              -11.5
Pipe (HDPE Min.)                     8.3
Pipe (Steel Max.)                    55.39
Total (Max.)                         89.97
Total (Min.)                         22.69

Fishers Main St/Secondary St         CO2 (kg/lf)
1.5" Type A HMA Surface              10.83
2.5" Type A HMA Intermediate         15.43
2.5" Type A HMA Base                 15.43
14" Stabilized Subgrade:
  Fly-Ash (14%)                      3.33
  CKD (5%)                           7.88
  Lime (5%)                          10.80
*3" Type A HMA Base                  18.52
Or **6" Compacted Aggregate Base     11.43
No Curb                              0
Subtotal (Max.)                      71.65
Subtotal (Min.)                      56.46
Street Trees @ 20' o.c.              -11.5
Pipe (HDPE Min.)                     8.3
Pipe (Steel Max.)                    55.39
Total (Max.)                         127.04
Total (Min.)                         53.26
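The subtotals and totals in Table 3 are simple sums over the itemized kg/lf values. A minimal sketch for the OKC section, using the tabulated values (max takes the aggregate base and steel pipe with no tree offset; min takes fly-ash stabilization, HDPE pipe and the street-tree offset):

```python
# Sketch: assembling the OKC max/min totals of Table 3 from itemized kg/lf
# values. All figures are the tabulated values from the text.
OKC = {
    "asphalt_3in": 21.67, "compacted_subgrade_6in": 1.48,
    "fly_ash_10in": 2.74, "agg_base_6in": 11.43,
    "trees": -11.5, "pipe_hdpe": 8.3, "pipe_steel": 55.39,
}

subtotal_max = OKC["asphalt_3in"] + OKC["compacted_subgrade_6in"] + OKC["agg_base_6in"]
subtotal_min = OKC["asphalt_3in"] + OKC["compacted_subgrade_6in"] + OKC["fly_ash_10in"]

total_max = subtotal_max + OKC["pipe_steel"]                # 89.97 kg/lf
total_min = subtotal_min + OKC["pipe_hdpe"] + OKC["trees"]  # 22.69 kg/lf
```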

Figure 2 illustrates the comparison of CO2e for the two typical roadway sections of OKC and Fishers. Table 3 shows four options of base materials: 10" stabilized soil with fly ash, 10" stabilized soil with CKD, 10" stabilized soil with lime and 6" stabilized aggregate base. Figure 3, Figure 4, Figure 5 and Figure 6 compare the CO2e of these base materials. Comparing these four base materials, 10" stabilized soil with fly ash produces the minimum CO2e and 6" stabilized aggregate base produces the maximum CO2e.

Figure 3. CO2e (kg/lf) using flyash


Figure 4. CO2e (kg/lf) using CKD



Figure 5. CO2e (kg/lf) using lime


Figure 6. CO2e (kg/lf) using stabilized aggregate base

The maximum CO2e was calculated as 89.97 kg/lf by using 3" Type B asphalt, 6" stabilized aggregate base, 6" compacted subgrade and steel utility pipe in OKC. The minimum CO2e was calculated as 22.69 kg/lf by using 3" Type B asphalt, 6" compacted subgrade, fly ash stabilized soil, HDPE utility pipe and offsetting street trees in OKC. Figure 7 and Figure 8 show the details of the maximum and minimum CO2e of the typical roadway sections of OKC. The maximum represents the worst-case CO2e condition, while the minimum CO2e is comparatively low.

Figure 7. Detail of maximum CO2e in OKC

Figure 8. Detail of minimum CO2e of OKC



In Fishers, the maximum CO2e was calculated as 127.04 kg/lf by using 1.5" Type A HMA surface, 2.5" Type A HMA intermediate, 5.5" Type A HMA base, 6" compacted aggregate base, and steel utility pipe. The minimum CO2e was calculated as 53.26 kg/lf by using 1.5" Type A HMA surface, 2.5" Type A HMA intermediate, 2.5" Type A HMA base, 6" compacted aggregate base, fly ash stabilized subgrade, HDPE utility pipe and offsetting street trees. Figure 9 and Figure 10 show the details of the maximum and minimum CO2e of the typical roadway sections of Fishers. The CO2e for the typical roadway sections of Fishers was higher than for OKC because Fishers uses additional HMA intermediate and base courses.

Figure 9. Detail of maximum CO2e of Fishers

Figure 10. Detail of minimum CO2e of Fishers

4. CONCLUSION
Although a large amount of research is now available quantifying the carbon footprint of a variety of construction materials, that research does not convert the values to match biddable units for U.S. infrastructure construction, and very little has been published on applying the collected carbon footprint values to U.S. infrastructure construction. This research provides further application of the carbon footprint in infrastructure construction by applying known carbon footprint values to actual roadway sections in order to calculate a carbon footprint. The carbon footprint per linear foot of roadway construction was determined in GHG/lf, which can be used by owners and designers to make the best choices for cost and sustainability. Reviewing Table 3, using a 12" diameter pipe and weight per linear foot as the basis for consideration, HDPE produces the minimum CO2e of the typical utility pipes and steel produces the maximum CO2e. Comparing the base materials of fly ash, lime, CKD and aggregates, fly ash stabilized soil base produces the minimum CO2e and stabilized aggregate base produces the maximum CO2e. Comparing the CO2e of the two typical roadway sections of OKC and Fishers, it is obvious that the two
municipalities vary in their minimum roadway section, and this also causes a dramatic difference in carbon footprint. The maintenance of the two different sections would also differ, which would affect the life-cycle carbon footprint; maintenance is not considered here, and a further look into it would be an obvious next step for research. From the larger perspective, enough information has been collected and calculated to start producing a carbon footprint for any infrastructure construction project.

REFERENCES
[1] Santero, N. and Horvath, A., "Global warming potential of pavements," Environmental Research Letters, vol. 4, pp. 034011, 2009.
[2] Collins, F., "2nd generation concrete construction: carbon footprint accounting," Engineering Construction and Architectural Management, vol. 20, no. 4, pp. 330-344, 2013.
[3] Sathaye, N., Horvath, A., and Madanat, S., "Unintended impacts of increased truck loads on pavement supply-chain emissions," Transportation Research Part A, vol. 44, pp. 1-15, 2010.
[4] American Society of Civil Engineers (ASCE), "Report card for America's infrastructure," pp. 153, Washington, DC, Mar 25, 2009. Retrieved from: www.asce.org/reportcard.
[5] Zhang, H., Lepech, M.D., Keoleian, G.A., Qian, S., and Li, C.V., "Dynamic life cycle modeling of pavement overlay systems: Capturing the impacts of users, construction, and roadway deterioration," Journal of Infrastructure Systems, ASCE, vol. 16, no. 4, pp. 299-309, 2010.
[6] Harvey, J.T., J. Meijer, H. Ozer, I.L. Al-Qadi, A. Saboori, and A. Kendall, "Pavement life-cycle assessment framework," Technical Report FHWA-HIF-16-014, pp. 246, 2016.
[7] Center for Transportation and Planning, "Advancing a sustainable highway system: highlights of FHWA sustainability activities," Washington, DC, 2014. Retrieved from: https://www.sustainablehighways.dot.gov/FHWA_Sustainability_Activities_June2014.aspx.
[8] Brown, A., "Carbon footprint of HMA and PCC pavements," Proceedings International Conference on Perpetual Pavements, Columbus, OH, 2009.
[9] Huang, Y., B. Hakim, and S. Zammataro, "Measuring the carbon footprint of road construction using CHANGER," International Journal of Pavement Engineering, vol. 14, no. 6, 2013.
[10] Cole, R. J., "Energy and greenhouse gas emissions associated with the construction of alternative structural systems," Building and Environment, vol. 34, no. 3, pp. 335-348, 1998.
[11] Barandica, J.M., Fernández-Sánchez, G., Berzosa, A., Delgado, J.A., and Acosta, F.J., "Applying life cycle thinking to reduce greenhouse gas emissions from road projects," Journal of Cleaner Production, vol. 57, pp. 79-91, 2013.
[12] Wang, X., Duan, Z., Wu, L., and Yang, D., "Estimation of carbon dioxide emission in highway construction: a case study in Southwest Region of China," Journal of Cleaner Production, vol. 103, pp. 705-714, 2015.
[13] Melanta, S., Miller-Hooks, E., and Avetisyan, H.G., "Carbon footprint estimation tool for transportation construction projects," Journal of Construction Engineering and Management, vol. 139, no. 5, pp. 547-555, 2012.
[14] Wu, L., Mao, X., and Zeng, A., "Carbon footprint accounting in support of city water supply infrastructure siting decision making: A case study in Ningbo, China," Journal of Cleaner Production, vol. 103, no. 23, pp. 737-746, 2015.
[15] Mao, R., Duan, H., Dong, D., Zuo, J., Song, Q., Gang, L., Hu, M., Zhu, J., and Dong, B., "Quantification of carbon footprint of urban roads via life cycle assessment: Case study of a megacity-Shenzhen, China," Journal of Cleaner Production, vol. 166, pp. 40-48, 2017.
[16] Friedrich, E., Pillay, S., and Buckley, C., "Carbon footprint analysis for increasing water supply and sanitation in South Africa: A case study," Journal of Cleaner Production, vol. 17, no. 1, pp. 1-12, 2009.
[17] Mosier, R.D., D. Pittenger, and D.D. Gransberg, "Carbon footprint cost index: Measuring the cost of airport pavement sustainability," Transportation Research Board Annual Meeting Compendium of Papers 2014, Paper #14-3214, 2014.
[18] Liu, X., Q. Cui, and C. W. Schwartz, "Introduction of mechanistic-empirical pavement design into pavement carbon footprint analysis," International Journal of Pavement Engineering, pp. 763-771, 2016.
[19] Gopi, V., B. Senior, J. van de Lindt, K. Strong, and R. Valdes Vasquez, "Carbon dioxide equivalency as a sustainability criterion for bridge design alternatives," 53rd ASC Annual International Conference Proceedings, Seattle, WA, 2017.
[20] Chevotis, J. and Galehouse, L., "Energy usage and greenhouse gas emissions of pavement preservation processes for asphalt concrete pavements," First International Conference on Pavement Preservation, pp. 27-42, 2010.
[21] Hammond, G., and C. Jones, "Inventory of carbon and energy (ICE), version 2.0," Circular Ecology, 2011. Retrieved from: http://www.circularecology.com/embodied-energy-and-carbon-footprint-database.html#.WV_wd8bMx-U.
[22] Strohbach, M.W., Arnold, E., and Haase, D., "The carbon footprint of urban green space-a life cycle approach," Landscape and Urban Planning, vol. 104, pp. 220-229, 2012.
[23] Greenroads International, Greenroads Rating System v2 (J.L. Anderson and S.T. Muench, Eds.), Redmond, WA, 2017.
[24] FHWA, "INVEST (Infrastructure Voluntary Evaluation Sustainability Tool) 1.2," Federal Highway Administration, 2015. [Online] Available: https://www.sustainablehighways.org.




[25] U.S. Green Building Council (USGBC), LEED v.4 for Neighborhood Development, USGBC, 2018. [Online] Available: https://www.usgbc.org/resources/leed-v4-neighborhood-development-current-version.
[26] Institute for Sustainable Infrastructure (ISI), "Envision," 2018. Retrieved from: https://sustainableinfrastructure.org.
[27] Cement Industry of Canada, "Cement Industry Sustainability Report," 2010.
[28] Asphalt Pavement Alliance, "Carbon Footprint: How Does Asphalt Stack Up?" Asphalt Pavement Alliance, 2010.
[29] Leng, Z., I.L. Al-Qadi, and S. Lahouar, "Development and validation for in situ asphalt mixture density prediction models," NDT & E International, vol. 44, no. 4, pp. 369-375, 2011.
[30] Asphalt Institute, "Asphalt Pavement Construction FAQs," Asphalt Institute, 2017. Retrieved from: http://www.asphaltinstitute.org/asphalt-pavement-construction-faqs/.
[31] Johnston, D. W., Formwork for Concrete, Chelsea, MI, American Concrete Institute, pp. 5-3, 2014.
[32] Collins, R.J., and J.J. Emery, "Kiln Dust-Fly Ash systems for highways bases and sub-bases," Federal Highway Administration, Report No. FHWA/RD-82/167, Washington, DC, 1983.
[33] Indiana Department of Transportation (InDOT), "Design Procedures for Soil Modification or Stabilization," Office of Geotechnical Engineering - Production Division, 2008.
[34] American Coal Ash Association (ACAA), "Fly ash facts for highway engineers," Technical Report FHWA-IF-0319, pp. 74, 2003.
[35] Oklahoma Department of Transportation (OkDOT), Standard Specifications Book, 2009.
[36] Solanki, P., Khoury, N.N., and M.M. Zaman, "Engineering properties of stabilized subgrade soils for implementation of the AASHTO 2002 pavement design guide," Final Report FHWA-OK-08-10, OkDOT SPR Item Number 2185:131, 2002.
[37] Everett, D., "Organic Act (1890)," The Encyclopedia of Oklahoma History and Culture. [Online] Available: https://www.okhistory.org/publications/enc/entry.php?entry=OR004. Retrieved Feb. 23, 2019.
[38] American Association of State Highway and Transportation Officials (AASHTO), "A policy on geometric design of highways and streets," 7th Edition, Washington, D.C., American Association of State Highway and Transportation Officials, 2018.
[39] Bouchard, N.R., D.L. Osmond, R.J. Winston, and W.F. Hunt, "The capacity of roadside vegetated filter strips and swales to sequester carbon," Ecological Engineering, vol. 54, pp. 227-232, 2013.
[40] Nowak, D. J., "Atmospheric carbon dioxide reduction by Chicago's urban forest," in E. G. McPherson, D. J. Nowak, and R. A. Rowntree (Eds.), Chicago's urban forest ecosystem: Results of the Chicago urban forest climate project, United States Department of Agriculture, Forest Service, pp. 83-94, 1994.
[41] Akbari, H., "Shade trees reduce building energy use and CO2 emissions from power plants," Environmental Pollution, vol. 116, pp. S119-S126, 2002.
[42] McPherson, E.G., and Simpson, J.R., "Carbon dioxide reductions through urban forestry: guidelines for professional and volunteer tree planters," Gen. Tech. Rep. PSW-171, Albany, CA, USDA Forest Service, Pacific Southwest Research Station, 1999.
[43] Townsend-Small, A., and Czimczik, C. I., Correction to "Carbon sequestration and greenhouse gas emissions in urban turf," Geophysical Research Letters, vol. 37, no. 6, 2010.
[44] McHale, M.R., Hall, S.J., Majumdar, A., and Grimm, N.B., "Carbon lost and carbon gained: a study of vegetation and carbon trade-offs among diverse land uses in Phoenix, Arizona," Ecological Applications, vol. 27, no. 2, pp. 644-661, 2017.
[45] Collins, F., "Inclusion of carbonation during the life cycle of built and recycled concrete: influence on their carbon footprint," The International Journal of Life Cycle Assessment, vol. 15, no. 6, pp. 549-556, 2010.
[46] Mukherjee, A., B. Stawowy, and D. Cass, "Project emissions estimator (PE-2): Tool to aid contractors and agencies in assessing greenhouse gas emissions of highway construction projects," Transportation Research Record 2366, Transportation Research Board, Washington, DC, 2013.
[47] Melson, S.L., M.E. Harmon, J.S. Fried, and J.B. Domingo, "Estimates of live-tree carbon stores in the Pacific Northwest are sensitive to model selection," Carbon Balance and Management, vol. 6, no. 2, 2011.
[48] Tang, Y., A. Chen, and S. Zhao, "Carbon storage and sequestration of urban street trees in Beijing, China," Frontiers in Ecology and Evolution, vol. 4, no. 53, May 2016.
[49] Mosier, R.D., Mohanty, S.K., and Adhikari, S., "Carbon footprint calculation for a typical roadway section," Conference Proceedings, Associated Schools of Construction, April 2018.

BIOGRAPHIES OF AUTHORS With experience as a structural engineer and municipal project manager, Mosier has worked in both commercial and heavy construction. This combination has focused her research on sustainability in buildings and roadways. Mosier has been at Oklahoma State University for five years and has fifteen years of construction experience.

Int J Adv Appl Sci, Vol. 9, No. 3, September 2020: 227 – 239



Dr. Sanjeev Adhikari is a faculty member at Kennesaw State University. Previously he was faculty at Morehead State University from 2009 to 2016 and at Purdue University – Indianapolis from 2016 to 2019. He completed his Ph.D. in civil engineering, focusing on construction management, at Michigan Technological University in 2008. He has an extensive teaching background, with a total of 18 years of academic experience at five different universities. To supplement his teaching and research, he has been involved in numerous professional societies, including ASCE, ACI, ASEE, ASC, ATMAE, and TRB. His research output has been well disseminated: he has published thirty journal papers and thirty-nine conference papers. His research interests are 1) sustainable and resilient construction, 2) structural BIM integration, and 3) carbon footprint analysis of roadways.

Saurav Kumar Mohanty worked as a Graduate Research Assistant while pursuing his Master's degree in Civil Engineering at Oklahoma State University. Saurav has excellent construction experience and a zest for research. He worked in the construction industry in India for approximately one and a half years as a Construction Project Engineer and is presently working as a Project Controls Engineer at SoCalGas in Los Angeles. Saurav also performed research as part of his undergraduate degree at Manipal Institute of Technology, which resulted in a publication. During his master's degree, he published an additional paper, "Carbon Footprint Calculation for a Typical Roadway Section."

A comparison of the carbon footprint of pavement infrastructure and associated … (Rachel D. Mosier)


International Journal of Advances in Applied Sciences (IJAAS) Vol. 9, No. 3, September 2020, pp. 240~254 ISSN: 2252-8814, DOI: 10.11591/ijaas.v9.i3.pp240-254


A study secure multi authentication based data classification model in cloud based system

Sakshi Kaushal1, Bala Buksh2
1Computer Science Engineering, Career Point University, Himachal Pradesh, India
2Computer Science Engineering, R N Modi Engineering College, Rajasthan, India

Article Info

Article history:
Received Jul 2, 2019
Revised Apr 13, 2020
Accepted May 15, 2020

Keywords:
Bayesian technique
Cloud computing
Data mining
Internet

ABSTRACT

Cloud computing is the most popular term among enterprises and in the news. The concept has come true because of fast internet bandwidth and advanced cooperation technology. Resources on the cloud can be accessed through the internet without a self-built infrastructure. Cloud computing effectively manages security in cloud applications. Data classification is a machine learning technique used to predict the class of unclassified data. Data mining uses different tools to discover unknown, valid patterns and relationships in the dataset: mathematical algorithms, statistical models and machine learning (ML) algorithms. In this paper, the authors use an improved Bayesian technique to classify the data and encrypt the sensitive data using hybrid steganography. The encrypted and non-encrypted sensitive data is sent to the cloud environment, and the parameters are evaluated with different encryption algorithms.

This is an open access article under the CC BY-SA license.

Corresponding Author:
Sakshi Kaushal
Computer Science Engineering, Career Point University Kota
Aalniya, Rajasthan 324005, India
Email: sksakshi.kaushal@gmail.com

1. INTRODUCTION
With the growth and use of internet services, cloud computing has become more and more popular in homes, academia, industry, and society [1]. Cloud computing is envisioned as the next-generation architecture of the IT enterprise, whose main focus is to merge the economic service model with the evolutionary growth of many existing approaches and computing technologies: distributed applications, information infrastructures, and services consisting of pools of computers, storage resources and networks. Cloud computing has almost limitless capabilities in terms of processing power and storage [2]. It offers a novel and promising paradigm for organizing and providing information and communication technology (ICT) resources to remote users [3]. Clients do not handle or control the cloud's underlying infrastructure, but they do have control over operating systems, applications, storage, and possibly the selection of their components. At the lowest level, a cloud system is defined as the physical server, comprising the hardware devices, processing units and memory [4]. To allow the distribution of existing services and applications, the cloud server is further divided into multiple virtual machines [5]. Every virtual machine is defined with its own specification and characterization. A logical partition of the existing memory, resources and processing capabilities is made. The separation is based on the requirements of the application and of the user, so as to achieve the desired quality of service. In this type of environment, there can be multiple instances of related services, products and data. When a user enters

Journal homepage: http://ijaas.iaescore.com



the cloud system, the main requirement is the identification of the useful cloud service for that cloud user. There is also the necessity to process the user's request successfully and reliably.
Types of cloud
To deliver a secure cloud computing solution, a major decision is which type of cloud to implement, as shown in Figure 1. There are four cloud deployment models: public, community, private and hybrid [6, 7].

Figure 1. Types of cloud [2]

a. Private cloud
A private cloud is set up within an organization's internal enterprise datacenter. In the private cloud, virtual applications provided by the cloud vendor and scalable resources are pooled together and presented for cloud users to share and use. Deployment on the private cloud can be much more secure than on the public cloud because of its restricted internal exposure: only the organization and selected stakeholders have access to and control of a specific private cloud [8].
b. Hybrid cloud
A hybrid cloud is a private cloud connected to one or more external cloud services, centrally provisioned, managed as a single unit, and confined within a protected network. A hybrid cloud architecture combines a public cloud and a private cloud [9]. It is also an open architecture which allows interfaces with other management systems. In this cloud deployment model, storage, platform, networking and software infrastructure are defined as services that scale up or down depending on demand.
c. Public cloud
A public cloud is a model which gives users access to the cloud through interfaces such as web browsers. The public cloud is the most commonly used cloud computing service [10]. It is usually based on a pay-per-use model, similar to a prepaid electricity metering system, which is capable enough to provide for spikes in demand for cloud optimization. This helps clients match their IT expenditure at the operational level by reducing capital expenditure on IT infrastructure. Public clouds are less secure than the other cloud models because they place an extra burden of ensuring that all data and applications accessed on the public cloud are not subjected to malicious attacks.

A study secure multi authentication based data classification model in cloud based system (Sakshi Kaushal)



d.

Community cloud
A cloud system can be set up specifically for a firm, organization, or institution [11]. The rules for authentication, authorization and usage architecture can be defined exclusively for the organization's users. The policies, requirements and access methods are defined only for those users. Such an organization-private cloud system is called a community cloud.
Data classification
The data is classified into sensitive and non-sensitive data using a supervised machine learning algorithm in order to reduce the data-hiding time [12]. Data classification is done using an improved boosting algorithm, which classifies the data according to its security requirement. Data classification is a machine learning technique used to predict the class of unclassified data. Data mining uses different tools to discover unknown, valid patterns and relationships in the dataset: mathematical algorithms, statistical models and machine learning (ML) algorithms. Consequently, data mining consists of the management, collection, prediction and analysis of data. ML algorithms fall into two classes: supervised and unsupervised.
a. Supervised learning
In supervised learning, the classes are defined in advance. First, a training dataset is defined whose items belong to different classes; these classes are properly labelled with specific names. Most data mining algorithms are supervised learning with a specific target variable. The supervised algorithm is given many values to compare similarity, or to find the distance between the training dataset and an input value, so it learns which input value belongs to which class.
b. Unsupervised learning
In unsupervised learning, classes are not defined in advance; classification of the data is performed automatically. The unsupervised algorithm looks for similarity between two items in order to determine whether they can be characterized as forming a group. These groups are called clusters. In simple words, in unsupervised learning "no target variable is identified". The classification of data in the context of confidentiality is the classification of data based on its sensitivity level and the impact to the organization should that data be disclosed to unauthorized users. Data classification helps determine what baseline security requirements/controls are appropriate for safeguarding the data.
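As an illustration of the supervised route described above, the sketch below trains a small word-count Naive Bayes model to split toy records into sensitive and non-sensitive classes. The training phrases, function names and add-one smoothing are illustrative choices, not taken from the paper:

```python
# Minimal supervised sensitive/non-sensitive text classifier (illustrative only).
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (text, label). Returns priors, per-class word counts,
    per-class totals, and the vocabulary."""
    priors, counts, totals = Counter(), defaultdict(Counter), Counter()
    for text, label in docs:
        priors[label] += 1
        for w in text.lower().split():
            counts[label][w] += 1
            totals[label] += 1
    vocab = {w for c in counts.values() for w in c}
    return priors, counts, totals, vocab

def classify(model, text):
    priors, counts, totals, vocab = model
    n = sum(priors.values())
    best = None
    for label in priors:
        # Log-space score with add-one (Laplace) smoothing for unseen words.
        score = math.log(priors[label] / n)
        for w in text.lower().split():
            score += math.log((counts[label][w] + 1) / (totals[label] + len(vocab)))
        best = max(best, (score, label)) if best else (score, label)
    return best[1]
```

Trained on a few labelled phrases (e.g. account details as "sensitive", marketing text as "non-sensitive"), the classifier assigns new records to the class with the highest posterior score.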

2. LITERATURE REVIEW
Singh et al. [13] presented a complete survey of the different security issues that affect communication in the cloud environment. A complete discussion of the main topics of cloud systems is given, including applications, storage systems and clustering techniques. Cloud system authorization security and additional security concerns are also discussed by the authors, as are module-level deployment and its related security impact. The authors also discuss the related trust and confidence with subject categorization. Different security threats and future solutions are also suggested. The paper also recognizes a few forensic tools used by previous researchers to trace security leakage [13]. A descriptive and comparative survey is given to recognize different security issues and threats, and several solutions proposed by previous researchers are provided. The security integration at different layers of the cloud system is presented, along with partial solutions and concerns. Only a comparative and descriptive review of existing work is given; no analytical explanations are provided by the authors.
Faheem Z. et al. [14] explored various types of internal and external attacks that affect the cloud network. The authors identified the security requirements in cloud systems in depth, and possible mitigation methods are also defined based on previous studies. The authors identified the requirement for security in cloud systems and its relative benefits. The attack impact and attack outcome are provided. The authors mainly provide solutions against authentication attacks, SQL injection attacks, phishing attacks, XML signature wrapping attacks, etc. A study of different attack forms and relative solution methods is provided in the given work.
The attack solutions or protection solutions are provided as an included layer of the cloud system; different detection and avoidance-based approaches are also defined by the authors as an included cloud service [15]. In future, a new attack detection or prevention method is required [14]. Abuhussein, Bedi, and Shiva proposed a comparison between security and privacy attributes such as backup, cloud employee trust, encryption, external network storage, access control, dedicated hardware and data isolation, and monitoring of access to computing services, so that consumers can make a well-educated choice. Cloud-related insider threats lie in three



groups: the cloud provider administrator, an employee in the victim organization, and one who uses cloud resources to carry out attacks [16]. Derbeko et al. [17] defined the security aspects for cloud systems against a variety of attacks in a real-time cloud environment. The computation is provided for a MapReduce scenario with public and private cloud specifications. The privacy of data computation, integrity analysis and accuracy of outcomes are investigated by the authors. The constraint characterization and challenges of the MapReduce scheme for data security are discussed. The security and privacy controls with the master process are defined to attain better security aspects for cloud systems. Different security methods, including authentication, authorization and access control observations, are also provided [17]. In [18, 19], Tawalbeh, Darwazeh, Al-Qassas and Aldosari propose a secure cloud computing model based on data classification. The proposed cloud model minimizes the overhead and processing time needed to secure data by using different security mechanisms with variable key sizes to provide the appropriate confidentiality level required for the data. Data is stored at three levels, basic, confidential and highly confidential, with a different encryption algorithm for each level to secure the data. This proposed model was tested with different encryption algorithms, and the simulation results showed the reliability and efficiency of the proposed framework. a.

Review on the basis of authentication
Cusack and Ghazizadeh [20] examined the service access threat through a study of human behavior. The authors identified the best behavioral trust for the cloud server with single sign-on authorization; identity management and the relative optimization of human actions are also provided. The authors evaluated the particular risk factors underlying the recommended security solutions. These acknowledged risks include human user risk, disclosure risk and service security risk. A trust behavior-based analysis process is provided to manage access behavior.
Contribution:
- Security and trust observations are provided based on analysis of client behavior.
- A single sign-on authorization is provided for efficient identity confirmation and management.
Scope:
- Only the sign-on authorization approach is provided; the authors do not provide a real-time implementation or analysis. In future, such a real-time implementation can be applied to confirm the work.
Yang et al. (2016) presented a full study of the GNFS algorithm for cloud systems. Along with method exploration and analysis, a new block Wiedemann algorithm is provided by the authors. The process is based on strip and cyclic partitioning to perform block encoding. The defined process works on parallel block processing to decrease the processing time. Sequential block processing can be performed to improve information encoding via the improved strip block form, so that information security can be enhanced [21].
Contribution:
- A novel Wiedemann algorithm is provided with enhanced strip block processing for data encoding.
- The parallel block processing has improved the processing for encoding high-volume data.
Scope:
- The assessment of the process is provided under block size and efficiency parameters; no attack consideration is given. In the future, analysis with respect to additional parameters such as file type or complexity measures can be provided. b.

Review on the basis of security assessment
Modic et al. (2016) provided a study of existing cloud security assessment methods for real-world applications. The paper also defined a new security assessment method called the Moving Intervals Process (MIP). A quantitative analysis process is defined, with a processing method based on different security and accessibility parameters. A survey is also designed to determine different categories and structures based on the quantitative requirements. The authors identified the minimum, maximum and real-time requirements and then comparatively performed cost-specific measures. The control measure is defined in the form of scores [22].
Zhang (2014) provided a broader view of cloud security under issue analysis and its financial impact on industry and organizations. Dynamic entity-based control is discussed on a static network. Governance-specific threats and compliance rules are evaluated to improve the robustness of the cloud system. Risk-evaluation-based measures are provided to add long-term practices to the cloud system [23].
Kalloniatis (2013) defined a study to understand various security threats in cloud systems. The privacy and security issues are identified with their related properties. This paper also defined the process to



provide security over the cloud system. Various threats to different cloud service models are identified by the author. Requirements engineering for cloud systems is also identified, with essential challenges and characterization. The access criticality and challenges are explored with threat evaluation and impact analysis [24]. c.

Review on the basis of cloud security framework
Ramachandran (2016) provided a broad descriptive study on various requirements engineering approaches and their management. The paper focused mainly on security as a service layer to improve the security aspects and their distribution to the cloud environment. An integrated service model with a software development system is provided by the author. Analytical research and requirement mapping are provided as an example, used in various models in integrated form. The security privileges of every stakeholder are identified, and techniques for security requirements, methods and maintenance are provided [25].
Contribution:
- A security-as-a-service integrated cloud system development model is provided for scheme design and distribution.
- An analytical model is provided for the obligations of different stakeholders as well as the process stages of the cloud system.
Scope:
- A generalized model is provided with risk rating and threat prioritization. The business-specific model with real-time configuration is not provided. In future, the work can be applied to an existing cloud application or environment.
Chang et al. [21] defined an original security framework for the cloud environment. Multi-layered security protection is provided against different attacks, with high-level data concerns including volume, veracity and velocity. The authors presented a practical model for a zero-knowledge cloud system that does not contain any user information. Information sharing, dependency and data computation are provided to attain an effective block stage. Information encoding and secure communication are provided by the authors, who also maintained private file storage and secure key management in storage. Information sharing and authorization are also managed through key distribution methods. Concerns are also addressed against a variety of attacks applied at different layers of cloud system access.
Contribution:
- A hierarchical layer security framework is provided to attain access control, attack prevention and encoded data storage.
- Secure sharing of data is performed by means of key management and authorization in the cloud file system.
Scope:
- The services can be extended in the form of prototypes so that their use can be enhanced for different cloud business models.
Palmieri et al. [26] performed an adaptive energy-based analysis to identify DoS attacks in cloud data centers. The authors provided service-level analysis under availability, operation cost and energy parameters. They defined the problem of identifying a DoS attack in the network at an early stage and providing attack-resistant communication in the cloud network. The work provides analysis under availability and visibility parameters, with pattern-specific dynamic observation. The energy impact with its potential effect is analyzed for larger infrastructure to identify attacks in the cloud network. Attack ratio analysis and computation are defined to perform attack detection. The authors estimated the attack effect with response-time violation and determined flow analysis-based service degradation. Power management and consumption analysis are provided to give a component-level evaluation of the cloud environment. An energy-proportional system is provided to reduce peak power usage in an attacked cloud network and to reduce the effect of DoS attacks.
Chonka and Abawajy [27] defined a work to detect and reduce the impact of DoS attacks in a web-service-driven cloud network. The authors defined a security system to observe channel communication under common problem identification and to reduce the impact of DoS attacks. A problem analysis for XML DoS attacks is given as a new defense system that can provide a solution with pre-decision and learning-based observation.
The defined network attack detector is able to observe the network under training and testing criteria to provide an effective attack-preserving solution in the DoS-attacked network. Scenario-specific observation with specification of response patterns is analyzed to generate the classification rules and to provide an attack-preventive probabilistic solution. Michelin et al. (2014) used an authentication API to provide a cloud communication solution against DoS attacks. Unresponsive work behavior and relative protocol-specific attack mapping are provided for REST applications. The authors identified client behavior and relatively identified the malicious client in the network through response-time observation. An attack-specific cloud management system is designed to define



a taxonomy for DDoS attack adaptation. An automated method is defined to analyze the network features and generate the attack features. Later, all these features are combined to identify the victim type in the cloud network. Attack-scenario-specific authentication measures define the solution against the DoS attack. At the early stage, the communication is monitored to identify overload conditions, and later a filtration stage is applied to generate effective cloud communication [21].

3. RESEARCH METHOD
3.1. Phases of proposed model
The proposed model is executed in the following stages to meet all of the objectives described as integral parts of this case study.
a. Creating virtual environment
The first stage is to create a virtual environment containing cloud servers, data brokers, virtual machines and tasks.
b. Authentication level
As shown in Figure 2, to retrieve information the user must register with the company/organization to obtain a valid username and password, which is stored in the company/organization's database.

Figure 2. Process of registration

Figure 3 depicts the transmission of the user's request to the cloud. The cloud checks whether the user has entered the correct username and password against its database; only then is the user approved to use the cloud data. For the verification process, authenticated users are matched against the existing data stored in the cloud directories. The user must provide a username and password and answer a security question; if the answers given by the user are correct, access to the cloud is granted.
c. Secure authentication using image sequencing
To address the problem of security in cloud computing, two methods are deployed for stopping security breaches. One provides confidentiality at different user levels (owner, administrator and third party) using an image-sequence-based password, which protects against authentication attacks at the user end. A data-hiding architecture is used for securely transmitting information over the cloud environment. This password is based on sequences of several images, and it is more secure because the sequence of images changes every time. The password is used purely for authentication: only a legitimate user who enters the correct sequence of images is allowed into the cloud. After authentication, during data-access tasks the interface again shows the user the images; this time the images are shuffled, so the sequence-based password also changes.
d. Proposed data classification architecture
Figure 4 shows the classification of data, a machine learning technique used to predict the class of unclassified data. Data mining uses various tools to discover unknown, genuine patterns and relationships within the dataset.
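The image-sequence password idea can be sketched as follows: the server stores only a salted hash of the expected image order, and the displayed grid is re-shuffled each session. All identifiers (the image ids, the fixed demo salt) are illustrative assumptions; a production system would use per-user random salts:

```python
# Sketch of image-sequence authentication (illustrative names and salt).
import hashlib
import hmac
import random

def sequence_digest(image_ids, salt):
    # Hash the ordered sequence of chosen image ids with a salt.
    data = salt + b"|".join(i.encode() for i in image_ids)
    return hashlib.sha256(data).hexdigest()

def register(image_ids, salt=b"demo-salt"):
    # Server stores only the digest, never the raw sequence.
    return sequence_digest(image_ids, salt)

def verify(stored, attempt_ids, salt=b"demo-salt"):
    # Constant-time comparison avoids leaking timing information.
    return hmac.compare_digest(stored, sequence_digest(attempt_ids, salt))

def shuffled_display(image_ids, seed):
    # Each session re-shuffles the grid, so observed click positions change
    # even though the secret *order* of images stays the same.
    grid = list(image_ids)
    random.Random(seed).shuffle(grid)
    return grid
```

Only a user who clicks the images in the registered order reproduces the stored digest; an attacker who watched the screen positions in one session learns nothing useful for the next, reshuffled session.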




Figure 3. Request accessing process

Figure 4. Data classification architecture (flow: fetch unclassified data → improved Naïve Bayes classifier → sensitive data passes through edge-based image steganography before being sent to the virtual machines VM 1/VM 2 and their storage, while non-sensitive data is stored directly)
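A minimal stand-in for the steganography block of Figure 4 is sketched below. It embeds message bits into the least-significant bits of a byte sequence; a genuine edge-based scheme would restrict embedding to high-gradient (edge) pixels, which this simplified version does not do:

```python
# Toy LSB steganography sketch (simplified stand-in for edge-based embedding).

def embed(pixels, message):
    """Hide `message` bytes in the least-significant bits of `pixels`."""
    bits = [(b >> i) & 1 for b in message for i in range(8)]
    assert len(bits) <= len(pixels), "cover too small for message"
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return bytes(out)

def extract(pixels, length):
    """Recover `length` hidden bytes from the pixel LSBs."""
    data = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        data.append(byte)
    return bytes(data)
```

Because only the lowest bit of each byte changes, the cover data is visually almost unchanged while still carrying the sensitive payload.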

These tools are mathematical algorithms, statistical models, and tools for the prediction and assessment of the information. Consequently, data mining comprises the management, gathering, prediction and examination of the data. ML algorithms are divided into two categories: supervised and unsupervised. For supervised learning, first a training dataset is defined which belongs to distinct classes; these classes are properly labelled with specific names. Most data mining algorithms perform supervised learning with a specific target variable. In unsupervised learning, classes are not defined in advance; grouping of the data is performed automatically. The unsupervised algorithm searches for likeness between two items in order to observe whether



they can be characterized as forming a group. In simple words, in unsupervised learning "no target variable is identified". The classification of data in the context of confidentiality is the classification of data based on its sensitivity level and the impact to the organization should that data be disclosed to unauthorized users. Data classification determines what baseline security requirements/controls are appropriate for protecting the data. The data is classified into two categories: confidential and non-confidential (non-exclusive) data. The classification of the data depends on its attributes: the values of the sensitive attributes are designated "confidential" and the values of the non-sensitive attributes are categorized as "non-confidential". By classifying the data into sensitive and non-sensitive data using a supervised machine learning algorithm, the data-hiding time is reduced. Data classification is performed using the enhanced Bayesian algorithm.
Sensitive or confidential data
Confidential data comprises highly essential information of cloud users or the organization. An unauthorized user cannot access confidential/private data. Such data may include the following:
- Individual information: personal identification such as social security ID, passport ID, credit card identification, driver's license number.
- Monetary accounts: banking operational data, financial account numbers.
- Trade data: manufactured products, future planning data.
- Health/medical data: health-related information of a person.
- Administration information: company future planning, government official papers, company group papers.
Non-sensitive or public data
This is also known as non-restricted data. It is used by ordinary people via the web. Data considered non-sensitive or public consists of information which is not vital to the entity or organization. Such data includes marketing material, press releases or preliminary data of an organization. After the classification of the data, the user knows which dataset needs security and which does not need any. To secure the sensitive or private dataset, the RSA encryption cryptographic technique was used. The K-Nearest Neighbor method uses 'n' labelled data samples, where 'n' is the number of data values in the dataset. This can be shown as: D = {d1, d2, d3, ..., d(n)}

(1)

D = the set of the total number of samples. D must have 'n' labelled values; d1, d2, d3, ..., d(n) are distinct data samples. The set of n labelled samples can be represented as: D = {d1, d2, d3, ..., d(n) | C}

(2)

C = the data class for the target values. In this technique only one class is defined for sensitive or confidential data.
K-NN algorithm:
Step 1: Obtain the set of n labelled samples, i.e. D.
Step 2: Choose the value of K.
Step 3: Compute the distance between the new input and every sample in the training dataset.
Step 4: Sort the distances between the neighbours and find the K nearest neighbours based on the Kth distance measure.
Step 5: Determine the neighbours' classes.
Step 6: Find the class of the new input based on the majority of votes.
3.2. Phase 3: data classification
In this proposed work, improved Naïve Bayes machine learning is utilized to improve the performance of the existing KNN technique.
a. Combining Naïve Bayes with a Decision Table, using a decision tree as the meta-classifier.
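The K-NN steps above can be sketched in a few lines; the toy training samples and labels below are illustrative, not from the paper:

```python
# Minimal K-NN classifier following the six steps above (pure Python).
import math
from collections import Counter

def knn_classify(train, query, k):
    """train: list of (feature_vector, label); query: feature vector."""
    # Steps 3-4: compute Euclidean distances and sort to find the K nearest.
    dists = sorted((math.dist(x, query), label) for x, label in train)
    # Steps 5-6: majority vote among the K nearest neighbours.
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

For example, with a few samples labelled "sensitive" and "non-sensitive", a query point near the sensitive cluster is assigned the "sensitive" class by the majority of its three nearest neighbours.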




b.

The meta-learner is a learning scheme that combines the output of the Naive Bayes and Decision Table base learners. The base learners are level-0 models and the meta-learner is a level-1 model; the predictions of the base learners are input to the meta-learner. This will classify the data into basic, confidential and highly confidential classes using the rules induced in the learning algorithms, which identify which attributes of the dataset are under vulnerability attack. The ensemble learning strategy assembles a set of different models together to enhance the prediction and stability power of any single model. It has two levels: base level-0 and meta level-1. At the base level a number of algorithms can run, e.g. AdaBoost and bagging algorithms. At the meta level, also known as the decision-making level, a random forest tree is used. The training sets of data given by the KNN model are computed with a Euclidean distance function. To diminish the computational density, we enhanced the basic ensemble learning algorithm.
Base level-0: Also known as LEVEL-0. A number of algorithms run at the base level, namely Naïve Bayes and Decision Table, i.e. the base learners. These are parts of the ensemble learning strategy. The individual outputs of Naive Bayes and Decision Table are then given to the meta level.
3.3. Working of Naïve Bayes
The Naive Bayes algorithm is a machine learning based approach. The fundamental requirement of a machine learning based approach is a dataset that is already coded with sentiment classes. The classifier is modelled with the labelled data. For the purposes of this work, multinomial Naïve Bayes is used as a baseline classifier because of its efficiency. We assume the feature words are independent and then use each occurrence to classify headlines into the appropriate sentiment class.
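The two-level stacking scheme can be sketched as follows. Simple threshold stumps stand in for the Naive Bayes and Decision Table base learners, and the level-1 meta-learner is a lookup table from base-learner outputs to the majority training class; this is a simplified illustration under those assumptions, not the authors' implementation:

```python
# Minimal two-level stacking sketch (toy stand-ins for the base learners).
from collections import Counter

def train_stump(xs, ys):
    """Level-0 base learner: best >= threshold classifier on one feature."""
    best = None
    for t in sorted(set(xs)):
        preds = [1 if x >= t else 0 for x in xs]
        acc = sum(p == y for p, y in zip(preds, ys))
        if best is None or acc > best[1]:
            best = (t, acc)
    return best[0]

def train_stacking(X, ys):
    # Level 0: one stump per feature, standing in for the base learners.
    thresholds = [train_stump([row[i] for row in X], ys) for i in range(2)]
    # Level 1: map each tuple of base-learner outputs to the majority class.
    votes = {}
    for row, y in zip(X, ys):
        key = tuple(1 if row[i] >= thresholds[i] else 0 for i in range(2))
        votes.setdefault(key, Counter())[y] += 1
    table = {k: c.most_common(1)[0][0] for k, c in votes.items()}
    return thresholds, table

def predict(model, row):
    thresholds, table = model
    key = tuple(1 if row[i] >= thresholds[i] else 0 for i in range(2))
    return table.get(key, 0)
```

The level-1 lookup table plays the role of the meta-learner: it never sees raw features, only the level-0 predictions, which is the defining property of stacking.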
Naive Bayes is used because it is easy to build and implement, and estimating its parameters requires only a small amount of training data. It follows that our classifier, which uses the maximum a posteriori decision rule, can be represented as:

c_MAP = argmax_{c in C} P(c | d) = argmax_{c in C} P(c) ∏_i P(w_i | c)    (3)

where w_i denotes the words in each headline, C is the set of classes used in the classification, P(c | d) denotes the conditional probability of class c given headline d, P(c) denotes the prior probability of a document occurring in class c, and P(w_i | c) denotes the conditional probability of word w_i given class c. To estimate the parameters, equation (3) is computed in log space and reduces to:

c_MAP = argmax_{c in C} [ log P(c) + Σ_i log P(w_i | c) ]    (4)
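The MAP rule of equations (3) and (4) can be sketched as a minimal multinomial Naive Bayes with Laplace smoothing. This is an illustrative sketch, not the authors' code; the toy documents and labels are hypothetical.

```python
import math
from collections import Counter

def train_multinomial_nb(docs):
    """docs: list of (word_list, class_label) pairs.
    Returns log priors and Laplace-smoothed log likelihoods."""
    classes = Counter(c for _, c in docs)
    vocab = {w for words, _ in docs for w in words}
    log_prior = {c: math.log(n / len(docs)) for c, n in classes.items()}
    word_counts = {c: Counter() for c in classes}
    for words, c in docs:
        word_counts[c].update(words)
    log_lik = {}
    for c in classes:
        total = sum(word_counts[c].values()) + len(vocab)  # add-one smoothing
        log_lik[c] = {w: math.log((word_counts[c][w] + 1) / total) for w in vocab}
    return log_prior, log_lik, vocab

def classify(words, log_prior, log_lik, vocab):
    """Equation (4): argmax_c [ log P(c) + sum_i log P(w_i | c) ]."""
    def score(c):
        return log_prior[c] + sum(log_lik[c][w] for w in words if w in vocab)
    return max(log_prior, key=score)

docs = [(["cheap", "pills", "offer"], "spam"),
        (["project", "meeting", "today"], "ham"),
        (["cheap", "offer", "now"], "spam"),
        (["meeting", "schedule", "project"], "ham")]
log_prior, log_lik, vocab = train_multinomial_nb(docs)
print(classify(["cheap", "offer"], log_prior, log_lik, vocab))  # → spam
```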

3.4. Working of decision table
A decision table represents conditional logic as a list of processes that depict business-level rules. Such tables can be used when a constant number of conditions must be evaluated and an exact set of actions is to be taken when those conditions are met. They are similar to decision trees, except that a table has the same number of conditions to evaluate and actions to take in every column, whereas one branch of a decision tree may require more conditions to be evaluated than other branches. The main idea of a decision table is to structure the logic so as to generate rules derived from the data entered into the table. A decision table lists causes (business rule conditions) and effects (business rule actions), denoted by a matrix in which each column represents a single combination. If the rules inside the business can be expressed with templates and data, the decision table technique can accomplish the task: each row of the decision table collects and stores its data uniquely and binds the data to a particular or customized template to generate a rule. Decision tables are not advisable when the rules do not follow a set of templates. The algorithm proceeds as follows:
a. Check whether the classifier can handle the data, using the getCapabilities method.
b. Remove instances with missing values.
c. Add each instance to the decision table.
d. Assign each instance a category by finding the line in the decision table that matches the non-class values of the data item.
e. Use a wrapper method to find a good subset of attributes for inclusion in the table.
f. By eliminating attributes that contribute little or nothing to a model of the dataset, the algorithm reduces the likelihood of over-fitting and creates a smaller, condensed decision table.

Int J Adv Appl Sci, Vol. 9, No. 3, September 2020: 240 – 254
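Steps c and d above can be sketched as follows: the table is built by grouping training rows on a chosen attribute subset and storing the majority class per group, and lookup falls back to the overall majority (default) class. This is an illustrative sketch under the assumption that the wrapper search has already chosen the attribute subset; it is not the Weka implementation.

```python
from collections import Counter, defaultdict

def build_decision_table(rows, labels, attrs):
    """rows: list of attribute tuples; attrs: indices of the attributes kept
    in the table (in the paper these are chosen by the wrapper search)."""
    buckets = defaultdict(list)
    for row, label in zip(rows, labels):
        buckets[tuple(row[i] for i in attrs)].append(label)
    table = {k: Counter(v).most_common(1)[0][0] for k, v in buckets.items()}
    default = Counter(labels).most_common(1)[0][0]  # overall majority class
    return table, default

def table_classify(row, table, default, attrs):
    """Step d: find the line matching the item's non-class values;
    fall back to the default class when no line matches."""
    return table.get(tuple(row[i] for i in attrs), default)

rows = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild"), ("rain", "hot")]
labels = ["no", "no", "yes", "no"]
table, default = build_decision_table(rows, labels, attrs=(0,))
print(table_classify(("rain", "cool"), table, default, (0,)))
```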



g. Search the attribute space greedily, either top to bottom or bottom to top. A top-to-bottom search adds attributes at each stage; this is called forward selection. A bottom-to-top search starts with the full set of attributes and deletes one attribute at a time; this is backward elimination.

A decision table for a data set D with n attributes A1, A2, ..., An is a table with schema R(A1, A2, ..., An, Class, Sup, Conf). A row Ri = (a1i, a2i, ..., ani, ci, supi, confi) in table R represents a classification rule, where aji (1 ≤ j ≤ n) is either a value from DOM(Aj) or the special value ANY, ci ∈ {c1, c2, ..., cm}, minsup ≤ supi ≤ 1 and minconf ≤ confi ≤ 1, with minsup and minconf predetermined thresholds. The rule is interpreted as: if (A1 = a1i) and (A2 = a2i) and ... and (An = ani) then class = ci, with probability confi and support supi, for every aji ≠ ANY, 1 ≤ j ≤ n. The generated decision table is used to classify unseen data samples. To classify an unseen data sample u(a1u, a2u, ..., anu), the decision table is searched to find rows that match u, that is, rows whose attribute values are either ANY or equal to the corresponding attribute values of u. Unlike a decision tree, where the search follows one path from the root to one leaf node, searching for matches in a decision table can result in none, one or more matching rows.

3.5. One matching row is found
If there is only one row ri(a1i, a2i, ..., ani, ci, supi, confi) in the decision table that matches u(a1u, a2u, ..., anu), then the class of u is ci.
More than one matching row is found: when more than one matching row is found for a given sample, there are a number of alternatives for assigning the class label. Assume that k matching rows are found and that the class label, support and confidence of row i are ci, supi and confi respectively. The class of the sample, cu, can be assigned in one of the following ways:
a. based on confidence and support:

(5)
b. based on weighted confidence and support:

(6)
Ties are treated similarly. Note that if the decision table is sorted on (Conf, Sup), the first method is easy to implement: we simply assign the class of the first matching row to the sample being classified. In fact, our experiments indicated that this simple method performs no worse than the others.
c. No matching row is found: in most classification applications the training samples cannot cover the whole data space, so the decision table generated by grouping and counting may not cover all possible data samples. For such samples, no matching row will be found in the decision table. The simplest way to classify them is to use the default class. There are alternatives: for example, we can first find the row that is the nearest neighbour (in some distance metric) of the sample in the decision table and then assign that row's class label to the sample. The drawback of the nearest-neighbour approach is its computational complexity.
Meta level-1: here we apply a decision tree as the meta classifier. The predictions of the base learners (Naive Bayes and Decision Table) are the inputs to the meta-learner.
d. Decision tree (basic principle): the decision tree classification algorithm is a widely used classification algorithm in data mining. It operates in a divide-and-conquer manner, recursively partitioning the training data set based on its attributes until the stopping conditions are satisfied. A decision tree consists of nodes, edges and leaves. Each node has a corresponding data set and specifies the attribute that best divides that set into its classes; each node has several edges that specify possible values or value ranges of the attribute selected at the node. The decision tree algorithm recursively visits each decision node, selecting the optimal split, until no further splits are possible.
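Returning to the decision-table classification in section 3.5 above, the row-matching rules (ANY wildcards, selection by confidence and support, and the default class for unmatched samples) can be sketched as follows. This is an illustrative sketch, not the paper's code; the rule format is a hypothetical dictionary layout.

```python
def matches(rule, sample):
    """A rule row matches when each attribute is ANY or equals the sample's value."""
    return all(a == "ANY" or a == v for a, v in zip(rule["attrs"], sample))

def classify_by_table(sample, rules, default):
    """Pick the matching row with the highest (confidence, support) pair,
    i.e. the first row when the table is pre-sorted on (Conf, Sup);
    fall back to the default class when no row matches."""
    hits = [r for r in rules if matches(r, sample)]
    if not hits:
        return default
    best = max(hits, key=lambda r: (r["conf"], r["sup"]))
    return best["class"]

rules = [
    {"attrs": ("sunny", "ANY"), "class": "no",  "conf": 0.9, "sup": 0.4},
    {"attrs": ("ANY", "hot"),   "class": "no",  "conf": 0.6, "sup": 0.3},
    {"attrs": ("rain", "mild"), "class": "yes", "conf": 0.8, "sup": 0.2},
]
print(classify_by_table(("rain", "mild"), rules, "no"))  # → yes
```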
The basic steps of the J48 algorithm for growing a decision tree are given below:
- Choose an attribute for the root node
- Create a branch for each value of that attribute



- Split cases according to the branches
- Repeat the process for each branch until all cases in the branch have the same class

How is an attribute chosen as the root node? First, we calculate the gain ratio of each attribute; the root node is the attribute whose gain ratio is maximal. The gain ratio is calculated by (7):

GainRatio(A) = Gain(A) / SplitInfo_A(D)    (7)

where A is the attribute whose gain ratio is calculated. The attribute A with the maximum gain ratio is selected as the splitting attribute. This attribute minimizes the information needed to classify the tuples in the resulting partitions; such an approach minimizes the expected number of tests needed to classify a given tuple and helps to ensure that a simple tree is found. The data set of the node is divided into subsets according to the specifications of the edges, and the decision tree creates a child node for each data subset and repeats the dividing process. When a node satisfies the stopping rules, because it contains a homogeneous data set or no further distinguishing attributes can be determined, the decision tree terminates the dividing process and the node is labelled with the class label of its data set. Such a labelled node is called a leaf node. In this way the decision tree recursively partitions the training data set, creating a tree-like structure.
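The gain-ratio computation of equation (7) can be sketched as follows, using the standard C4.5 definitions (Info, Info_A and SplitInfo). This is an illustrative sketch, not the J48 source.

```python
import math
from collections import Counter

def entropy(labels):
    """Info(D): expected bits needed to identify the class of a sample."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(values, labels):
    """GainRatio(A) = (Info(D) - Info_A(D)) / SplitInfo_A(D), as in C4.5/J48."""
    n = len(labels)
    groups = {}
    for v, y in zip(values, labels):          # partition labels by attribute value
        groups.setdefault(v, []).append(y)
    info_a = sum(len(g) / n * entropy(g) for g in groups.values())
    split_info = -sum((len(g) / n) * math.log2(len(g) / n) for g in groups.values())
    gain = entropy(labels) - info_a
    return gain / split_info if split_info else 0.0

# a perfectly separating attribute has the maximal gain ratio
print(gain_ratio(["a", "a", "b", "b"], [0, 0, 1, 1]))  # → 1.0
```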

e. Proposed system: the flowchart of the proposed system is shown in Figure 5. The first step creates a secure virtual cloud environment, with continued authentication via image-sequencing passwords. The next steps are data classification, with the input dataset as explained in the introduction section, followed by encryption, sending and evaluation.

The flowchart comprises the following steps:
- Create a secure virtual cloud environment
- Secure authentication using image sequencing passwords
- Input dataset
- Classification of data into sensitive and non-sensitive using the improved Naive Bayes technique
- Encrypting the sensitive data using the edge-based image steganography algorithm
- Sending the encrypted sensitive data and the non-encrypted data to the cloud environment
- Evaluate the performance by parameters such as encryption time, data uploading time, classification time and classification accuracy, and compare the results with previous techniques

Figure 5. Proposed system flowchart

4. RESULTS AND DISCUSSION
The proposed technique is implemented with the help of CloudSim and the NetBeans IDE 8.0. CloudSim is a library that provides a simulation environment for cloud computing and also supplies the essential classes



describing virtual machines, data centers, users and applications. NetBeans is an environment in which applications are developed from components called software modules. We use the CloudSim simulator for the experimental work; CloudSim is a framework for modelling and simulating cloud computing services and infrastructure.

4.1. Classification phase
The classification of objects is a basic subject of study and of practical application in fields such as pattern recognition, statistics, artificial intelligence, vision analysis and medicine. A particularly smart way to secure the information is to first classify it into confidential and non-confidential data and then secure only the sensitive data. This reduces the overhead of encrypting the entire data set, which would be especially expensive in terms of both time and memory. Many encryption strategies can be used to encode the information, and many classification algorithms are available in the field of data mining for classifying it.

4.2. Results for KNN algorithm
Figure 6 shows the classification results of the KNN algorithm: of a total of 109 instances, 43 are classified correctly. The time taken to classify the data is 329 milliseconds. With the KNN algorithm, the weighted average TP and FP Rate is 0.394, Precision is 0.156, Recall is 0.394, F-Measure is 0.223 and ROC Area is 0.471.
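The KNN baseline evaluated here classifies a sample by a majority vote among its k nearest training points under the Euclidean distance, as described in section 3. A minimal sketch (illustrative, with hypothetical toy data, not the experimental code):

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Majority vote among the k nearest training points (Euclidean distance)."""
    order = sorted(range(len(train_X)), key=lambda i: math.dist(x, train_X[i]))
    votes = Counter(train_y[i] for i in order[:k])
    return votes.most_common(1)[0][0]

X = [[0, 0], [0, 1], [5, 5], [5, 6]]
y = ["basic", "basic", "confidential", "confidential"]
print(knn_predict(X, y, [0, 0.5], k=3))  # → basic
```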

Figure 6. Classification results of KNN algorithm

4.3. Results for improved Bayesian classifier
Figure 7 shows the classification results of the improved Bayesian classifier: of a total of 109 instances, 71 are classified correctly. The time taken to classify the data is 797 milliseconds. With the improved Bayesian classifier, the weighted average TP Rate is 0.605, FP Rate is 0.182, Precision is 0.659, Recall is 0.651, F-Measure is 0.654 and ROC Area is 0.768. Figures 7 and 8 show that the improved Bayesian classifier performs better than the KNN algorithm, i.e. it classifies the data more accurately.
Figure 8 shows that the time taken to hide the data inside the image is 554 milliseconds, which is less than the 8154 milliseconds taken by the RSA algorithm. Here the sensitive data is hidden using a hybrid steganography approach: the data is first converted into binary format, and this binary data is hidden inside the edges of the input image, which are found using Canny edge detection. From this set of edges, randomization is applied to select edges, and the binary sensitive data is hidden in the least significant bit of each randomly selected edge pixel. In this way the whole payload is hidden, the edge pixel positions carrying data are saved in a file, and that file is sent to the cloud. The final encrypted image and the original image are compared using histogram equalization, which shows that both images have the same histogram, i.e. the image is not visibly distorted after hiding the data; this is because the LSB method replaces the least significant bit with the data bit instead of the most significant bit.
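The LSB embedding step described above can be sketched as follows: each payload bit overwrites the least significant bit of the pixel at a chosen (e.g. edge) position, and the list of positions acts as the key for extraction. This is an illustrative sketch on a flat list of pixel intensities, not the paper's implementation (which selects positions via Canny edge detection and randomization).

```python
def embed_bits(pixels, positions, bits):
    """Write each payload bit into the LSB of the pixel at the chosen position."""
    out = list(pixels)
    for pos, bit in zip(positions, bits):
        out[pos] = (out[pos] & ~1) | bit   # clear LSB, then set it to the data bit
    return out

def extract_bits(pixels, positions):
    """Recover the payload by reading the LSB at each recorded position."""
    return [pixels[pos] & 1 for pos in positions]

pixels = [100, 101, 102, 103]          # toy 'image'
positions, bits = [0, 2], [1, 0]       # toy edge positions and payload
stego = embed_bits(pixels, positions, bits)
print(extract_bits(stego, positions))  # → [1, 0]
```

Because only the least significant bit changes, each modified pixel differs from the original by at most 1, which is why the cover and stego images have nearly identical histograms.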


Figure 7. Classification results of the improved Bayesian classifier

Figure 8. Sending random pixel position values to the cloud environment

With the proposed technique, the increase in privacy entails no information loss: the results show that we recover the complete sensitive data, so data utilization is preserved. Tables 1 and 2 show the performance analysis of the data classification algorithms. Table 1 and Figure 9 show the correctly and incorrectly classified instances for KNN and the improved Bayesian classifier; Table 2 shows the performance of the algorithms on the basis of error rate. Both tables show that the proposed classification algorithm performs better than the existing KNN algorithm, as shown in Figure 10.

Table 1. Performance of data mining algorithms
                                   KNN   Improved Bayesian Classifier
Correctly classified instances     43    71
Incorrectly classified instances   66    38

Figure 9. Correctly and Incorrectly Classified instances comparison of KNN algorithm with the proposed Algorithm

Table 2. Detailed comparison of accuracy based on error
                              KNN     Improved Bayesian Classifier
Mean absolute error           0.4355  0.2646
Root mean squared error       0.4668  0.434
Relative absolute error       0.9993  0.607
Root relative squared error   1.00    0.929

Figure 10. Detailed comparison of accuracy based on error

Figures 11, 12 and 13 show the performance analysis of the proposed methodology against the previous method. In Figure 13, the encryption time of the existing technique is 8154 msec while the encryption time of the proposed technique is 554 msec. The performance graphs clearly show that the proposed technique is better than the previous approach. Figure 11 shows the accuracy comparison of the KNN and the proposed improved Bayesian classification algorithms: KNN has an accuracy of 39% while the improved classifier reaches 65%, i.e. the proposed algorithm classifies the data more correctly



and performs better than the KNN algorithm. Similarly, Figure 12 shows the data hiding time comparison between the proposed and previous approaches: the proposed hybrid technique takes 8000 milliseconds while the existing algorithm takes 6000 milliseconds to hide the sensitive data. Figure 13 shows the time taken to encrypt the unclassified and the classified data; the figure clearly shows that the encryption time of classified data is less than that of unclassified data. From the above analysis, the proposed methodology performs better in terms of accuracy, classification time and data hiding time.

Figure 11. Accuracy comparison of KNN algorithm with the proposed Bayesian Algorithm

Figure 12. Comparison of data hiding time of existing and the proposed Hybrid algorithm

Figure 13. Comparison of encryption time of existing and proposed technique

5. CONCLUSION
Cloud computing must effectively manage the security of cloud applications. Data classification is a machine learning technique used to predict the class of unclassified data. Data mining uses different tools to discover unknown, valid patterns and relationships in a dataset; these tools include mathematical algorithms, statistical models and machine learning (ML) algorithms. In this paper the authors use an improved Bayesian technique to classify the data and encrypt the sensitive data using hybrid steganography. The encrypted sensitive data and the non-encrypted data are sent to the cloud environment, and the parameters are evaluated against different encryption algorithms. The proposed hybrid technique takes 8000 milliseconds and the existing algorithm takes 6000 milliseconds to hide the sensitive data. Comparing the time taken to encrypt unclassified and classified data shows that the encryption time of classified data is less than that of unclassified data. In conclusion, the proposed methodology performs better in terms of accuracy, classification time and data hiding time.




Institute of Advanced Engineering and Science
Indonesia: D2, Griya Ngoto Asri, Bangunharjo, Sewon, Yogyakarta 55187, Indonesia
Malaysia: 51 Jalan TU 17, Taman Tasik Utama, 75450 Malacca, Malaysia

COPYRIGHT TRANSFER FORM
(Please complete this form, sign it, and send it by e-mail)
Please complete and sign this form and send it back to us with the final version of your manuscript. It is required to obtain written confirmation from authors in order to acquire copyrights for papers published in the International Journal of Advances in Applied Sciences (IJAAS).

Full Name and Title:
Organisation:
Address and postal code:
City:
Country:
Telephone/Fax:
E-mail:
Paper Title:
Authors:

Copyright Transfer Statement
The copyright to this article is transferred to the Institute of Advanced Engineering and Science (IAES) if and when the article is accepted for publication. The undersigned hereby transfers any and all rights in and to the paper, including without limitation all copyrights, to IAES. The undersigned hereby represents and warrants that the paper is original and that he/she is the author of the paper, except for material that is clearly identified as to its original source, with permission notices from the copyright owners where required. The undersigned represents that he/she has the power and authority to make and execute this assignment. We declare that:
1. This paper has not been published in the same form elsewhere.
2. It will not be submitted anywhere else for publication prior to acceptance/rejection by this Journal.
3. Copyright permission is obtained for materials published elsewhere which require this permission for reproduction.
Furthermore, I/We hereby transfer the unlimited rights of publication of the above-mentioned paper in whole to IAES. The copyright transfer covers the exclusive right to reproduce and distribute the article, including reprints, translations, photographic reproductions, microform, electronic form (offline, online) or any other reproductions of similar nature.
The corresponding author signs for and accepts responsibility for releasing this material on behalf of any and all co-authors. This agreement is to be signed by at least one of the authors, who has obtained the assent of the co-author(s) where applicable. After submission of this agreement signed by the corresponding author, changes of authorship or in the order of the authors listed will not be accepted.

Retained Rights/Terms and Conditions
1. Authors retain all proprietary rights in any process, procedure, or article of manufacture described in the Work.
2. Authors may reproduce or authorize others to reproduce the Work or derivative works for the author's personal use or for company use, provided that the source and the IAES copyright notice are indicated, the copies are not used in any way that implies IAES endorsement of a product or service of any employer, and the copies themselves are not offered for sale.
3. Although authors are permitted to re-use all or portions of the Work in other works, this does not include granting third-party requests for reprinting, republishing, or other types of re-use.

Yours Sincerely,

Corresponding Author‘s Full Name and Signature Date: ……./……./…………

International Journal of Advances in Applied Sciences (IJAAS) email: ijaas@iaescore.com, iaes.editor@gmail.com


Guide for Authors

International Journal of Advances in Applied Sciences (IJAAS) is a peer-reviewed and open access journal dedicated to publishing significant research findings in the field of applied and theoretical sciences. The journal is designed to serve researchers, developers, professionals, graduate students and others interested in state-of-the-art research activities in applied science, engineering and technology areas, which cover topics including: industrial engineering, materials & manufacturing; mechanical, mechatronics & civil engineering; food, chemical & agricultural engineering; telecommunications, computer science, instrumentation, control, electrical & electronic engineering; and acoustic & music engineering. Papers are invited from anywhere in the world, and so authors are asked to ensure that sufficient context is provided for all readers to appreciate their contribution.

The types of papers
The types of papers that may be considered for inclusion are: 1) original research; 2) short communications; and 3) review papers, which include meta-analyses and systematic reviews.

How to submit your manuscript
All manuscripts should be submitted online at http://ijaas.iaescore.com

General Guidelines
1) Use the IJAAS guide (http://iaescore.com/gfa/ijaas.docx) as a template.
2) Ensure that each new paragraph is clearly indicated. Present tables and figure legends on separate pages at the end of the manuscript.
3) Number all pages consecutively. Manuscripts should also be spellchecked using the facility available in most good word-processing packages.
4) Extensive use of italics and emboldening within the text should be avoided.
5) Papers should be clear, precise and logical and should not normally exceed 3,000 words.
6) The Abstract should be informative and completely self-explanatory, provide a clear statement of the problem and the proposed approach or solution, and point out major findings and conclusions. The Abstract should be 150 to 250 words in length and written in the past tense.
7) The keyword list provides the opportunity to add keywords, used by the indexing and abstracting services, in addition to those already present in the title. Judicious use of keywords may increase the ease with which interested parties can locate the article.
8) The introduction should provide a clear background, a clear statement of the problem, the relevant literature on the subject, the proposed approach or solution, and the new value of the research, i.e. its innovation. It should be understandable to colleagues from a broad range of scientific disciplines.
9) Explain the research chronologically, including the research design and research procedure. The description of the course of the research should be supported by references, so the explanation is scientifically acceptable.
10) Tables and figures are presented centered.
11) The results and discussion section should explain the results and at the same time give a comprehensive discussion.
12) A good conclusion should state that what was expected, as stated in the "Introduction" section, is ultimately borne out in the "Results and Discussion" section, so that there is compatibility. Moreover, the prospects for developing the research results and the application prospects of further studies may also be added (based on the results and discussion).
13) References should be cited in the text by the numbering system (in IEEE style), [1], [2] and so on. Only references cited in the text should be listed at the end of the paper.

One author should be designated as corresponding author and provide the following information:
• E-mail address
• Full postal address
• Telephone and fax numbers

Please note that any papers which fail to meet our requirements will be returned to the author for amendment. Only papers which are submitted in the correct style will be considered by the Editors.


International Journal of Advances in Applied Sciences (IJAAS) Institute of Advanced Engineering and Science (IAES) e-mail: ijaas@iaesjournal.com

IJAAS Journal Order Form

Volume | Number | Amount | Price (Rp) | Price (USD) | Total

Name:
Company:
Address:
City / State:
Zip:
Telephone/Fax:
Email:

........................, ..........................

Signature: ..................................

Order form for subscription should be sent to the editorial office by fax or email

Payment by Bank Transfer
Bank account name (please be exact)/Beneficiary: LINA HANDAYANI
Bank name: CIMB NIAGA
Bank branch office: Kusumanegara
City: Yogyakarta
Country: Indonesia
Bank account #: 5080104447117
Bank SWIFT: BNIAIDJAXXX
>>> Please find the appropriate price in the price list on the next page >>>


The price list for domestic and foreign subscribers

Volume  Numbers  Year   Price (IDR) for domestic subscribers  Price (USD) for foreign subscribers
1       1-4      2012   290,000.00                            36
2       1-4      2013   290,000.00                            36
3       1-4      2014   319,000.00                            40
4       1-4      2015   319,000.00                            40
5       1-4      2016   349,000.00                            44
6       1-4      2017   349,000.00                            44
7       1-4      2018   349,000.00                            44
8       1-4      2019   349,000.00                            44
9       1-4      2020   349,000.00                            44

The price includes the printing, handling, packaging and postal delivery fees of the hardcopy to the address of the authors or subscribers (by Registered Mail). For foreign subscribers, an additional fee is charged if you would like your order mailed via Express Mail Service (EMS):
- $25 for the ASIA continent
- $35 for the AUSTRALIA continent
- $35 for the AFRICA continent
- $39 for the AMERICA continent
- $39 for the EUROPE continent
(No additional fee for delivering your order by Registered Mail.)

