
UK: Managing Editor, International Journal of Innovative Technology and Creative Engineering, 1a Park Lane, Cranford, London TW5 9WA, UK. E-Mail: Phone: +44-773-043-0249

USA: Editor, International Journal of Innovative Technology and Creative Engineering, Dr. Arumugam, Department of Chemistry, University of Georgia, GA-30602, USA. Phone: 001-706-206-0812, Fax: 001-706-542-2626

India: Editor, International Journal of Innovative Technology & Creative Engineering, Dr. Arthanariee. A. M, Finance Tracking Center India, 261 Mel Quarters, Labor Colony, Guindy, Chennai - 600032. Mobile: 91-7598208700





From Editor's Desk


Our digital era has shown tremendous progress in research, technology design and creative/innovative thinking. Computer games such as multiplayer interactive games and interactive media have emerged as some of the most vibrant elements of today's entertainment and military industries respectively.

The following are some of the areas being worked on in the current technological market:

i) Expanding computer technology, in terms of hardware and software, for different media.
ii) Validating innovative procedures, including algorithms and architectures, for technological advancement.
iii) Exploring novel applications of computer gaming technology for entertainment.

Apart from media and entertainment, when it comes to technology and the workplace, we must admit that there is always a perceived gap between available skill sets and workplace requirements. We need an integrated approach that blends computer-skill competencies with those of arts and management so as to improve employability: a connecting factor that ties computer technology and the arts/management disciplines together, to produce technically minded management professionals whose profile suits any workplace. This journal has many Ph.D.-qualified members who can help promote research on a continuous basis.

This issue presents many innovative and creative research papers that will help readers gain knowledge and lead them along the right path.

Editorial Team IJITCE


Editorial Members

Dr. Chee Kyun Ng, Ph.D., Department of Computer and Communication Systems, Faculty of Engineering, Universiti Putra Malaysia, UPM Serdang, 43400 Selangor, Malaysia.

Dr. Simon SEE, Ph.D., Chief Technologist and Technical Director at Oracle Corporation; Associate Professor (Adjunct) at Nanyang Technological University; Professor (Adjunct) at Shanghai Jiaotong University, 27 West Coast Rise #08-12, Singapore 127470.

Dr. sc. agr. Horst Juergen SCHWARTZ, Ph.D., Humboldt-University of Berlin, Faculty of Agriculture and Horticulture, Asternplatz 2a, D-12203 Berlin, Germany.

Dr. Marco L. Bianchini, Ph.D., Italian National Research Council; IBAF-CNR, Via Salaria km 29.300, 00015 Monterotondo Scalo (RM), Italy.

Dr. Nijad Kabbara, Ph.D., Marine Research Centre / Remote Sensing Centre / National Council for Scientific Research, P.O. Box 189, Jounieh, Lebanon.

Dr. Aaron Solomon, Ph.D., Department of Computer Science, National Chi Nan University, No. 303, University Road, Puli Town, Nantou County 54561, Taiwan.

Dr. Arthanariee. A. M, M.Sc., M.Phil., M.S., Ph.D., Director, Bharathidasan School of Computer Applications, Ellispettai, Erode, Tamil Nadu, India.

Dr. Takaharu KAMEOKA, Ph.D., Professor, Laboratory of Food, Environmental & Cultural Informatics, Division of Sustainable Resource Sciences, Graduate School of Bioresources, Mie University, 1577 Kurimamachiya-cho, Tsu, Mie, 514-8507, Japan.

Mr. M. Sivakumar, M.C.A., ITIL., PRINCE2., ISTQB., OCP., ICP, Project Manager - Software, Applied Materials, 1a Park Lane, Cranford, UK.

Dr. Bulent Acma, Ph.D., Anadolu University, Department of Economics, Unit of Southeastern Anatolia Project (GAP), 26470 Eskisehir, Turkey.

Dr. Selvanathan Arumugam, Ph.D., Research Scientist, Department of Chemistry, University of Georgia, GA-30602, USA.

Contents

1. A Web Based Information & Advisory System for Agriculture … [1]
2. Performance Evaluation on the Basis of Energy in NoCs … [6]
3. Implementation of Authentication and Transaction Security based on Kerberos … [10]
4. Cultural Issues and Their Relevance in Designing Usable Websites … [20]
5. Software Cost Regression Testing Based Hidden Markov Model … [30]
6. Handoff scheme to enhance performance in SIGMA … [40]
7. A Fast Selective Video Encryption Using Alternate Frequency Transform … [45]
8. Impact of Variable Speed Wind Turbine driven Synchronous Generators in Transient Stability of Power Systems … [54]


A Web Based Information & Advisory System for Agriculture

Shrikant G. Jadhav #1, G.N. Shinde *2

#1 Department of Computer Science, Yeshwant Mahavidyalaya, Nanded-431601 [MS], INDIA
*2 Principal, Indira Gandhi College, CIDCO, Nanded-431605 [MS], INDIA

Abstract: The business of farming has entered a new era, an age in which the key to success is precise, timely information and careful decision-making. Now that production is stagnating, it has become essential that farmers collect important, up-to-date information about their crops and obtain proper advice regarding farming.

quick decision-making is therefore required to ensure the profitable performance of farmers [1,2].

II IT INITIATIVES IN INDIA FOR AGRICULTURE

In the era of IT and globalization, different government bodies, NGOs and leading business entities have come forward with IT initiatives that support agricultural business and related activities. Some of these are introduced below.

This paper introduces IT initiatives in India for agriculture such as AGMARKNET and DACNET, and also discusses a web-based information and advisory system for agriculture implemented using HTML and JavaScript. The paper focuses on the development methodology used, the system functions, and the constraints and obstacles for the system.

1) Agricultural Marketing Information System AGMARKNET:


This initiative was taken by the Department of Agriculture & Cooperation, Ministry of Agriculture, Govt. of India. As a step towards the globalization of agriculture, the Directorate of Marketing & Inspection (DMI) embarked upon an IT project, the NICNET-based Agricultural Marketing Information System Network (AGMARKNET), during the Ninth Plan, to link all important APMCs (Agricultural Produce Market Committees), State Agricultural Marketing Boards/Directorates and DMI regional offices located throughout the country for effective information exchange on market prices. The advantages of the AGMARKNET database accrue to farmers, who gain the choice of selling their produce in the nearest market at remunerative prices [3].

Keywords: Agriculture, AGMARKNET, DACNET, Advisory service, farmers' guide, software engineering

I INTRODUCTION

Agriculture is one of the most important sectors for human beings all over the world. In India, nearly 70% of the population depends on agriculture. Credit for the increased production of agricultural products in the past can be given to the efforts of farmers. Now that production is stagnating, it has become essential that farmers collect important, up-to-date information about their crops and obtain proper advice regarding farming [1]. Keeping this in view, there is a need for a farmer advisory system that could help farm entrepreneurs in their farming.

2) DACNET:

The business of farming has entered a new era, an age in which the key to success is precise, timely information and careful decision-making. International competition has resulted in continued pressure on profit margins. Moreover, the farmer has to decide among various production options using the results of the latest developments in research and technology. Informed and

The Department of Agriculture and Cooperation (DAC), Ministry of Agriculture, and the National Informatics Centre (NIC) have implemented this project. Its aim is to strengthen the ICT infrastructure in all the Directorates, Regional Directorates and their field units.



DACNET is an e-governance project to facilitate Indian 'Agriculture-on-line'. It was built around key criteria such as ease of use, speed of delivery, simplicity of procedure and single-window access [4].

global one [7]. The study has several objectives:

3) iKisan Project:

iKisan is the ICT initiative of the Nagarjuna group of companies, the largest private entity supplying farmers' agricultural needs. iKisan was set up with two components: a website, to provide agricultural information online, and technical centres at village level. The project operates in Andhra Pradesh and Tamil Nadu [5].


• To make an effort to present a solution that bridges the information gap by exploiting advances in information technology.
• To propose a framework for a cost-effective agricultural information system that disseminates expert agricultural knowledge to the farming community to improve crop productivity.
• To develop a web-based farmer advisory system for farmers in Nanded, in the Marathwada region of Maharashtra state.

4) Warana Wired Village project:

The Warana cooperative complex in Maharashtra has become famous as a forerunner of successful integrated rural development emerging from the cooperative movement. The Warana cooperative sugar factory, registered in 1956, has led this movement, resulting in the formation of over 25 successful cooperative societies in the region. The total turnover of these societies exceeds Rs. 60 million. Warana Nagar has an electronic telephone exchange connecting nearly 50 villages, which has permitted dial-up connections from village kiosks to the servers located at Warana Nagar. There are many infrastructure facilities in and around Warana Nagar. About 80% of the population is agriculture-based, and an independent agricultural development department has been established by the cooperative society. The region is considered one of the most agriculturally prosperous in India [6].

IV THE METHODOLOGY

Software engineering's classic life cycle method is used for developing the proposed farmer advisory system. The classic life cycle, also called the linear sequential model, is a widely used paradigm for such system development [8].

Figure-1: Linear Sequential Model


As shown in Figure-1, the linear sequential model encompasses the following activities:

India possesses valuable agricultural knowledge and expertise. However, a wide information gap exists between the research level and practice. Indian farmers need timely expert information to make them more productive and competitive.

System/information engineering and modelling: System engineering and analysis encompass requirements gathering at the system level, together with a small amount of top-level design and analysis.

Given the widespread nature of India in terms of weather and culture, it is better practice to establish farmer advisory systems region-wise. Such a system will be beneficial for a particular region because it contains local information rather than

Software requirements analysis: The requirements gathering process is intensified and focused specifically on software. To understand the nature of the program(s) to be built, the software engineer ("analyst") must


INTERNATIONAL JOURNAL OF INNOVATIVE TECHNOLOGY & CREATIVE ENGINEERING (ISSN:2045-8711) VOL.1 NO.2 FEBRUARY 2011

understand the information domain for the software, as well as required function, behavior, performance, and interface. Requirements for both the system and the software are documented and reviewed.

Design: Software design is actually a multistep process that focuses on four distinct attributes of a program: data structure, software architecture, interface representations, and procedural (algorithmic) detail.

specifying the functions and constraints of the proposed system.

a) System Functions:

• The system should provide fundamental geographical information for the region.
• The system should provide information about agricultural products for the region.
• The information should include basic product details, suitable conditions for the product, and crop management and protection.
• The system should be able to answer queries posed by the end user.
• The system should provide other supporting information and links to useful resources.

b) System Constraints:

Code generation: The design must be translated into a machine-readable form; the code generation step performs this task.

Testing: Once code has been generated, program testing begins. The testing process focuses on the logical internals of the software, ensuring that all statements have been tested, and on the functional externals, conducting tests to uncover errors and to ensure that defined input produces actual results that agree with the required results.

Several constraints were found with the system. The performance of the system depends on the advisor: the advisor must always check user queries and provide timely responses, which keeps the information useful for the end user. Regular updating of information such as rainfall, climate changes and market prices is essential for the system administrator. If these constraints are observed, the system will be very useful for farmers in the region.

Support: Software will undoubtedly undergo change after it is delivered to the customer. Support is the phase in which the required changes are performed. Software support/maintenance reapplies each of the preceding phases to an existing program rather than a new one.


3 The Design Phase:

1 The Information Gathering Phase:

The design phase focuses on developing the framework and establishing the architecture of the system. The proposed system is an integrated system of humans and technology, so it is essential to understand the role and place of these components in the system. Figure-2 shows the schematic outline of the structure of the system.

The information gathering phase is important in any system development, as it establishes the foundation for the new system. For our system we gathered information from different sources, including:

• Information gathering through different web resources
• Visits to the local APMC, Nanded
• Interaction with the farmers in the region
• Historical data collected from the Tahasil Office, Nanded

The proposed system has the following components:

2 The Analysis Phase:

The analysis phase bridges the gap between system engineering and system design. In this phase we defined the scope of work by


• Farmer: should have easy access to information and convenient facilities for posting queries.
• System Administrator: should continuously update the system and act as the interface between the farmer and the agricultural experts.
• Agricultural Experts: continuously receive feedback, update information from their sources, and provide responses to the administrator.


Figure 4: Snapshot of the home page

Figure-2: Schematic Outline of the Structure of the System

Figure 5: Snapshot of the page where the user can select the crop

Figure-3: Schematic working of the System

The system is developed using HTML and JavaScript. The main interface is the 'index.html' file, which is the home page of the system. From this home page, links are provided to the various functions such as accessing information and posting queries. Figures 4, 5 and 6 show a few snapshots of the proposed system.
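As an illustration of how such an HTML/JavaScript system can serve crop information, the sketch below shows a minimal client-side lookup. The crop names, fields and the getCropAdvice function are hypothetical, not taken from the paper's implementation:

```javascript
// Hypothetical crop-information store; in the real system this content
// would live in the pages linked from index.html.
const cropInfo = {
  cotton: {
    season: "kharif",
    soil: "black cotton soil",
    protection: "monitor for bollworm and apply the recommended pesticide",
  },
  soybean: {
    season: "kharif",
    soil: "well-drained medium black soil",
    protection: "treat seeds against fungal disease before sowing",
  },
};

// Return advisory text for the crop selected by the user, or point
// the user to the query interface when no entry exists.
function getCropAdvice(crop) {
  const info = cropInfo[crop.toLowerCase()];
  if (!info) {
    return "No information available for this crop. Please post a query.";
  }
  return `Season: ${info.season}; soil: ${info.soil}; protection: ${info.protection}.`;
}

console.log(getCropAdvice("Cotton"));
```

In the actual pages such a function would be wired to the crop-selection page shown in Figure 5.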

Figure 3 above shows the schematic working model of the system. The farmer (user) interacts with the system through its URL. The home page provides various options for the user and in turn contains different types of farming information. The system interface is expected to be user friendly. The user can download useful information if required, and can also use the query interface to post a query and ask for advice. A query posted by a user is received in the administrator's mailbox; the administrator forwards the query to an agricultural expert, the expert sends a suggested answer back to the administrator, and the administrator finally forwards this reply to the user.
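The farmer-to-administrator-to-expert-and-back loop described above can be sketched as a small JavaScript simulation. The in-memory mailboxes and function names below are illustrative stand-ins for the e-mail hand-offs, not code from the paper:

```javascript
// In-memory stand-ins for the administrator's and farmer's mailboxes.
const adminInbox = [];
const farmerInbox = [];

// The farmer posts a query through the web interface; it arrives in
// the administrator's mailbox.
function postQuery(farmer, question) {
  adminInbox.push({ farmer, question });
}

// The administrator forwards each pending query to the agricultural
// expert and relays the expert's reply back to the farmer.
function processQueries(askExpert) {
  while (adminInbox.length > 0) {
    const { farmer, question } = adminInbox.shift();
    const reply = askExpert(question); // expert consulted here
    farmerInbox.push({ farmer, reply });
  }
}

postQuery("farmer1", "Which fertilizer suits black cotton soil?");
processQueries((q) => `Expert advice for: ${q}`);
```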

Figure 6: Snapshot of the page providing crop protection information


VI SYSTEM EVALUATION

One of the reasons is that expert/scientific information is not reaching the farming community. Indian farmers need timely expert information to make them more productive and competitive.

India is expected to become a "knowledge society" in the coming few years, in which any farmer in a remote village can access information using IT resources [9]. To achieve a knowledge society in the agricultural sector, there should be an agricultural information centre in each village, but there are certain barriers to achieving this [10].

Here an attempt is made by developing '', a web-based farmer advisory system for farmers in Nanded, in the Marathwada region of Maharashtra state. Given the widespread nature of India in terms of weather and culture, it is better practice to establish farmer advisory systems region-wise. Such a system will benefit a particular region because it contains local rather than global information. It will also be useful in removing the information gap that exists between the research level and actual business practice.

Significant obstacles are as follows:

• Poor literacy rate.
• Language barriers.
• Unawareness of technology.
• Unavailability of technical resources.
• Unavailability of skilled human resources.
• Electricity problems.

All of the above are foundational problems. Government organizations, NGOs, researchers and educational institutions need to come forward, decide uniform policies, and apply effort to solve these problems [7]. As long as such problems remain, it is very difficult to make efficient use of IT for agricultural development.



REFERENCES

[1] I. V. Subba Rao (2002), "Indian agriculture: Past laurels and future challenges", in Indian Agriculture: Current Status, Prospects and Challenges, 27th Convention of Indian Agricultural Universities Association, December 9-11, 2002, pp. 58-77.
[2] J. C. Katyal, R. S. Paroda, M. N. Reddy, Anupam Varma, N. Hanumanta Rao (2000), "Agricultural scientists' perception on Indian agriculture: scene, scenario and vision", National Academy of Agricultural Sciences, New Delhi, 2000.
[3] Agmarknet Documentation (2008), "Marketing research and information network: Revised operational guidelines for Agmarknet". Retrieved on July 11, 2010.
[4] Dacnet Brochure (2009). Retrieved on July 11, 2010.
[5] Deepak Kumar (2005), "Private Sector Participation in Indian Agriculture: An Overview", Business Environment, July 2005, pp. 19-24.
[6] Shaik N. Meera, Anita Jhamtani (2004), "Information and communication technology in agricultural development: a comparative analysis of three projects from India", Agricultural Research & Extension Network, Network Paper No. 135. Retrieved on July 11, 2010.
[7] P. Krishna Reddy (2004), "A Framework of Information Technology Based Agriculture Information Dissemination System to Improve Crop Productivity", Proceedings of the 22nd Annual Conference of the Andhra Pradesh Economic Association, D.N.R. College, Bhimavaram, February 14-15, 2004.
[8] Pressman, R. (2000), "Software Engineering: A Practitioner's Approach" (5th ed.), McGraw-Hill.
[9] Rita Sharma (2002), "Reforms in agricultural extension: new policy framework", Economic and Political Weekly, July 27, 2002, pp. 3124-3131.
Shinde G. N., Jadhav S. G. (2008), "The Role of ICT in Agricultural Development: An Indian Perspective", paper presented at the National Conference on "Advances in Information Communication Technology", Computer Society of India, Allahabad Chapter, Allahabad, March 15-16, 2008.
[10] Vayyavuru Sreenivasulu and H. B. Nandwana (2001), "Networking of agricultural information systems and services in India", INSPEL (2001), Vol. 4, pp. 226-235.

Efforts should be made to increase the literacy rate. It has been observed that skilled people are not interested in working in rural areas; such people should be encouraged and given incentives to work there. The necessary funds for resources should be made available. Efforts should be made to incorporate IT in all endeavours related to agricultural development. The organizations and departments concerned with agricultural development need to realize the potential of IT for the speedy dissemination of information to farmers.

VII CONCLUSION

The business of farming has entered a new era, an age in which the key to success is precise, timely information and careful decision-making. In this era, now that production is stagnating, it has become essential that farmers collect important, up-to-date information about their crops and obtain proper advice regarding farming. From the Indian farming perspective, the farming community faces a multitude of problems in maximizing crop productivity. In spite of successful research on new agricultural practices in various areas of farming, the majority of farmers are not obtaining the best possible yield, for several reasons.



Performance Evaluation on the Basis of Energy in NoCs

Lalit Kishore Arora #1, Rajkumar *2

#1 MCA Dept, AKG Engg College, Ghaziabad, UP, India
*2 CSE Dept, Gurukul Kangri Vishva Vidyalaya, Haridwar, UK, India

Abstract— Classical interconnection approaches such as point-to-point and bus-based networks have recently been replaced by a new approach, the Network-on-Chip (NoC). NoCs can consume significant portions of a chip's energy budget, so analyzing their energy consumption early in the design cycle is important for architectural design decisions. Although numerous studies have examined NoC implementation and performance, few have examined energy. This paper determines the energy efficiency of some basic NoC topologies. We compared them, and the results show that the CMesh topology consumes less energy than the Mesh topology.

memory system performance suggest that the relative importance of NoCs will increase in future CMP designs. As a result, there has been significant research in topologies [7], [16], [28], router microarchitecture [15], [21], wiring schemes [4], and power optimizations [32]. Nevertheless, there is a great need for further understanding of interconnects for large-scale systems at the architectural level. Previous studies have focused on CMPs [18], used synthetic traffic patterns [7], [15], [21] or traces [28], or have not modelled the other components of the memory hierarchy [16]. In a previous paper [33] we determined the network energy efficiency of the Fat Tree and the Mesh, and the results showed that the Mesh consumes less energy than the Fat Tree topology. Here we determine the network energy efficiency (in pJ/bit) as a function of network bandwidth for networks with a fixed size of 64 nodes running different traffic patterns. We vary the network bandwidth by changing the channel width; the four data points for each topology correspond to channel widths of 16, 24, 48 and 72 bits.

Keywords: Network-on-Chip, Interconnection Networks, Topologies, Multi-core processor.

I. INTRODUCTION

In the design cycle of systems-on-chip (SoCs) [20], the main emphasis is on the computational aspect. However, as the number of components on a single chip and their performance continue to increase, the design of the communication architecture plays a major role in defining the area, performance and energy consumption of the overall system. Furthermore, with technology scaling, global interconnects cause severe on-chip synchronization errors, unpredictable delays and high power consumption [27]. To remove these effects, the network-on-chip (NoC) approach has recently emerged as a promising alternative to classical bus-based and point-to-point (P2P) communication architectures [31], [1], [25]. The remainder of this paper is organized as follows. Section 2 explains the related work and the motivation behind this work. Section 3 gives an overview of the topologies used in this experiment. Section 4 describes the results of the energy-consumption experiments for both topologies.

III. TOPOLOGIES FOR EVALUATION

The topology defines how routers are connected with each other and with the network endpoints. For a large-scale system, the topology has a major impact on the performance and cost of the network. Our study aims to determine the energy consumed by network topologies across a range of network parameters including network bandwidth, traffic pattern and network frequency. In the experiments we study two realistic topologies, the Mesh and the concentrated Mesh (CMesh).

A. Mesh Topology

Linear arrays are called 1-D meshes and are incrementally scalable. When dealing with a mesh, we usually assume that its dimension n is fixed; to change its size, we change the side lengths. The most practical meshes are, of course, 2-D and 3-D ones [6]. In a mesh network, the nodes are arranged in a k-dimensional lattice of width w, giving a total of w^k nodes [usually k = 1 (linear array) or k = 2 (2-D array),

II. MOTIVATION

To connect the increasing number of cores in a scalable way, researchers are evaluating packet-switched networks-on-chip (NoCs) [9], [10], [23]. The increasing disparity between wire and transistor delay [11] and the dependence between the interconnect and


router imposes a minimum latency (e.g., 3 cycles) and is a potential point of contention. A large number of hops has a direct impact on the energy consumed in the interconnect for buffering, transmission and control. Hence, meshes could face performance and power scalability issues in large-scale systems. To address this shortcoming, researchers have proposed meshes with physical [8] or virtual [17] express links.

e.g. the ICL DAP]. Communication is allowed only between neighbouring nodes. All interior nodes are connected to 2k other nodes. The most important mesh-based parallel computers are Intel's Paragon (2-D mesh) [14] and the MIT J-Machine (3-D mesh); transputers also used a 2-D mesh interconnection. Processors in mesh-based machines are allocated by submeshes, and the submesh allocation strategy must handle possible dynamic fragmentation and compaction of the global mesh network, similarly to hypercube machines [30].
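The lattice description above translates directly into two small formulas, w^k nodes and 2k interior neighbours; the following sketch (illustrative only, not from the paper) makes them concrete:

```javascript
// Total number of nodes in a k-dimensional mesh of width w: w^k.
function meshNodes(w, k) {
  return Math.pow(w, k);
}

// Each interior node connects to 2k neighbours (two per dimension).
function interiorDegree(k) {
  return 2 * k;
}

// An 8x8 2-D mesh gives the 64-node network used in the experiments.
console.log(meshNodes(8, 2)); // 64
console.log(interiorDegree(2)); // 4
```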

IV. EVALUATION

Our network-on-chip (NoC) topology study aims to determine the energy efficiency of network topologies across a range of network parameters including network bandwidth, traffic pattern and network frequency. In the experiments we compared the Mesh and CMesh topologies, using an RTL-based router model and a SPICE-based channel model to obtain the energy results. The router RTL was placed and routed using a commercial 45 nm low-power library running at 200 MHz; the channel model uses technology parameters from the same library. Figure 3 shows the network energy efficiency (in pJ/bit) as a function of network bandwidth for networks with a fixed size of 64 nodes running uniform random traffic. We change the network bandwidth by changing the channel width; the four data points for each topology correspond to channel widths of 16, 24, 48 and 72 bits. For each channel-width configuration, the network runs at 50% of saturation bandwidth.
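The pJ/bit metric used here is simply network energy divided by bits delivered, and the bandwidth sweep follows from the channel width and clock frequency. A sketch with placeholder numbers (the helper names and example values are illustrative, not the paper's measurements):

```javascript
// Energy efficiency in pJ/bit: total network energy divided by the
// number of bits delivered over the measurement interval.
function energyPerBit(totalEnergyPJ, bitsDelivered) {
  return totalEnergyPJ / bitsDelivered;
}

// Aggregate bandwidth in Gbps for a given channel width (bits),
// network frequency (MHz) and number of channels.
function networkBandwidthGbps(channelWidthBits, freqMHz, channels) {
  return (channelWidthBits * freqMHz * channels) / 1000;
}

// Placeholder configuration: 16-bit channels at 200 MHz, 64 channels.
console.log(networkBandwidthGbps(16, 200, 64)); // 204.8
console.log(energyPerBit(1000, 500)); // 2 pJ/bit
```

Widening the channel (24, 48 or 72 bits) scales the bandwidth term linearly, which is how the four data points per topology are generated.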

Figure-1. Mesh

B. CMesh Topology

The 2-D mesh is a popular interconnect choice in large-scale CMPs [5], [14]. Here T represents the number of sources and destinations in the network, and C the degree of concentration, in nodes per router. Each of the T/C routers connects to its four neighbouring routers and to C source or destination nodes. Concentration is typically applied to reduce the number of routers and therefore the number of hops; a mesh with a concentration factor is commonly referred to as a CMesh.
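Using the paper's notation (T terminals, concentration C), the router count and router radix of a CMesh follow directly; a minimal illustrative sketch:

```javascript
// A CMesh with T terminals and concentration C needs T / C routers.
function cmeshRouters(T, C) {
  return T / C;
}

// Each CMesh router has up to C terminal ports plus 4 mesh ports.
function cmeshRouterRadix(C) {
  return C + 4;
}

// 64 terminals with concentration 4: 16 routers of radix 8, versus
// 64 routers in a plain mesh (one terminal per router).
console.log(cmeshRouters(64, 4)); // 16
console.log(cmeshRouterRadix(4)); // 8
```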

Figures 3, 4 and 5 show the effect of varying traffic patterns on the energy efficiency of both network topologies. Each network configuration runs at 50% of saturation throughput under the test traffic pattern, and both the Mesh and CMesh topologies use dimension-order routing. In Figure 4, under transpose traffic with dimension-order routing, much of the network infrastructure is idle except for a few heavily loaded channels; as a result, the energy per bit of the Mesh topology increases. In Figure 5, nearest-neighbour traffic heavily favours the Mesh topology: each node in the Mesh has a dedicated channel to each of its immediate neighbours, which results in very high network bandwidth. For the other topology, nearest-neighbour traffic under-utilizes network resources such as the long channels; as a result, these under-utilized resources decrease the energy efficiency of the CMesh topology compared to the Mesh.

Figure-2. CMesh

The major advantage of the mesh is its simplicity. All links are short and balanced, and the overall layout is very regular. The routers are low-radix, with up to C + 4 input and output ports, which reduces their area footprint, power overhead and critical path. The major disadvantage is the large number of hops that flits potentially have to traverse to reach their final destination (proportional to √N for N routers). Each



Fig. 3. Network energy per bit sent under uniform random traffic vs. network bandwidth

Fig. 4. Network energy per bit sent under transpose traffic vs. network bandwidth

Fig. 5. Network energy per bit sent under nearest neighbour traffic vs. network bandwidth

(In each plot, energy per bit (pJ) is on the vertical axis and network throughput (Gbps) on the horizontal axis, with one curve each for Mesh and CMesh.)
V. CONCLUSION AND FUTURE SCOPE

As discussed in [33], we have shown that the Mesh gives better energy efficiency than the Fat Tree. Here we compared two other popular interconnection networks, the Mesh and CMesh topologies. After evaluating the Mesh and CMesh under different traffic patterns, we found that the CMesh topology consumes less energy than the Mesh topology, as shown in the charts. In future work we intend to evaluate two more topologies, CMesh and FBFly, with the above traffic patterns.





REFERENCES

[1] A. Hemani, et al., "Network on Chip: An Architecture for Billion Transistor Era," in Proc. IEEE NorChip Conf., Nov. 2000.
[2] ARG Database CD.
[3] B. Messmer and H. Bunke, "A Decision Tree Approach to Graph and Subgraph Isomorphism Detection," Pattern Recognition, Dec. 1999.
[4] Balasubramonian, R., Muralimanohar, N., Ramani, K., and Venkatachalapathy, V., "Microarchitectural wire management for performance and power in partitioned architectures," in Proc. 11th International Symposium on High-Performance Computer Architecture, IEEE, Los Alamitos, CA, 2005.
[5] Bell, S., Edwards, B., Amann, J., Conlin, R., Joyce, K., Leung, V., MacKay, J., Reif, M., Bao, L., et al., "TILE64 processor: A 64-core SoC with mesh interconnect," in Proc. International Solid-State Circuits Conference, IEEE, Los Alamitos, CA, 2008.
[6] Benini, L. and De Micheli, G., "Networks on chips: a new SoC paradigm," IEEE Computer, pp. 70-78, Jan. 2002.
[7] Bononi, L., Concer, N., Grammatikakis, M., Coppola, M., and Locatelli, R., "NoC topologies exploration based on mapping and simulation models," in Proc. 10th Conference on Digital System Design Architectures, Methods and Tools, IEEE, Los Alamitos, CA, 2007.
[8] Dally, W., "Express cubes: Improving the performance of k-ary n-cube interconnection networks," IEEE Trans. Comput., 40, 9, pp. 1016-1023, 1991.
[9] Dally, W. J. and Towles, B., "Route packets, not wires: On-chip interconnection networks," in Proc. 38th Conference on Design Automation, ACM, New York, 2001.
[10] De Micheli, G. and Benini, L., "Networks on chip: A new paradigm for systems on chip design," in Proc. Conference on Design, Automation and Test in Europe, ACM, New York, 2002.
[11] Ho, R., Mai, K., and Horowitz, M., "The future of wires," Proc. IEEE, 89, 4, 2001.
[12] Intel Tera-scale Computing Research Program, 2008.
[13] Kim, J., Park, D., Theocharides, T., Vijaykrishnan, N., and Das, C. R., "A low latency router supporting adaptivity for on-chip interconnects," in Proc. 42nd Annual Conference on Design Automation, ACM, New York, 2005.
[14] Kim, M. M., Davis, J. D., Oskin, M., and Austin, T., "Polymorphic on-chip networks," in Proc. 35th Annual International Symposium on Computer Architecture, ACM, New York, 2008.
[15] Kumar, A., Peh, L.-S., and Jha, N. K., "Token flow control," in Proc. 41st Annual International Symposium on Microarchitecture, IEEE, Los Alamitos, CA, 2008.
[16] Kumar, R., Zyuban, V., and Tullsen, D. M., "Interconnections in multi-core architectures: Understanding mechanisms, overheads and scaling," in Proc. 32nd Annual International Symposium on Computer Architecture, ACM, New York, 2005.
[17] Leiserson, C. E., "Fat-trees: Universal networks for hardware-efficient supercomputing," IEEE Transactions on Computers, Vol. C-34, pp. 892-901, Oct. 1985.


[19] M. Kreutz, et. al. “Communication Architectures for SystemOn-Chip,” In 14th Symp. on Integrated Circuits and Systems Design, Sep. 2001. [20] MULLINS, R., WEST, A., AND MOORE, S. Low-latency virtual-channel routers for on-chip networks. In Proceedings of the 31st Annual International Symposium on Computer Architecture. ACM, New York, 2004.. [21] Ohring, S.R.; Ibel, M.; Das, S.K.; Kumar, M.J., “On generalized fat trees”, in proceedings Parallel Processing Symposium, 1995. [22] OWENS, J. D., DALLY, W. J., HO, R., JAYASIMHA, D. N., KECKLER, S. W., AND PEH, L.-S. Research challenges for

[28] Vernon , “Performance analysis of mesh interconnection networks with deterministic routing”, IEEE [29] Transactions on parallel and distributed systems, vol. 5, no3, pp. 225-246, 1994. [30] VS Adve, MK Vernon, “Performance analysis of multiprocessor mesh interconnection networks with wormhole routing”, Computer Sciences Technical Report #1001a, June ,1992. [31] W. Dally and B. Towles, “Route packets, not wires: On-chip interconnection networks,” In Proc. 38th DAC, June 2001. [32] WANG, H., PEH, L.-S., ANDMALIK, S. Power-driven design of router microarchitectures in onchip networks. In Proceedings of the 36th Annual International Symposium on Microarchitecture. IEEE, Los Alamitos, CA 2003.. [33] LALIT K. ARORA, RAJKUMAR, Network-on-Chip Evaluation Study for Energy, Proceeding of International conference on Reliability, Infocom Technology and Optimization, Lingaya's University, India, pg 314-320, Nov,2010

on-chip interconnection networks. IEEE Micro. 27, 5, 96–108. 2007. [23] P. Foggia, et al., “A Performance Comparison of Five Algorithms for Graph Isomorphism,” In Proc. 3rd IAPR TC-15 Workshop on Graphbased Representations in Pattern Recognition, May, 2001. [24] P. Guerrier, A. Greiner, “A Generic Architecture for On-Chip Packet Switched Interconnections,” In Proc. DATE, March 2000. [25] Pierre Guerrier, Alain Greiner, “A generic architecture for onchip packet-switched interconnections”, Proceedings of the conference on Design, automation and test in Europe,pp 250 – 256, 2000 [26] S. Murali, G. De Micheli, “SUNMAP: A Tool for Automatic Topology Selection and Generation for NoCs,” In Proc. 41st DAC, June 2004. [27] TOTA, S., CASU, M. R., AND MACCHIARULO, L. 2006. Implementation analysis of NoC: a MPSoC trace-driven approach. In Proceedings of the 16th Great Lakes Symposium on VLSI. ACM, New York.V.S. Adve, M.K.



Implementation of Authentication and Transaction Security based on Kerberos

Prof. R. P. Arora, Head of the Department, Computer Science and Engineering, Dehradun Institute of Technology, Dehradun
Ms. Garima Verma, Assistant Professor, MCA Department, Dehradun Institute of Technology, Dehradun

Abstract— Kerberos is a network authentication protocol. It is designed to provide strong authentication for client/server applications by using secret-key cryptography. Kerberos was created by MIT as a solution to network security problems. The Kerberos protocol uses strong cryptography so that a client can prove its identity to a server (and vice versa) across an insecure network connection. After a client and server have used Kerberos to prove their identity, they can also encrypt all of their communications to assure privacy and data integrity as they go about their business.

INTERNATIONAL JOURNAL OF INNOVATIVE TECHNOLOGY & CREATIVE ENGINEERING (ISSN:2045-8711) VOL.1 NO.2 FEBRUARY 2011

In this paper we implement authentication and transaction security in a network using Kerberos. The project embeds an authentication server application that derives a 64-bit key from the user's password. This key is used by the authentication server to encrypt the ticket-granting ticket plus session key. The key generated by the authentication server is then used by the client at transaction time, through the transaction server, to establish whether the transacting client is valid.

Key Words: secret key, cryptography, authentication, ticket, session key

1. INTRODUCTION

With the advent of the computer, the need for automated tools for protecting files and other information stored on the computer became evident [14]. This is especially the case for a shared system, such as a time-sharing system, and the need is even more acute for systems that can be accessed over a public telephone network, data network, or the internet. Computer and network security is important for the following reasons [16].

• To protect company assets: One of the primary goals of computer and network security is the protection of company assets. By "assets" we mean the hardware and software that constitute the company's computers and networks, together with the information housed on them.

• To gain a competitive advantage: Developing and maintaining effective security measures can provide an organization with a competitive advantage over its competition. Network security is particularly important in the arena of Internet financial services and e-commerce.

• To comply with regulatory requirements and fiduciary responsibilities: Corporate officers of every company have a responsibility to ensure the safety and soundness of the organization. Part of that responsibility includes ensuring the continuing operation of the organization. Accordingly, organizations that rely on computers for their continuing operation must develop policies and procedures that address organizational security requirements.

• To keep your job: Finally, to secure one's position within an organization and to ensure future career prospects, it is important to put into place measures that protect organizational assets. Security should be part of every network or systems administrator's job. Failure to perform adequately can result in termination. One thing to keep in mind is that network security costs money: it costs money to hire, train, and retain personnel; to buy hardware and software to secure an organization's networks; and to pay for the increased overhead and degraded network and system performance that result from firewalls, filters, and intrusion detection systems (IDSs). As a result, network security is not cheap.

1.1 KERBEROS

Kerberos is a network authentication protocol. It is designed to provide strong authentication for client/server applications by using secret-key cryptography [2][10]. The Internet is an insecure place. Many of the protocols used on the Internet do not provide any security. Tools to "sniff" passwords off the network are in common use by malicious hackers. Thus, applications which send an unencrypted password over the network are extremely vulnerable. Worse yet, other client/server applications rely on the client program to be "honest" about the identity of the user who is using it. Other applications rely on the client to restrict its activities to those which it is allowed to do, with no other enforcement by the server.

Some sites attempt to use firewalls to solve their network security problems. Unfortunately, firewalls assume that "the bad guys" are on the outside, which is often a very bad assumption. Most of the really damaging incidents of computer crime are carried out by insiders. Firewalls also have a significant disadvantage in that they restrict how your users can use the Internet. (After all, firewalls are simply a less extreme example of the dictum that there is nothing more secure than a computer which is not connected to the network and powered off!) In many places, these restrictions are simply unrealistic and unacceptable.

Kerberos was created by MIT as a solution to these network security problems. The Kerberos protocol uses strong cryptography so that a client can prove its identity to a server (and vice versa) across an insecure network connection [13]. After a client and server have used Kerberos to prove their identity, they can also encrypt all of their communications to assure privacy and data integrity as they go about their business.

1.1.1 Basic Concepts

The Kerberos protocol relies heavily on an authentication technique involving shared secrets [14]. The basic concept is quite simple: if a secret is known by only two people, then either person can verify the identity of the other by confirming that the other person knows the secret. For example, suppose that Alice often sends messages to Bob and that Bob needs to be sure that a message from Alice really has come from Alice before he acts on its information. They decide to solve their problem by selecting a password, and they agree not to share this secret with anyone else. If Alice's messages can somehow demonstrate that the sender knows the password, Bob will know that the sender is Alice.

The only question left for Alice and Bob to resolve is how Alice will show that she knows the password. She could simply include it somewhere in her messages, perhaps in a signature block at the end—Alice, Our$ecret. This would be simple and efficient and might even work if Alice and Bob can be sure that no one else is reading their mail. Unfortunately, that is not the case. Their messages pass over a network used by people like Carol, who has a network analyzer and a hobby of scanning traffic in hope that one day she might spot a password. So it is out of the question for Alice to prove that she knows the secret simply by saying it. To keep the password secret, she must show that she knows it without revealing it. The Kerberos protocol solves this problem with secret key cryptography. Rather than sharing a password, communication partners share a cryptographic key, and they use knowledge of this key to verify one another's identity. For the technique to work, the shared key must be symmetric: a single key must be capable of both encryption and decryption. One party proves knowledge of the key by encrypting a piece of information, the other by decrypting it.
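The shared-key proof of knowledge described above can be sketched with the JDK's built-in DES support. This is an illustration of the principle only, not the paper's implementation; the class, method and challenge string are invented names.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Sketch of proof-of-knowledge with a shared symmetric key: one party
// encrypts a challenge; if the other can decrypt it with the shared key,
// the sender must know the secret. All names here are illustrative.
public class SharedSecretDemo {

    static boolean authenticate() throws Exception {
        // The shared secret: a 56-bit DES key known to both parties.
        SecretKey shared = KeyGenerator.getInstance("DES").generateKey();
        byte[] challenge = "challenge-123".getBytes(StandardCharsets.UTF_8);

        // "Alice" encrypts the challenge with the shared key...
        Cipher enc = Cipher.getInstance("DES/ECB/PKCS5Padding");
        enc.init(Cipher.ENCRYPT_MODE, shared);
        byte[] sealed = enc.doFinal(challenge);

        // ...and "Bob" decrypts it; a successful round trip shows the
        // sender knew the key, without the secret ever crossing the wire.
        Cipher dec = Cipher.getInstance("DES/ECB/PKCS5Padding");
        dec.init(Cipher.DECRYPT_MODE, shared);
        return Arrays.equals(dec.doFinal(sealed), challenge);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(authenticate() ? "sender knows the shared secret"
                                          : "authentication failed");
    }
}
```

In real Kerberos the long-lived secret is not used directly like this; the KDC issues short-lived session keys inside tickets, but the encrypt/decrypt proof rests on the same principle.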



Figure 1: Functional block diagram of Kerberos



2. Literature Review

K. Aruna et al. (2010) aimed to establish a collaborative trust-enhanced security model for a distributed system in which a node, whether local or remote, is trustworthy. They also proposed a solution with trust policies as authorization semantics. Kerberos, a network authentication protocol, is used to ensure the security aspect when a client requests certain services. In the proposed solution, they also considered the issue of performance bottlenecks.

Steve Mallard (2010) described various authentication methods for protecting the assets on a network, such as username and password, biometric systems, and Kerberos.

Dr. Mohammad N. Abdullah & May T. Abdul-Hadi (2009) tried to establish secure communication between clients and a mobile-bank application server, in which customers can use their mobile phones to securely access their bank accounts, make and receive payments, and check their balances.

Hongjun Liu et al. (2008) discussed the potential server bottleneck problem when the Kerberos model is applied in large-scale networks, because the model uses centralized management. They proposed an authentication model based on Kerberos which tries to overcome the potential server bottleneck problem and can balance the load automatically.

Frederick Butler, Iliano Cervesato, Aaron D. Jaggard, Andre Scedrov and Christopher Walstad (2006) analysed the Kerberos 5 protocol and concluded that Kerberos supports the expected authentication and confidentiality properties, and that it is structurally sound; these results rely on a pair of intertwined inductions.

I. Cervesato, A. D. Jaggard, A. Scedrov and C. Walstad (2004) presented a formalization of Kerberos 5 cross-realm authentication in MSR, a specification language based on multiset rewriting. They also adapted the Dolev-Yao intruder model to the cross-realm setting and proved an important property for a critical field in a cross-realm ticket, and documented several failures of authentication and confidentiality in the presence of compromised intermediate realms.

2.1 Objective of the Study

Looking at the overall functioning of Kerberos, various modules need to be built to implement Kerberos as a whole for any network. For the authentication of any client there is a centralized authentication server, which generates a ticket for the client from the password by applying an encryption technique. Simultaneously, the authentication server passes a copy of the ticket to the respective data server. The ticket is unique for every data server and valid for only one session. Whenever a client wants to perform a transaction through the server, it has to send a message with that ticket; the server authenticates whether the client's ticket is right or wrong, and if the ticket is right it accepts the message or data sent by the client.

3. Research Methodology

To implement this project we used Java and NetBeans 5.5, because we found Java the most suitable language for network programming. We divided the project into three modules: the client, a user who wants to access the data server; the authentication server, the module used to generate a ticket and return it to the client, so that the data server can easily check whether a client arriving with a ticket is genuine; and the data server, the site where data is stored and can be used by clients. We used the concept of socket programming to implement the client, the authentication server and the data server.
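The three-module message flow described above rests on the standard ServerSocket/Socket pattern. The following self-contained sketch shows one request/response exchange of the kind used between the client and the authentication server; it is not the project's code, and the user name and "ticket" reply format are invented for illustration.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintStream;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch of the socket pattern used between the modules: a server accepts a
// connection, reads one line (the user id) and replies with one line (a
// stand-in for the ticket). Names and the reply format are illustrative.
public class SocketSketch {

    // Runs a one-shot server in a background thread, connects a client to it,
    // sends `user`, and returns the server's one-line reply.
    static String exchange(String user) throws Exception {
        ServerSocket welcome = new ServerSocket(0); // bind to any free port
        Thread server = new Thread(() -> {
            try (Socket conn = welcome.accept();
                 BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
                 PrintStream out = new PrintStream(conn.getOutputStream(), true)) {
                out.println("ticket-for-" + in.readLine());
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        server.start();

        String reply;
        try (Socket client = new Socket("localhost", welcome.getLocalPort());
             PrintStream out = new PrintStream(client.getOutputStream(), true); // autoflush
             BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()))) {
            out.println(user);
            reply = in.readLine();
        }
        server.join();
        welcome.close();
        return reply;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(exchange("alice")); // prints: ticket-for-alice
    }
}
```

Note the autoflush flag on the PrintStreams: without it, `println` can leave the request sitting in the buffer while both sides block on `readLine`.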


For the generation of the ticket, the authentication server uses the Data Encryption Standard (DES), which operates on 64-bit plaintext blocks with a 56-bit key.

4. Implementation

The whole project is divided into three modules: the client site, the authentication server and the data server.

4.1 Client Module

A client is any user who can apply to any data server for service. The obvious security risk is that of impersonation. An opponent client can pretend to be another client and obtain unauthorized privileges on the data server sites. To counter this threat, data servers must be able to confirm the identities of clients who request service. We followed these steps:

• The client logs on to its own terminal using a user name and password. These user names and passwords are predefined and assigned to every client on the network. Every client has a unique user name with two passwords: one password is used to log on to the client terminal, and the other, called the transaction password, is submitted to the authentication server.

Code Section

    Class.forName("com.mysql.jdbc.Driver").newInstance();
    Connection con = DriverManager.getConnection("jdbc:mysql://localhost/test?" + "user=root&password=garima");
    Statement stmt = con.createStatement();
    ResultSet rs = stmt.executeQuery("select * from user where userid='" + usertxt.getText() + "'");
    String u, p;
    if (rs == null) {
        JOptionPane.showMessageDialog(this, "User Name is Wrong");
    }
    u = rs.getString(1);
    p = rs.getString(2);
    if (p.compareTo(passtx.getText()) != 0) {
        JOptionPane.showMessageDialog(this, "Password Wrong");
    }

• After a successful login the client submits its details, with the transaction password, to the authentication server. The details include the user name, the transaction password and the name of the data server.

• The entered transaction password is checked against the client database and then sent to the authentication server.

Code Section

    Class.forName("com.mysql.jdbc.Driver").newInstance();
    Connection con = DriverManager.getConnection("jdbc:mysql://localhost/test?" + "user=root&password=garima");
    Statement stmt = con.createStatement();
    ResultSet rs = stmt.executeQuery("select * from user where userid='" + txtuser.getText() + "'");
    String u, p;
    if (rs == null) {
        JOptionPane.showMessageDialog(this, "User Name is Wrong");
    }
    u = rs.getString(1);
    p = rs.getString(3);
    if (p.compareTo(txtpas.getText()) != 0) {
        JOptionPane.showMessageDialog(this, "Password Wrong");
    } else {
        // writing data to the authentication server
        try {
            Socket clientSocket = new Socket("", 6789);
            PrintStream ps = new PrintStream(clientSocket.getOutputStream());
            DataInputStream dis = new DataInputStream(clientSocket.getInputStream());
            ps.println(serverip.getText());
            s = dis.readLine().toString();
            JOptionPane.showMessageDialog(this, s);
        } catch (Exception e) {
            JOptionPane.showMessageDialog(this, e.toString());
        }
    }

• After receiving the ticket from the authentication server, the client sends the message plus the ticket to the data server.

Code Section

    clientSocket2 = new Socket("", 7211);
    PrintStream pserver = new PrintStream(clientSocket2.getOutputStream());
    DataInputStream diserver = new DataInputStream(clientSocket2.getInputStream());
    // sending message to data server
    pserver.println(servermsg.getText());
    pserver.println(s);
    JOptionPane.showMessageDialog(this, diserver.readLine());

4.2 Authentication Server Module

The authentication server is a central authority that knows the passwords of all clients and stores them in a centralized database. In addition, the AS shares a unique secret key with each server [14]. These keys have been distributed physically or in some other secure manner. For example, the user logs on to a workstation and requests access to server V: the client module C in the user's workstation requests the user's password and then sends a message to the AS that includes the user's ID, the server's ID and the user's password. The AS checks its database to see whether the user has supplied the proper password for this user ID and whether this user is permitted access to server V. If both tests are passed, the AS accepts the user as authentic and creates a ticket, which is then sent back to C. For the encryption we used the DES algorithm.

The following steps are included in this module:

• After the start of the authentication server as well as the data server, the authentication server can accept any request coming to its port address from the client side.

Code Section

    welcomeSocket = new ServerSocket(6789);
    connectionSocket = welcomeSocket.accept();
    System.out.println("Client :" + connectionSocket);
    PrintStream ps = new PrintStream(connectionSocket.getOutputStream());
    DataInputStream dis = new DataInputStream(connectionSocket.getInputStream());
    clientname = dis.readLine();
    clientSentence = dis.readLine();
    servername = dis.readLine();

• Generation of the ticket using the DES algorithm; the server stores a copy of the ticket in its own database and sends a copy to the server site.

Code Section

    key = KeyGenerator.getInstance("DES").generateKey();
    DesEncrypter ds = new DesEncrypter(key, clientSentence);
    String enc = ds.encrypt(clientSentence);
    pstm.setString(1, clientname);
    pstm.setString(2, enc);
    pstm1.setString(1, clientname);
    pstm1.setString(2, enc);
    PrintStream pserver = new PrintStream(cs.getOutputStream());
    ps.println(enc);

4.3 Data Server Module

Having received the ticket, the client can now apply to the server for service. The client sends a message to the server containing its ID and the ticket. The server decrypts the ticket and matches it with the ticket stored in the database. If the two match, the server considers the user authenticated and grants the requested service.

The following steps are included in this module:

• The client sends a message with the ticket to the data server after receiving the ticket from the authentication server.

Code Section

    clientSocket2 = new Socket("", 7211);
    PrintStream pserver = new PrintStream(clientSocket2.getOutputStream());
    DataInputStream diserver = new DataInputStream(clientSocket2.getInputStream());
    JOptionPane.showMessageDialog(this, diserver.readLine());
    clientSocket2.close();

• The data server verifies the ticket; after verification it sends a message back to the client saying whether or not the client is authentic.

Code Section

    k1 = dclient.readLine();
    ResultSet rs = stmt.executeQuery("select * from dser where keyid='" + k1 + "'");
    if (rs != null)
        ps.println("authenticated client");
    else
        ps.println("Not authorized");
    System.out.println(msg);
    new job1(consoc);
    ps.close();

5. Conclusion

Authentication is critical for the security of computer systems. Without knowledge of the identity of a principal requesting an operation, it is difficult to decide whether the operation should be allowed. Traditional authentication methods are not suitable for use in computer networks where attackers monitor network traffic to intercept passwords. The use of strong authentication methods that do not disclose passwords is imperative. The Kerberos authentication system is well suited for authentication of users in such environments.

In an unprotected environment, any client can apply to any server for service, which carries the security risk of impersonation: an opponent can pretend to be another client and obtain unauthorized privileges on server machines. In the above scheme the transaction is highly secure, in the sense that the authentication server creates a ticket which is further encrypted using the secret key shared by the server and the authentication server. This ticket is then sent back to the client. Because the ticket is encrypted, it cannot be altered by the client or by an opponent.



References

[1] K. Aruna et al., "A new collaborative trust enhanced security model for distributed systems", International Journal of Computer Applications, No. 26, 2010.
[2] Steve Mallard, "Methods of authentication", Bright Hub, 2010.
[3] Hongjun Liu et al., "A distributed expansible authentication model based on Kerberos", Journal of Network and Computer Applications, Vol. 31, Issue 4, 2008.
[4] Mohammad N. Abdullah & May T. Abdul-Hadi, "A Secure Mobile Banking Using Kerberos Protocol", Engineering & Technology Journal, Vol. 27, No. 6, 2009.
[5] "How Kerberos Authentication Works", Network on line magazine, Jan. 2008.
[6] "How Kerberos Authentication Works", Learn Networking on line magazine, Jan. 2008.
[7] Frederick Butler, Iliano Cervesato, Aaron D. Jaggard, Andre Scedrov and Christopher Walstad, "Formal Analysis of Kerberos 5", Sep. 2006.
[8] Rong Chen, Yadong Gui and Ji Gao, "Modification on Kerberos Authentication Protocol in Grid Computing Environment", Vol. 3032, 2004.
[9] I. Cervesato, A. D. Jaggard, A. Scedrov and C. Walstad, "Specifying Kerberos 5 cross-realm authentication", Vol. 3032, 2004.
[10] "Security of Network Identity: Kerberos or PKI", System News, Vol. 56, Issue II, 2002.
[11] Ian Downnard, "Public-key cryptography extensions into Kerberos", IEEE Potentials, 2002.
[12] B. Clifford Neuman and Theodore Ts'o, "Kerberos: An Authentication Service for Computer Networks", IEEE Communications, 32(9):33-38, 1994.
[13] MIT Kerberos Website.
[14] William Stallings, Cryptography and Network Security, Third Edition.
[15] Ravi Ganesan, "Yaksha: Augmenting Kerberos with Public Key Cryptography".
[16] John E. Canavan, Fundamentals of Network Security.
[17] Chris Brenton with Cameron Hunt, Active Defense: A Comprehensive Guide to Network Security.



Cultural Issues and Their Relevance in Designing Usable Websites Alao Olujimi Daniel1, Awodele Oludele2, Rehema Baguma3, and Theo van der Weide4 1. Computer Science & Mathematics Department, Babcock University, Illishan-Remo, Nigeria* 2. Computer Science & Mathematics Department, Babcock University, Illishan-Remo, Nigeria* 3. Faculty of Computing & Information Technology, Makerere University, Kampala, Uganda 4. Radboud University, Institute for Computing and Information Sciences. Nijmegen, The Netherlands.

Abstract— Cultural characteristics of users play a significant role in their interaction with and understanding of web-based systems. Hence, consideration of cultural issues in the design of a web-based system can improve its usability. The relation between culture and the internet is symbiotic: experience obtained from using the internet (with its rich cultural diversity) can in turn influence the local culture, which makes culture a moving target. However, to date not much research has been done on which cultural issues influence the usability of websites, and to what degree. This paper examines theoretically the cultural issues that influence web design and usability, the significance of this influence for the general usability of a website, and how culture can be utilized to develop more usable websites. The main contribution of this study is to identify what characterizes usable websites with reference to the cultural needs of the user, and the specific web features, applicable to each cultural dimension, that can enhance cultural understanding and help web designers customize websites to specific cultures.

Keywords: Human Computer Interaction (HCI), Web Usability, Culture/User Centered Design, Cultural dimensions.

1. Introduction

As the World Wide Web spreads across countries, it has become increasingly important for designers to respect and understand cultural differences in how people communicate and use the Internet. This knowledge is particularly crucial for people in international business, technology professions, and other work areas that require people from different cultures to interact online (Sapienza, 2008).

According to the International Telecommunications Union, as of December 31, 2009 the number of internet users had increased by 399.3 percent since the year 2000. A survey by Forrester Research indicated that North American consumers alone spent $172 billion shopping online in 2005, up from $38.8 billion in 2000, and by 2010 consumers were expected to spend $329 billion each year online.

With the number of online consumers on the Web steadily increasing, there is a need to seek a better understanding of user cultural preferences in design elements. The results of an online experiment that exposed American and Chinese users to sites created by both Chinese and American designers indicated that users perform information-seeking tasks faster when using web content created by designers from their own cultures (Faiola and Matei, 2005). In examining user satisfaction, Evers and Day (1997) found that most users (67.9%) would be more satisfied using an interface with technology adapted to their culture.

Website usability is to a large extent affected by the culture of the user; in other words, there is a relationship between culture and usability, or "culturability" as it is termed by Barber and Badre (2001). They argue that the success of an interface is achievable when the user interface design reflects the cultural characteristics of the target audience. Ease of use with cultural acceptability has become a pre-eminent requirement in designing software and other computer applications, and to meet this necessity "culturability" has emerged as a serious field of research. According to Nantel and Glaser (2008), a "culturally adapted website results in greater ease of navigation and a more positive attitude towards the site", thus indicating ease of use.

"No longer can issues of culture and usability remain separate in design for the World Wide Web. Usability must be re-defined in terms of a cultural context, as what is user-friendly for one culture can be vastly different for another culture, and usability must therefore take on a cultural context."1

The merging of culture and usability in website design, or culturability as termed by Barber and Badre (2001), challenges the idea of usability as being culturally neutral by claiming that cultural values such as thought patterns and customs directly affect the way a user interacts with a website, and hence its usability.

Presently, few information systems such as application software with graphical user interfaces, government websites, online shopping sites and even corporate websites satisfy both usability and cultural criteria, resulting in a lot of frustration among users. The reason is that the design of these information systems is technology-centered: the cultural needs of the users have not been taken into consideration during the development process. Interacting with a website is a form of communication. For a website to achieve successful communication with its users, two variables need to be considered: the language in which it is coded and the context in which the information is embedded. If these are not shared by the system designer and the users, meanings will differ, and efficient communication will not be achieved (Mantovani, 2001). While language can be easily determined, context identification can be a complex task. Language does not mention what is commonly known; culture at least provides extra context, namely that which is commonly known by the people sharing that culture. Furthermore, we communicate by using symbols, but symbols are very culture-dependent. Finally, the look and feel of a website is derived from common strategies to solve a problem, and a way to approach this is based on culture, because culture allows clustering people into groups that share common characteristics and traits. This paper discusses the cultural issues that influence web usability and how culture can be utilized to develop more usable websites. It explores the meaning of usability and culture, and investigates the ways in which objective and subjective cultural issues affect the usability and design of websites.

1.1 Objectives of this study

The major goals of this paper are as follows:
• To find out the cultural issues that influence web usability
• To establish how websites can be adapted to meet the cultural needs of users
• To establish how culture can be utilized to develop more usable websites

2. Web Usability and Culture

2.1 Website Usability

There are many definitions of usability proposed by various individuals, but there is no common definition of usability that is generally accepted within the HCI community. Preece et al. (1994) defined usability as "a measure of the ease with which a system can be learned or used, its safety, effectiveness and efficiency, and attitude of its users towards it". Nielsen (1993) defined the usability of a computer system in terms of the following attributes: learnability, efficiency, memorability, errors, and satisfaction. On the other hand, ISO 9241-11 defines usability as "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use". From these definitions it can be concluded that the usability of a website is generally concerned with making website interfaces that are easy to use, or user-friendly.

1 Wendy Barber and Albert Badre, Graphics, Visualization & Usability Centre, Georgia Institute of Technology, Atlanta.
2.2 Culture

There is a wide range of definitions of culture, varying across authors and time. As Kluckhohn (1962) states, culture is a set of definitions of reality, including language, values and rules that set the limits for behavior, held in common by people who share a distinctive way of life. Evers and Day (1997) affirm that culture shapes the way people behave, view the world, communicate and think; it is formed by historical experiences and values, traditions and surroundings. Hall (1959) holds that culture stands for a frame of reference developed by a group of people and used to understand each other; for him, the key issues in developing this frame are ways of life, behavioral patterns, attitudes and material objects. When a group of people, no matter its scale, starts sharing common ways of thinking, feeling and living, culture emerges (Keiichi Sato & Kuosiang Chen, 2008). The word culture comes from the Latin word "colere" (to inhabit, cultivate). The original meaning was used in the biological sciences (for example, a bacterial culture). In the mid-to-late 19th century, the term came to be applied to the social development of humans (Sapienza, 2008). Ernest Gellner (1997) gave the most commonly accepted meaning, calling culture "the socially transmitted and sometimes transformed bank of acquired traits". Although culture is a social phenomenon, biological characteristics are often connected to it. For example, we see people of a particular gender, age, skin color, or body type (height, weight, etc.) and we assume they must belong to a particular culture (Sapienza, 2008).

Fig 1: Classification of Culture

2.3 Classification of Culture

Culture can be broadly categorized into objective and subjective culture, as shown in Figure 1. Objective culture is the visible, tangible and easy-to-examine aspect of culture, represented in terms of text orientation, metaphor, date and number formats, page layout, color and language, while subjective culture is "the psychological features of a culture, including assumptions, beliefs, values, and patterns of thinking" (Hoft, 1996).

2.4 Cultural Models

Cultural models consist of cultural variables, which can focus on easy-to-research objective information such as political and economic contexts, reading directions and formats for dates and numbers. Cultural variables can also focus on subjective information, such as value systems and behavioral patterns.

Hoft (1996) identified four models of culture, developed by Hofstede, Hall, Trompenaars and Victor. Hofstede's model describes the patterns of thinking, feeling and acting that form a culture's mental model. Edward T. Hall's model deals with determining what releases the right responses for effective communication. Fons Trompenaars developed a model of culture aimed at determining the way in which a group of people solves problems. David A. Victor's model addresses the aspects of culture that affect communication in a business setting. These models identify a number of cultural dimensions that are used to illustrate the respective models of culture. Because of space limitations, and because some dimensions are common to several models, a description of a few of the cultural dimensions and their definitions is given in Table 1, and the cultural models and their dimensions are shown in Table 2.

Table 1. Cultural dimensions and their definitions

Power Distance, PD (Hofstede): the extent to which people accept unequal power distribution in a society.

Individualism/Collectivism, IC (Hofstede): the extent to which people prioritize their individuality versus their willingness to submit to the goals of the group.

Masculinity/Femininity, MASFEM (Hofstede): the extent to which a culture exhibits traditionally masculine or feminine values.

Uncertainty Avoidance, UA (Hofstede, Trompenaars): the extent to which a society willingly embraces or avoids the unknown.

Time Orientation (Hofstede, Trompenaars, Hall, Victor): present in all four models; concerns people's orientation towards past, present and future, and stands for the fostering of virtues oriented towards future rewards, in particular perseverance and thrift.

Universalism/Particularism (Trompenaars): the degree to which people in a country weigh generalist rules about what is right against more situation-specific relationship obligations and unique circumstances.

Neutral vs. Emotional Relationship Orientations (Trompenaars): the degree to which people in a country prefer 'objective' and 'detached' interactions over interactions in which emotion is more readily expressed.

Achievement vs. Ascription (Trompenaars): the degree to which judgments of others rest on actual individual accomplishments (achievement-oriented societies), as opposed to status being ascribed on grounds of birth, group membership or similar criteria.

Specific vs. Diffuse Orientations (Trompenaars): the degree to which people in a country engage in business relationships in which private and work encounters are demarcated and 'segregated out'.

Context (Hall, Victor): the amount of information given in a message. In a high-context message much is said and the information is detailed; in a low-context message little is said and the information is left implicit.

Table 2. Cultural models and their dimensions (adapted from Hoft, 1996)

Hofstede: Power Distance; Masculinity vs. Femininity; Individualism vs. Collectivism; Uncertainty Avoidance; Time Orientation.
Trompenaars: Universalism vs. Particularism; Neutral or Emotional; Individualism vs. Collectivism; Specific vs. Diffuse; Achievement vs. Ascription; Time; Environment.
Hall: Speed of Messages; Context; Space; Time; Information Flow; Action Chains.
Victor: Language; Environment and Technology; Social Organisation; Contexting; Authority Conception; Nonverbal Behaviour; Temporal Conception.

This paper adopts Hofstede's five dimensions of culture for the investigation of the subjective cultural aspect of this study.
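Hofstede's dimensions lend themselves to a compact data representation. The sketch below stores a five-dimension profile per culture and highlights where two cultures differ most; the scores shown are illustrative placeholders on a 0-100 scale, not Hofstede's published index values, and the culture names are deliberately generic.

```python
# Hofstede's five dimensions as a simple cultural profile.
# Scores are illustrative placeholders, NOT Hofstede's published indices.
HOFSTEDE_DIMENSIONS = ("PD", "IC", "MASFEM", "UA", "LTO")

profiles = {
    "culture_a": {"PD": 80, "IC": 20, "MASFEM": 66, "UA": 30, "LTO": 90},
    "culture_b": {"PD": 35, "IC": 90, "MASFEM": 60, "UA": 45, "LTO": 25},
}

def dimension_gap(a, b):
    """Per-dimension absolute difference between two cultural profiles,
    a crude indicator of where a localized design may need to diverge."""
    return {d: abs(a[d] - b[d]) for d in HOFSTEDE_DIMENSIONS}

gap = dimension_gap(profiles["culture_a"], profiles["culture_b"])
largest = max(gap, key=gap.get)   # dimension with the biggest difference
```

A designer localizing a site from culture_a to culture_b would then pay most attention to the dimensions with the largest gaps.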

3 Related research

Marcus and Gould (2001), in their paper on cultural dimensions and global web design, discussed the impact of culture on website design. They examined how Hofstede's five dimensions of culture might affect user interface design. By drawing on the Internet sites of several corporate and non-corporate entities of differing nationalities (e.g., Britain, Belgium, Malaysia, the Netherlands, Costa Rica, and Germany), the authors concluded that societal levels of power distance, individualism, masculinity, uncertainty avoidance, and long-term orientation are reflected in several aspects of user-interface and web design.

Hofstede's dimensions of culture are often quoted in relation to cultural usability. The model has gained wide acceptance among anthropologists and has been proposed as a framework for cross-cultural HCI design (Vöhringer-Kuhnt, 2001). Hofstede viewed culture as 'programming of the mind', in the sense that certain reactions are more likely in certain cultures than in others, based on differences between the basic values of the members of different cultures. Hofstede proposed that all cultures can be characterized through the following dimensions: Power Distance (PD), Individualism vs. Collectivism (IC), Masculinity vs. Femininity (MASFEM), Uncertainty Avoidance (UA), and Long-Term vs. Short-Term Orientation (LTO). (See Table 1 above for an explanation of the dimensions.)

Evers and Day (1997), in a more comprehensive study of usability and culture, found culture to be an important factor in perceptions of efficiency, effectiveness, satisfaction, and user behavior when using a software application. They discovered differences between Chinese and Indonesian users in terms of user interface acceptance, and concluded that culture is likely to influence many elements affecting the usability of a product. Nantel and Glaser (2008) demonstrated that the perceived usability of a website increases when the website was originally conceived in the native language of the user: translation, even of excellent quality, created a cultural distance which impacted users' evaluations of site usability. A similar result from Information Retrieval is that documents are best searched in the language in which they were written. While evaluating the quality of an offer on the web, however, language had little or no impact on the evaluation.

Vöhringer-Kuhnt (2001) investigated cultural influences on the usability of globally used software products through an online survey. The overall results revealed differences in attitudes towards usability across members of different national groups. The study concluded that only Hofstede's Individualism/Collectivism dimension was significantly connected to attitudes towards product usability, and that further research is needed to establish the value of Hofstede's culture-specific variables for the cultural design and evaluation of software and web applications.

Barber and Badre (2001) posited the existence of prevailing interface design elements and website features within a given culture, called cultural markers. These are design elements and features such as color preferences, fonts, shapes, icons, metaphors, language, flags, sounds, motion, preferences for text vs. graphics, the directionality of how language is written, help features, and navigation tools that are prevalent, and possibly preferred, within a particular cultural group. Such markers signify a cultural affiliation. They examined the cultural markers of websites from different nations and cultures by grouping several websites according to their language, nation and genre and manually inspecting each cluster for recurrent design preferences. They concluded that websites that contain the cultural markers of their target audience are considered more acceptable by users of the underlying culture.

Andy Smith et al. (2003) posited the concept of cultural attractors to define the interface design elements of a website that reflect signs and their meanings in order to match the expectations of a local culture. These cultural attractors include colours, banner adverts, trust signs, use of metaphor, navigation controls and similar visual elements that together create a look and feel matching the cultural expectations of the users for that particular domain.
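The idea of cultural markers and attractors suggests a simple audit: list the design elements a target culture is said to expect, then compare a page against that list. A minimal, hypothetical sketch follows; the element names and expected values are invented for illustration and are not taken from Smith et al. (2003) or Barber and Badre (2001).

```python
# Hypothetical audit of a page's design elements against the cultural
# markers/attractors expected by a target culture (names are illustrative).
expected = {"colour_scheme": "green", "text_direction": "rtl",
            "trust_sign": "local_bank_logo", "date_format": "dd-mm-yyyy"}

page = {"colour_scheme": "green", "text_direction": "ltr",
        "trust_sign": "local_bank_logo", "date_format": "mm-dd-yyyy"}

def audit(page, expected):
    """Return markers where the page diverges from cultural expectations,
    as {marker: (found, expected)} pairs."""
    return {k: (page.get(k), v) for k, v in expected.items()
            if page.get(k) != v}

mismatches = audit(page, expected)
```

The output lists only the diverging markers, which is exactly the kind of cluster-by-cluster inspection Barber and Badre performed manually.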



Shen et al. (2006) suggested culture-centred design (CCD), in which the design process is concentrated around the target user and his or her specific cultural conditions. The design process is characterized by iterative analyses, which check the design choices in each phase of the design process for cultural appropriateness, relevance, semiotics, functionality and usability. They also introduced the idea of a 'cultural filter', derived from the book 'Psychoanalysis and Zen Buddhism' by Erich Fromm (German philosopher, 1900–1980).

The main gaps found in these few previous researches are:

• Most of the studies could not conclude whether the various dimensions of culture applied in their research have an influence on the overall usability of a website or an interface.
• The results of the numerous researches on culture and web design did not recommend how culture can be utilized to develop a usable website.

The next section discusses cultural issues that influence Web usability and how understanding the culture of a given community can be utilized to develop more usable websites.

4 Cultural Issues in Web Design and Usability

Several frameworks (Barber & Badre, 2001; Sapienza, 2008; Tanveer et al., 2009; Smith et al., 2003), to mention a few, exist to show that there is a linkage between culture and web design/usability. Over the last few years, more and more localized versions of websites have been developed in order to address target national or cultural user groups. Culture is a major consideration when designing websites: not everybody reads or understands information in the same way, and culture plays an especially large role in how we view websites. Even a basic understanding of this principle is needed before designing sites that may be viewed by people from different cultures; the culture of the target audience is a major factor in the design process.

4.1 Influence of Objective Culture on Web Design and Usability

Objective culture is the visible, tangible and easy-to-examine aspect of culture, represented in terms of text orientation, metaphor, date and number formats, page layout, color and language (Hoft, 1996). The impact of objective cultural design elements such as language, color, metaphor and page layout is discussed next, as it is not possible to discuss all aspects of objective cultural elements in the present study.

4.1.1 Color

An objective cultural factor that should be considered when designing a website is the use of color. Color is connected to people's feelings and has different meanings in different cultures, and colors also carry important meanings in web design, where they are used for backgrounds, frames, images, hyperlinks, and so on. Website designers need to take into consideration the color preferences, and the meanings of various colors, for the targeted audience. Barber and Badre (2001) gave examples of the color-culture of different countries: red means happiness to the Chinese, anger/danger to the Japanese, death to Egyptians, and danger/stop to Americans. The use of color can also be associated with religion: the Judeo-Christian tradition is associated with red, blue, white and gold; Buddhism with saffron yellow; and Islam with green. Therefore, when designing a large-scale website, it is very helpful to conduct a survey and analysis of the color preferences of the target audience, and of the meanings of colors in that market, before designing the website.
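The color-culture associations above can be captured in a small lookup table that flags potentially problematic palette choices for a target audience. In the sketch below, the table holds only the example meanings quoted from Barber and Badre (2001); the function shape and the `warn_on` list are assumptions about how such a check might be wired up.

```python
# Color-culture associations from the examples quoted above
# (Barber & Badre, 2001); the lookup shape itself is just a sketch.
COLOR_MEANINGS = {
    "red": {"chinese": "happiness", "japanese": "anger/danger",
            "egyptian": "death", "american": "danger/stop"},
}

def check_palette(colors, audience,
                  warn_on=("death", "anger/danger", "danger/stop")):
    """Flag palette colors whose meaning for the audience may be negative."""
    warnings = []
    for c in colors:
        meaning = COLOR_MEANINGS.get(c, {}).get(audience)
        if meaning in warn_on:
            warnings.append((c, meaning))
    return warnings

flags = check_palette(["red", "blue"], "egyptian")
```

The same red palette passes for a Chinese audience but is flagged for an Egyptian one, which is precisely the survey-before-design point made above.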

4.1.2 Metaphor

One of the most important aspects of designing a culturally relevant interface is the accurate and deliberate use of metaphor. The metaphor is a powerful tool for translating the technical happenings that take place beyond the interface into a concept, appearing on the interface itself, that makes sense to the average user. The majority of software is developed in, or contracted by, the USA, and its interfaces have therefore been based primarily on American metaphors (Shen et al., 2006). A metaphor applied out of context is often open to misinterpretation. For example, the 'my computer' icon of MS Windows has proved to lead to much confusion, as it suggests ownership, which often is not the case. In some cultures the idea of something that can be retrieved from the trash bin after it has been deleted seems illogical and degrading (Shen et al., 2006). Successful interface metaphors should be developed or adapted to cultural requirements by, or with reference to, representatives of the culture for which they are intended (Shen et al., 2006).

4.1.3 Language

The most distinctive cultural symbol is language, which denotes the speech used by a particular group of people, including dialect, syntax, grammar and letterform (Tong and Robertson, 2008). Language is the building block from which users gain information from a website (Cyr and Trevor-Smith, 2003). Even though most website users can speak English, they are almost always more comfortable in their native languages. In a study of the multilingual needs of visitors to Tate Online, the website of Britain's Tate art galleries, Marlow et al. (2007) found that many individuals would appreciate having more content available in their own language, whether out of necessity or preference. However, the best means of providing this content depends on a variety of factors, including the pragmatic consideration of the resources available for translation. While some countries, especially Asian or developing countries, like to display their English-speaking abilities, others prefer to maintain their own native language for reasons of national pride; this is especially true in some European countries. Because English is one of the most widely used languages in the world, it is advisable to design a site in English and then incorporate a translator to translate it into the local language of the intended users.

4.1.4 Page Layout

Page layout is the physical arrangement of text elements and graphical elements on a web page; it varies from one culture to another and can therefore be described as a cultural component. The flow direction of a page, whether horizontal or vertical, also varies from one culture to another, and a good layout enhances understanding and hence the usability of a website. For example, France has a centered orientation, suggesting that features on a French site would most likely be centered on the page (Cyr and Trevor-Smith, 2003), while in Islamic countries page layout flows from top to bottom. The design of a website must also take into account text flow, which varies across cultures: the direction in which text in some languages is written can be unidirectional, as in English, or bidirectional, as in Arabic, and some languages are read from left to right while others are read from right to left. This must be taken into consideration when designing a web page layout.

4.2 Influence of Subjective Culture on Web Design and Usability

Hoft (1996) defined subjective culture as "the psychological features of a culture, including assumptions, beliefs, values, and patterns of thinking". Its influence on usability is a contentious issue in the field of Human-Computer Interaction (HCI), as some members of the discipline regard the failure to accommodate subjective culture in the design of interfaces as an important cause of decreased usability (Ford, 2005). Most research on the influence of subjective culture on usability has been inconclusive or without adequate results. The subjective-culture aspect of this study is based on Hofstede's framework as applied by Marcus and Gould (2001) to web and user interface design. Marcus and Gould took each of Hofstede's five cultural dimensions and identified the aspects of user interface design that can be influenced by that dimension, resulting in specific design recommendations for each dimension that can influence usability; due to space limitations, see Marcus & Gould (2001). The influence of each of Hofstede's cultural dimensions on web design and usability is as follows:

4.2.1 Power Distance

Marcus and Gould (2001) found that members of high power distance (PD) cultures, such as the Chinese, generally prefer a clear hierarchical navigational structure and exhibit a strong preference for symmetry in web design. On a Malaysian university website, for example, they point out evidence of high power distance, displayed through a concentration on the power structure of the university: a prominent area of the site is devoted to the university's seal and to graphics of items such as faculty, buildings, and administration. By contrast, the website of a university in the Netherlands, a low power distance culture, displays pictures of students rather than leaders and makes stronger use of asymmetrical layout, suggesting a less structured power hierarchy.

4.2.2 Individualism/Collectivism

According to Sudhair et al. (2007), in individualist societies such as the US and Australia, "I consciousness" prevails and the individual tends to have fairly weak ties with others; such users place great salience on website personalization. In collectivist societies such as Taiwan and Pakistan, people regard themselves as part of a larger group, such as the family or clan, and are more favorably disposed towards websites that make reference to the appropriate in-groups or use slogans to emphasize a national agenda.

4.2.3 Masculinity/Femininity

Masculine societies such as Japan and Austria tend to be hero worshippers, whereas feminine societies such as Sweden and the Netherlands tend to sympathize with the underdogs (Sudhair et al., 2007). Web documents in a masculine society should therefore contain references to characteristics such as success, winning, strength, and assertiveness, whereas in a feminine society web documents will contain information on charitable causes and family-oriented images.

4.2.4 Uncertainty Avoidance

Low uncertainty avoidance (UA) societies, like Denmark and Sweden, condition their members to handle uncertainty and ambiguity with relative ease and little discomfort (Sudhair et al., 2007), while members of high UA cultures (such as New Zealanders) prefer website navigation that prevents the user from becoming lost (Marcus and Gould, 2001). This can also be seen in high uncertainty avoidance societies like Japan and Belgium, which attempt to create as much certainty as possible in people's day-to-day lives through the imposition of procedures, rules and structure. Web documents in a high UA society will therefore contain references to precise and detailed information and to relevant rules and regulations.

4.2.5 Time Orientation

Long-term orientation is about being thrifty and sparing with resources and persevering towards slow results, whereas short-term orientation societies live in the present with little or no concern for tomorrow. Long-term orientation (LTO) societies such as China and Hong Kong tend to save more and exhibit more patience in reaping the results of their actions, whereas short-term orientation (STO) societies, like most West African nations and Norway, want to maximize present rewards and are relatively less prone to saving or anticipating long-term rewards (Sudhair et al., 2007). Web documents in an LTO culture will emphasize perseverance, future orientation, resources for conservation and respect for the demands of virtue, and will de-emphasize truth and falsity as a strictly binary, black-and-white relationship (Zahedi et al., 2001), while web documents from STO societies like Nigeria will show clean functional design aimed at achieving goals quickly.

5 Recommendations for designing to meet cultural needs

1. Understand the local culture
• Study the culture-specific demands on a website for the target culture.
• Identify culturally specific metaphors and the visual and representational aspects of the local culture.

2. Language factor
Even though most website users can speak English, they are almost always more comfortable in their native languages. It is advisable to design a site in English and then incorporate a translator to translate it into the local language of the intended users.

3. Basic web design elements (visual)
Simple symbols or icons that are commonly understood in the U.S. may confuse, or even insult, visitors from other regions. Icons and other visual elements are very specific to each country; thus, when using visual elements on web pages, country-specific understanding is needed. An example is the mailbox with a raised flag conveying "email": many local users may not recognize this little mailbox, while an envelope would convey the same message to them. Symbols can also have "unintended" or "hidden" meanings in other cultures.

4. Contact information
Names, postal addresses, phone numbers, fax numbers, etc. are important pieces of contact information. Website forms need to accommodate longer names, addresses, phone numbers, fax numbers and zip codes to satisfy the local needs of website users.

5. Currency
If a website offers any product or service for purchase, currency issues may arise with local visitors. When targeting a product at a specific audience, it is good practice to give a rough estimate of the price in the local currency.

6. Dates, time, and place
Dates are often critical pieces of information to be communicated online, and the American convention of month-day-year is not universally accepted, as day-month-year is used in many parts of the world. Time can be referenced internationally by the 24-hour system, so that 8:52 p.m. becomes a standardized 20:52. Time references, such as the hours of office operation, should be accompanied by the appropriate time zone or a reference to Greenwich Mean Time.
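Several of these recommendations, such as date order, 24-hour time, time-zone references and rough currency estimates, can be handled with a language's standard date/time facilities. A Python sketch follows; the exchange rate and the helper name are hypothetical, and a real site would fetch live rates and full locale data.

```python
from datetime import datetime, timezone

moment = datetime(2011, 2, 8, 20, 52, tzinfo=timezone.utc)

# Day-month-year vs. month-day-year renderings of the same date.
dmy = moment.strftime("%d-%m-%Y")   # widely used outside the US
mdy = moment.strftime("%m-%d-%Y")   # American convention

# 24-hour time with an explicit UTC/GMT reference, so "8:52 p.m."
# is communicated unambiguously as 20:52 UTC.
stamp = moment.strftime("%H:%M %Z")

# Rough local-currency estimate for a price, given an assumed
# (hypothetical) exchange rate.
def rough_local_price(usd, rate):
    return round(usd * rate, 2)

local = rough_local_price(19.99, 0.62)
```

Formatting the one canonical timestamp per audience, rather than storing locale-specific strings, keeps the underlying data unambiguous.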

6 Conclusion

The cultural characteristics of website users are a key factor in determining user acceptance of a website, yet current design practice takes little account of cultural issues during the design process. It is evident from the views presented in this paper that culture has a significant impact on how the user perceives a website. Incorporating cultural factors in the web design process is critical to achieving high-quality interaction between users and websites. That is why a better approach to designing websites should take into consideration both the cultural and the usability needs of the users.








1. Mantovani, G. (2001): The psychological construction of the Internet: From information foraging to social gathering to cultural mediation. CyberPsychology & Behavior, vol. 4, pp. 47–56.
2. Dianne Cyr, Joe Ilsever, Carole Bonanni, and John Bowes (2004?): Website Design and Culture: An Empirical Investigation.
3. Barber, W., & Badre, A. N. (2001): Culturability: The merging of culture and usability. 4th Conference on Human Factors and the Web, Basking Ridge, New Jersey, USA, Conference Proceedings.
4. Gellner, Ernest (1997): Nationalism. New York: New York University Press.
5. Hall, E. T. (1976): Beyond Culture. Garden City, NY: Doubleday.
6. Hofstede, G. (1980): Culture's Consequences: International Differences in Work-Related Values. Beverly Hills, CA: Sage Publications.
7. Sapienza, F. (2008): Culture and Context: A Summary of Geert Hofstede's and Edward Hall's Theories of Cross-Cultural Communication for Web Usability. Usability Bulletin, Issue No. 19.
8. Evers, V. and Day, D. (1997): The role of culture in interface acceptance. In S. Howard, J. Hammond and G. Lindegaard (Eds.), Human-Computer Interaction INTERACT'97. Chapman and Hall, London.
9. Hall, E. (1959): The Silent Language. Doubleday, New York.

10. Tanveer Ahmed, Haralambos Mouratidis, David Preston (2009): Website Design Guidelines: High Power Distance and High-Context Culture. International Journal of Cyber Society and Education, Vol. 2, No. 1, June 2009, pp. 47–60.
11. Aaron Marcus and Emilie W. Gould (2001): Cultural Dimensions and Global Web Design: What? So What? Now What? Proceedings of the 6th Conference on Human Factors and the Web, Austin, Texas.


12. Andy Smith, Lynne Dunckley, Tim French, Shailey Minocha, Yu Chang (2003): A process model for developing usable cross-cultural websites. Interacting with Computers, pp. 63–91. Elsevier.
13. Del Galdo, E. (1990): Internationalisation and translation: some guidelines for the design of human-computer interfaces. In J. Nielsen (Ed.), Designing User Interfaces for International Use. New York: Elsevier.
14. Fink, D. and Laupase, R. (2000): Perceptions of Web site design characteristics: a Malaysian/Australian comparison. Internet Research: Electronic Networking Applications and Policy, 10, pp. 44–55.
15. Vanessa Evers (2001): Cultural Aspects of User Interface Understanding. PhD Thesis.
16. Kluckhohn, C. (1962): Culture and Behaviour. Tucson: University of Arizona Press.
17. Keiichi Sato & Kuosiang Chen (2008): Special Issue Editorial: Cultural Aspects of Interaction Design. Vol. 2, No. 2.
18. Steve Wallace and Hsiao-Cheng Yu (2009): The effect of culture on usability: Comparing the perceptions and performance of Taiwanese and North American MP3 player users. Journal of Usability Studies, Volume 4, Issue 3, May 2009, pp. 136–146.
19. Ford and Kotzé (2003): Designing Usable Interfaces with Cultural Dimensions. Retrieved June 27, 2010.
20. Nantel, J., Glaser, E. (2008): The impact of language and culture on perceived website usability. Journal of Engineering and Technology Management, 25(1/2), pp. 112–122.
21. Faiola, A., and Matei, S. A. (2005): Cultural cognitive style and web design: Beyond a behavioral inquiry into computer-mediated communication. Journal of Computer-Mediated Communication, 11(1), article 18.
22. Vöhringer-Kuhnt, T. (2001): The influence of culture on usability. Master's thesis (paper draft). Retrieved July 24, 2009.
23. Zahedi, F. M., Van Pelt, W. V., & Song, J. (2001): A Conceptual Framework for International Web Design. IEEE Transactions on Professional Communication, 44(2), pp. 83–103. Retrieved June 23, 2010.
24. Sudhair H. Kale et al. (2007): Cultural adaptation on the web. Working Paper: Global Development Working Center, Bond University, Australia.
25. Nielsen, J. (1993): Usability Engineering. New York: Academic Press.
26. Ali H. Al-Badi and Pam J. Mayhew: A Framework for Designing Usable Localised Business Websites. Communications of the IBIMA.
27. Ford, G. (2005): Researching the effects of culture on usability. MSc Thesis.
28. Siu-Tsen, S., Woolley, M., Prior, S. (2006): Towards culture-centred design. Interacting with Computers, pp. 1–33. Elsevier B.V.
29. Preece, J., Rogers, Y., Sharp, H., Benyon, D., Holland, S. & Carey, T. (1994): Human-Computer Interaction. Workingham, England: Addison-Wesley.
30. Hoft, N. (1996): Developing a cultural model. In E. del Galdo and J. Nielsen (Eds.), International User Interfaces. New York: John Wiley and Sons.
31. Tong, M. C., & Robertson, K. (2008): Political and cultural representation in Malaysian websites. International Journal of Design, 2(2), pp. 67–79.
32. Marlow, Jennifer; Clough, Paul; Dance, Katie (2007): Multilingual needs of cultural heritage website visitors: A case study of Tate Online. International Cultural Heritage Informatics Meeting ICHIM07, Toronto, Ontario, Canada.



Software Cost Regression Testing Based on Hidden Markov Model

1 Mrs. P. Thenmozhi, 2 Dr. P. Balasubramanie
1 Assistant Professor, Kongu Arts and Science College, Erode – 638 107, Tamil Nadu, India
2 Professor & Head, Department of Computer Science and Engineering, Kongu Engineering College, Perundurai – 638 052, Tamil Nadu, India

Abstract— Maintenance of a software system accounts for much of the total cost associated with developing software. Modifying software is a highly error-prone task, which is the main reason for this cost. Correcting a fault by changing the software, or adding new functionality, can cause existing functionality to regress, introducing new faults. To avoid such defects, one can re-test the software after modifications; re-running the tests developed for previous versions is a task commonly known as regression testing. However, it is often costly and sometimes even infeasible due to time and resource constraints, and re-running test cases that do not exercise changed or change-impacted parts of the program carries extra cost and gives no benefit. This paper presents a novel framework for optimizing regression testing activities, based on a probabilistic view of regression testing. The proposed framework is built around predicting the probability that each test case finds faults in the regression testing phase, and optimizing the test suites accordingly. To predict such probabilities, we model regression testing using a Hidden Markov Model Network (HMMN), a powerful probabilistic tool for modeling uncertainty in systems. We build this model using information measured directly from the software system. The results show that the proposed framework can outperform other techniques in some cases and performs comparably in the others. This paper shows that the proposed framework can help testers improve the cost effectiveness of their regression testing tasks.

Keywords: Software testing, Testing tools, Regression testing, Software maintenance

1. Introduction

The nature of software systems is to evolve with time, especially as a result of maintenance tasks. Software maintenance is defined as "the modification of a software product after delivery to correct faults, to improve performance or other attributes, or to adapt the product to a modified environment".

The presence of a costly and long maintenance phase in most software projects, especially those manipulating large systems, has persuaded engineers that software evolution is an inherent attribute of software development. Moreover, maintenance activities are reported to account for high proportions of total software costs, with estimates varying from 50% in the 80s to 90% in recent years. Reducing such costs has motivated many advancements in software engineering in recent decades. The objective of maintenance is "to modify the existing software product while preserving its integrity". The latter part of the stated objective, preserving integrity, refers to an important issue raised as a result of software evolution. One needs to ensure that the modifications made to the product for maintenance have not damaged the integrity of the product: it is known from theory and practice that changing a system in order to fix bugs or make improvements can affect its functionality in ways not intended. These potential side effects can cause the software system to regress from its previously tested behavior, introducing defects called regression bugs. Although rigorous development practices can help isolate modifications, the inherent complexity of modern software systems prevents us from accurately predicting the effects of a change. Practitioners recognize this phenomenon and hence are reluctant to change their programs for fear of introducing new defects. Researchers have tried to find ways of analyzing the impact of a change on different parts of a system and predicting its effects. In the absence of formal representations of software systems, however, such attempts, although helpful, will not provide the required confidence levels.

Unless we are able to find regression bugs once they occur, software maintenance remains a risky task. Despite the introduction and adaptation of other verification methods (such as model checking and peer reviews), testing remains the main tool for finding defects in software systems. Naturally, retesting the product after

[…] modeling and reasoning in detail. Conclusions will be drawn in section 5.

2. Literature survey

In this section, research areas related to the topic of this paper are elaborated. The subject to start with is the problem in question, "Software Regression Testing". There exists an extensive body of research addressing this problem using many different approaches. This section takes a critical look at this line of research, trying to find strong points and ideas as well as gaps. Through this examination, many terms and concepts related to the software testing area will be introduced as well.

2.1 Software Regression Testing

Research in regression testing spans a wide range of topics. Earlier work in this area investigated different environments that can assist regression testing. Such environments particularly emphasize automation of test case execution in the regression testing phase. For example, techniques such as capture–playback have been proposed to help achieve such automation. Furthermore, test suite management and maintenance have been addressed by much research. Measurement

of regression testing process has also been researched

modifying it is the most common way of finding

extensively and many models and metrics have been

regression bugs. Such a task is very costly and requires

proposed for it. Most of the research work in this area,

great of organizational effort. This has motivated a great

however, has focused on test suite optimization.

deal of research to understand and improve this crucial aspect of software development and quality assurance.

Test suite optimization for regression testing

This paper is organized as follows. Literature

consists of altering the existing test suite from a previous

surveys are given in section 2. In section 3 we will



version to meet the needs of regression testing. Such an optimization intends to satisfy an objective function, which is typically concerned with reducing the cost of retesting and increasing the chance of finding bugs. A variety of techniques have been proposed for addressing this problem. Most of these techniques can be categorized into the two families of test case selection and test case prioritization. Regression test selection techniques reduce testing costs by including a subset of test cases in the new test suite. These techniques are typically not concerned with the order in which test cases appear in the test suite. Prioritization techniques, on the other hand, include all test cases in the new test suite but change their order so as to optimize a score function, typically the rate of fault detection. These two approaches can be used together; one can start by selecting a subset of test cases and then prioritize the selected test cases for faster fault detection. The rest of this section reviews test case selection approaches from the literature and then touches upon existing techniques for test case prioritization.

2.1.1 Test Case Selection

Test case selection, as the main mechanism of selective regression testing, has been widely studied using a variety of approaches. In a survey of techniques proposed up to 1996, Rothermel and Harrold [12] propose an approach for the comparison of selection techniques and discuss twelve different families of techniques from the literature accordingly. They evaluate each technique based on four criteria: inclusiveness (the extent to which it selects modification-revealing tests), precision (the extent to which it omits tests that are not modification-revealing), efficiency (its time and space requirements), and generality (its ability to function on different programs and situations). These four criteria, in principle, capture what we expect from a good test case selection approach. They inherently impose a trade-off situation, where proposed techniques usually satisfy one of the criteria at the expense of the others.

Main Approaches

An early trend in test selection research evolved around minimizing the test cases selected for regression. This approach, often called test case minimization, is based on a system of linear equations to find test suites that cover modified segments of code. Linear equations are used to formulate the relationship between test cases and program segments (portions of code through which test execution can be tracked, e.g., basic blocks or routines). This system of equations is formed based on matrices of test-segment coverage, segment-segment reachability and (optionally) definition-use information. An integer programming algorithm is used to solve the equations (an NP-hard problem) and find a minimum set of test cases that satisfies the coverage conditions. This approach is called minimization in the sense that it selects a minimum set of test cases to achieve the desired coverage criteria. In doing so, test cases that do cover modified parts of the code can be omitted because other selected test cases cover the same segments of the code.

A different set of approaches has focused on developing safe selection techniques. Safe techniques aim to select a subset of test cases which can guarantee, given certain preconditions, that the left-out test cases are irrelevant to the changes and hence will not reveal faults. The preconditions, as described in the literature, are:
• the expected results for test cases have not changed from the last version to the current version;
• different executions result in identical execution traces.

Safe techniques first perform change analysis to find what parts of the code can possibly be affected by the modifications. Then, they select any test case that covers any of the modification-affected areas of the code. Safe techniques are inherently different from minimization techniques in that they select all test cases that have a chance of revealing faults. In comparison, safe techniques usually result in a larger number of selected test cases but also achieve much better inclusiveness.

Many techniques are neither minimizing nor safe. These techniques typically use a certain coverage requirement on the modified or modification-affected parts of the code to decide whether a test case should be selected. For example, the so-called dataflow-coverage-based techniques select test cases that exercise data interactions (such as definition-use pairs) that have been affected by modifications. These selection techniques differ in two aspects: the coverage requirement they target and the mechanism they use to identify modification-affected code. For example, Kung et al. [10] propose a technique which accounts for the constructs of object-oriented languages. In performing change analysis, their approach takes into account object-oriented notions such as inheritance. The relative performance of these selection techniques tends to vary from program to program, a phenomenon that can be understood only through empirical studies.

Cost Effectiveness

Many empirical studies have evaluated the performance of test case selection algorithms. In general, these empirical studies show that there is an efficiency-effectiveness (or inclusiveness-precision, in their terminology) tradeoff between the different approaches to selection. Some (such as safe) techniques reduce the size of the test suite by a small factor but find most (or all) of the bugs detectable with the existing test cases. Others (such as minimization techniques) reduce the test suite size dramatically but can potentially leave out many test cases that can in fact reveal faults. Other techniques are somewhere in between; they may miss some faults but they reduce the test suite size significantly. The presence of such a tradeoff renders the direct comparison of techniques hard.

Comparing the cost effectiveness of regression testing techniques requires answering one fundamental question: is the regression effort resulting from the use of a technique justified by the gained benefit? To answer such a question, one needs to quantify the notions of costs and benefits associated with each technique. To that end, researchers have proposed several cost models. These models try to capture the costs encountered as a result of missing faults, running test cases, running the technique itself including all the necessary analysis, etc. The most recent of these models is that of Do et al. [5]. Their approach computes costs directly in dollars and hence is heavily dependent on good estimations of real costs from the field. An important feature of their model is that it can compare not only test case selection but also prioritization techniques. Most interestingly, it can compare selection techniques against prioritization techniques.
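The minimization idea discussed above can be viewed as a set-cover computation: pick a small set of test cases whose combined coverage includes every modified segment. The exact problem is usually posed as integer programming and is NP-hard; the sketch below substitutes the standard greedy approximation, with test and segment names invented for illustration:

```python
# Greedy approximation of test case minimization: repeatedly pick the test
# case that covers the most still-uncovered modified segments.
# Names (t1..t3, s1..s4) are illustrative, not from any real subject program.

def minimize(coverage, modified):
    """coverage: test -> set of covered segments; modified: segments to cover."""
    remaining = set(modified)
    selected = []
    while remaining:
        # the test case with the largest overlap with uncovered segments
        best = max(coverage, key=lambda t: len(coverage[t] & remaining))
        if not coverage[best] & remaining:
            break  # some modified segments are not covered by any test case
        selected.append(best)
        remaining -= coverage[best]
    return selected

coverage = {
    "t1": {"s1", "s2"},
    "t2": {"s2", "s3", "s4"},
    "t3": {"s4"},
}
print(minimize(coverage, {"s1", "s2", "s3", "s4"}))  # ['t2', 't1']
```

The greedy choice is only an approximation of the optimal (minimum) suite, which matches the text's observation that exact solutions require solving an NP-hard problem.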









The existence of the mentioned trade-off has also encouraged researchers to seek multi-objective solutions to the test selection problem. Yoo and Harman [16] have proposed pareto-efficient multi-objective selection techniques that use search algorithms to find the set of pareto-optimal solutions to two different representations of the problem: a 2-dimensional problem of minimizing execution time and maximizing code coverage, and a 3-dimensional problem of minimizing time and maximizing both code coverage and history of fault coverage. The authors compare their solutions to those of greedy algorithms and observe that greedy algorithms surprisingly can outperform genetic algorithms in this domain.

Coverage, as required by such code-based selection techniques, can be measured only if the underlying code is available and its instrumentation is cost-effectively possible. To be able to address more complex systems, where those conditions do not hold, some recent techniques have shifted their focus to artifacts other than code, such as software specifications and component models. These techniques typically substitute code-based coverage information with information gathered from formal (or semi-formal) representations of the software. Orso et al. [11], for example, use component metadata to analyze the modifications across large component-based systems. The trend in current test case selection research seems to be that of using new sources of information or formalizations of a software system to understand the impacts of modifications.

2.1.2 Test Case Prioritization

The test case prioritization problem seeks to re-order test cases such that an objective function is optimized. Different objective functions render different instances of the problem, a handful of which have been investigated by researchers. Besides the targeted objective functions, the existing body of prioritization techniques typically differs in the type of information exploited. The algorithm employed to optimize the targeted objective function is also another source of difference between the techniques.

Conventional Coverage-based Techniques

Test case prioritization is introduced in [18] by Wong et al. as a flexible method of selective regression testing. In their view, regression test prioritization (RTP) is different from test case selection and minimization in that it provides a means of controlling the number of test cases to run. They propose a coverage-based prioritization technique and specify cost per additional coverage as the objective function of prioritization. Given the coverage information recorded from a previous execution of test cases, this coverage-based technique orders test cases in terms of the coverage they achieve according to a specific criterion of coverage (such as the number of covered statements, branches, or blocks). Because the purpose of RTP in their work is selective regression testing, they compare its performance against minimization and selection techniques. The coverage-based approach to prioritization is built upon by Rothermel et al. in [14]. They refer to early fault detection as the objective of test case prioritization. They argue that RTP can speed up fault detection, an advantage besides the flexibility it provides for selective regression testing. Early detection makes faults less costly and hence is beneficial to the testing process. They introduce the Averaged Percentage of Faults Detected (APFD) metric to measure how fast a particular test suite finds bugs. They also introduce many variations of coverage-based techniques, using different criteria for coverage such as branch coverage, statement coverage, and fault-exposing-potential.

These coverage-based techniques differ not only in the coverage information they use, but also in their optimization algorithm. When ordering test cases according to their coverage, a feedback mechanism can be used. Here, feedback means that each test case is placed in the prioritized order taking into account the effect of the test cases already added to the order. A coverage-based technique with feedback prioritizes test cases in terms of the number of additional (not-yet-covered) entities they cover, as opposed to the total number of entities. This is done using a greedy algorithm that iteratively selects the test case covering the most not-yet-covered entities until all entities are covered, then repeats this process until all test cases have been prioritized. For example, assume we have a system with six elements e1, ..., e6, and the coverage relations between test cases and elements are as follows: t1 → {e2, e5}, t2 → {e1, e3}, t3 → {e4, e5, e6}. According to a coverage-based technique, the first chosen test case is t3 because it covers three elements, while the others cover two elements each. After selecting t3, two test cases are left, both of which cover two elements. In the absence of feedback, we would choose randomly between the remaining two. However, we know that e5 is already covered by t3; therefore t1 has merely one additional coverage, whereas t2 has two. After adding t3, we can update the model of coverage data such that the already tested elements do not affect subsequent selections. This allows choosing t2 before t1 based on its additional coverage. The notion of using additional coverage is what feedback provides, and such techniques are often labeled additional.

Many empirical studies have been conducted to evaluate the performance of the coverage-based approach [14], most of which use the APFD measure for comparison. These studies show that coverage-based techniques can outperform control techniques (including random and original ordering) in terms of APFD, but have significant room for improvement compared to optimal solutions. They also indicate that in many cases, feedback-employing techniques tend to outperform their non-feedback counterparts, an observation which could not be generalized to all cases. Indeed, an important finding of all these studies is that the relative performance of different coverage-based techniques varies with the program under test and the characteristics of its test suite. Inspired by this observation, Elbaum et al. [6] have attempted to develop a decision support system (using decision trees) to select a technique based on product/process characteristics.

Many research works have enhanced the idea of coverage-based techniques by utilizing new sources of information. Srivastava and Thiagarajan [1] propose the Echelon framework for change-based prioritization. Echelon first computes the basic blocks modified from the previous version (using binary codes) and then prioritizes test cases based on the number of additional modified basic blocks they cover. A similar coverage criterion, Modified Condition/Decision Coverage (MCDC), is utilized in related work. Elbaum et al. [6] use metrics of fault-proneness, called the fault-index, in order to guide their coverage-based approach to focus on the parts of the code more prone to containing faults. Recently, Jeffrey and Gupta [4] propose incorporating into prioritization a concept extensively used in test selection called relevant slices: sections of the code which also impact the outcome of a test case. Their approach prioritizes test cases according to the number of relevant slices they cover. Most recently, Zhang et al. [19] propose a technique which can incorporate varying test coverage requirements and prioritize accordingly. Their work also takes into account the different costs associated with test cases.
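The feedback mechanism and the running example above (t1, t2, t3 over elements e1–e6) can be sketched directly. This is an illustrative reading of the greedy "additional coverage" loop as described, not the exact implementation evaluated in the cited studies:

```python
# Additional-coverage ("feedback") prioritization: after each pick, elements
# already covered stop counting toward later picks, so t2 (two new elements)
# is preferred over t1 (one new element) once t3 has been placed.

def prioritize(coverage):
    order, covered = [], set()
    remaining = dict(coverage)
    while remaining:
        # pick the test case covering the most not-yet-covered elements
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        if not remaining[best] - covered:
            covered = set()  # all elements covered: reset, keep prioritizing
        order.append(best)
        covered |= remaining.pop(best)
    return order

coverage = {"t1": {"e2", "e5"}, "t2": {"e1", "e3"}, "t3": {"e4", "e5", "e6"}}
print(prioritize(coverage))  # ['t3', 't2', 't1']
```

The reset branch corresponds to the text's "repeats this process until all test cases have been prioritized"; tie-breaking here is simply dictionary order rather than the random choice mentioned above.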

Recent Approaches

Walcott et al. [17] formulate a time-aware version of the prioritization problem in which a limited time is available for regression testing and the execution times of test cases are known. Their optimization problem is to find a sequence of test cases that can be executed within the time limit and also maximizes the speed of code coverage. They use genetic algorithms to find solutions to this optimization problem. Their objective function is based on summations of the coverage achieved, weighted by execution times. Their approach could be thought of as a multi-objective optimization in which maximum coverage in minimum time is required.

Bryce et al. have proposed a prioritization technique for Event-Driven Software (EDS) systems. In their approach, the criterion of t-way interaction coverage is used to order test cases. The concept of interactions is defined in terms of events, and the approach is tested on GUI-based systems and against traditional coverage-based systems. Based on a similar approach, Sampath et al. target the prioritization of test cases developed for web applications. Their technique prioritizes test cases based on different criteria such as test lengths, frequency of appearance of request sequences, and systematic coverage of parameter-values and their interactions.

Coverage-based techniques assume the availability of source/byte code. They also assume that the available code can be instrumented to gather coverage information. These conditions do not always hold. The code could be unavailable or impractical to instrument; hence, researchers have explored using other sources of information for test case prioritization.

Taking a different approach from coverage-based techniques, Kim and Porter [9] propose using history information to assign a probability of finding bugs to each test case and prioritize accordingly. Their family of techniques can be adjusted to account for different history-based criteria such as history of execution, history of fault detection, and history of covered entities. These criteria, respectively, give precedence to test cases that have not been recently executed, have recently found bugs, and have not been recently covered. From a process point of view, the history-based approach makes the most sense when regression testing is performed frequently, as opposed to as a one-time activity. Kim and Porter evaluate their approach in such a process model (i.e., considering a sequence of regression testing sessions) and maintain that, compared to selection techniques and in the presence of time/resource constraints, it finds bugs faster.

Srikanth et al. [7] have proposed the PORT framework, which uses four different requirement-related factors for prioritization: customer-assigned priority, …, and fault proneness. Although the use of these factors is conceptually justifiable and based on solid assumptions, their subjective nature (especially the first and third) makes them dependent on the perceptions of customers and developers. While this makes such approaches hard to evaluate or rely on, it should be understood that it is the subjective nature of requirement engineering that imposes such properties. Also, their framework is not concerned with the specifics of regression testing but with prioritization in general.
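Kim and Porter's history-based idea above can be sketched as a running score per test case. The additive weighting below is an assumption for illustration; their actual formulation is probabilistic and differs in detail:

```python
# History-based prioritization sketch: test cases that have not run recently,
# or that found a fault when they last ran, float to the front of the order.

def history_score(sessions_since_run, found_fault_last_run):
    # illustrative weights, not taken from Kim and Porter's formulation
    return sessions_since_run + (2 if found_fault_last_run else 0)

history = {
    "t1": (0, False),  # ran last session, found nothing
    "t2": (4, False),  # idle for four sessions
    "t3": (1, True),   # recently found a fault
}
order = sorted(history, key=lambda t: history_score(*history[t]), reverse=True)
print(order)  # ['t2', 't3', 't1']
```

As the text notes, such a scheme only pays off when regression testing is a recurring activity, since the scores are refreshed after every session.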



Most recently, Qu et al. [2] use the history of test execution for black-box testing and build a relation matrix between test cases. This matrix is used to move the test cases up or down in the final order. Their approach also includes some algorithms for building and updating such a matrix based on the outcomes of test cases and the types of revealed faults. In addition to the research addressing the prioritization problem directly, there are research efforts closely related to this area but from different perspectives. Saff and Ernst use behavior modeling to infer developers' beliefs and propose a test reordering schema based on their models. They propose running test cases continuously in the background while the software is being modified. They claim their approach leads to reducing the wasted time of development by approximately 90%. Leon and Podgurski [3] compare coverage-based techniques of regression testing with another family called distribution-based. Distribution-based approaches look at the execution profiles of test cases and use clustering techniques to locate test cases that can reveal faults better. Their experiments indicate that the distribution-based approach can be as efficient as, or more efficient than, the coverage-based one. Leon and Podgurski, then, suggest combining these two approaches and report an achieved improvement using that strategy.

3. Probabilistic Modeling and Reasoning

Probability theory provides a powerful way of modeling systems. It is especially useful for situations where the effects of events in a system are not fully predictable and a level of uncertainty is involved. The behaviors of software systems are sometimes hard to model precisely, and hence probabilistic approaches can be helpful.

At the center of modeling a system with probability theory is identifying the events that can happen in the system and modeling them as random variables. Moreover, the probability distribution of these random variables also needs to be estimated. The events in real systems, and hence the corresponding random variables, can be dependent on each other. Bayes' theorem provides a basis for modeling the dependency between the variables through the concept of conditional probability: the probability distribution of random variables can be conditioned on others. This makes modeling systems more elaborate but also more complex, and several modeling techniques have been developed to facilitate such a complex task. Probabilistic graphical models are one family of such modeling techniques. A probabilistic graphical model aims to make the modeling of system events more comprehensible by representing independencies among random variables. A probabilistic graphical model is a graph in which each node is a random variable, and the missing edges between the nodes represent conditional independencies. Different types of graphical models employ different graph structures. One well-known family of graphical networks, used in this research work, is Hidden Markov Model Networks.

3.1 Hidden Markov Model Networks

Hidden Markov Model Networks (HMMN) are a special type of probabilistic graphical model. In a HMMN, like all graphical models, nodes represent random variables and arcs represent probabilistic dependencies among those variables. The missing edges of the graph, hence, indicate that two variables are conditionally independent. Intuitively, two events (e.g., variables) are conditionally independent if, once the value of some other events is known, knowledge of one gives no further information about the other. This is a fundamental notion here, because the idea behind graphical models is to capture these independencies. What differentiates a HMMN from other types of graphical models (such as Markov Nets) is that it is a Directed Acyclic Graph (DAG). That is, each edge has a direction and there should be no cycles in the graph. In a HMMN, in addition to the graph structure, the Conditional Probability Distribution (CPD) of each variable given its parent nodes should be specified. These probability distributions are often called the "parameters" of the model. The most common way of representing CPDs is using a table, called the Conditional Probability Distribution Table (CPT), for each variable (node). Each possible outcome of the variable forms a row, where each cell gives the conditional probability of observing that outcome given a combination of the outcomes of the parents of the node. That is, these tables include the probabilities of the outcomes of a variable given the values of its parents.

Designing a HMMN model is not a trivial task. There are two facets to modeling a HMMN: designing the structure and computing the parameters. Regarding the first issue, the first step is to identify the variables involved in the system. Then, the included and excluded edges should be determined. Here, the notions of conditional independence and causal relation can be of great help: conditionally independent variables are not connected to each other. One way to achieve that is to design based on causal relations; an edge from one node to another is added if and only if the former is a cause for the latter. For computing the parameters, expert estimations and statistical learning can be used. The learning approach has gained much attention in the literature due to its automatic nature. Here, learning means using an observed history of variable values to automatically build the model (either the parameters or the structure). Numerous algorithms have been proposed to learn a HMMN based on history data, some of which are presented in the literature.

One situation faced frequently when designing a HMMN is that one knows the conditional distribution of a variable given each of its parents separately, but does not have its distribution conditioned on all parents. In these situations, the Noisy-OR assumption can be helpful. The Noisy-OR assumption gives the interaction between the parents and the child a causal interpretation, and assumes that all causes (parents) are independent of each other in terms of their influence on the child.

The inference problem can get very hard in complex networks. There are two types of inference: forward (causal) inference, in which the observed variables are parents of the query variable, and backward (diagnostic) inference, from symptoms to causes. Inference algorithms typically perform both types of inference to propagate the probabilities from the observed variables to the query variables. Researchers have studied the inference problem in depth. It is known that in the general case the problem is NP-hard. Therefore, researchers have sought algorithms that perform better for special cases. For example, if the network is a polytree (a graph with at most one undirected path between any two vertices), inference algorithms exist that run in time linear in the size of the network. Also, approximate algorithms have been proposed which use iterative sampling to estimate the probabilities. The sampling algorithms sometimes run faster but do not give the exact answer. Their accuracy depends on the number of samples and iterations, a factor which in turn increases the running time.
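The Noisy-OR assumption above can be made concrete: given, for each parent cause, the probability that it alone triggers the child, the conditional probability for any combination of active parents is composed as below. The variable names and probabilities are invented for illustration:

```python
# Noisy-OR: P(child | active parents) = 1 - prod over active parents of
# (1 - p_i), where p_i is the probability that parent i alone causes the
# child. This avoids filling a full CPT row for every parent combination.

def noisy_or(cause_probs, active):
    """cause_probs: parent -> P(child | only that parent is active)."""
    p_none = 1.0  # probability that no active cause triggers the child
    for parent in active:
        p_none *= 1.0 - cause_probs[parent]
    return 1.0 - p_none

# e.g. two code changes, each with its own chance of breaking a test case
probs = {"change_a": 0.8, "change_b": 0.5}
print(noisy_or(probs, ["change_a"]))                         # 0.8
print(round(noisy_or(probs, ["change_a", "change_b"]), 2))   # 0.9
```

With two active causes the child fails unless both independent causes fail to trigger it, which is exactly the causal independence the text describes.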

5. Conclusion

This paper presented a novel framework for regression testing of software using Hidden Markov Model Networks (HMMN). The problem of software regression test optimization is targeted using a dynamic Bayesian network. The framework models regression fault detection as a set of random variables that interact through conditional dependencies. Software measurement techniques are used to quantify those interactions, and Hidden Markov Model Networks are used to perform probabilistic inference on the distributions of those random variables. The inference gives the probability of each test case finding faults; this data can then be used to optimize the test suite for regression.

References

1. Amitabh Srivastava and Jay Thiagarajan, "Effectively Prioritizing Tests in Development Environment", In Proceedings of the International Symposium on Software Testing and Analysis, pages 97-106, 2002.
2. Bo Qu, Changhai Nie, Baowen Xu and Xiaofang Zhang, "Test Case Prioritization for Black Box Testing", 31st Annual International Computer Software and Applications Conference (COMPSAC 2007), 2007.
3. David Leon and Andy Podgurski, "A Comparison of Coverage-Based and Distribution-Based Techniques for Filtering and Prioritizing Test Cases", Proc. Int'l Symp. Software Reliability Eng., pp. 442-453, 2003.
4. Dennis Jeffrey and Neelam Gupta, "Test Case Prioritization Using Relevant Slices", In Proceedings of the 30th Annual International Computer Software and Applications Conference, Volume 01, pages 411-420, 2006.
5. Do, H., Rothermel, G., and Kinneer, A., "Empirical studies of test case prioritization in a JUnit testing environment", In Proc. of the 15th ISSRE, pages 113-124, 2004.
6. Elbaum, S., Malishevsky, A. G., and Rothermel, G., "Prioritizing Test Cases for Regression Testing," Proc. Int'l Symp. Software Testing and Analysis, ACM Press, 2000, pp. 102-112.
7. Hema Srikanth and Laurie Williams, "Requirements-Based Test Case Prioritization", North Carolina State University, ACM SIGSOFT Software Engineering, pages 1-3, 2005.
8. Jung-Min Kim, Adam Porter and Gregg Rothermel, "An Empirical Study of Regression Test Application Frequency", ICSE 2000, 2000.
9. Jung-Min Kim and Adam Porter, "A History-Based Test Prioritization Technique for Regression Testing in Resource Constrained Environments", In Proceedings of the International Conference on Software Engineering (ICSE), pages 119-129, ACM Press, 2002.
10. Kung, D., Suchak, N., Hsia, P., Toyoshima, Y., and Chen, C., "On object state testing", In Proceedings of COMPSAC'94, IEEE Computer Society Press, 1994.
11. Orso, A., Harrold, M. J., Rosenblum, D., Rothermel, G., Soffa, M. L., and Do, H., "Using Component Metadata to support the regression testing of component-based software", In Proceedings of the International Conference on Software Maintenance (ICSM 2001), pp. 716-725, Florence, Italy, November 2001.
14. Rothermel, G., Untch, R. H., Chu, C. and Harrold, M. J., "Test case prioritization: An empirical study", In Proceedings of ICSM 1999, pages 179-188, Sept. 1999.
15. Rothermel, G. et al., "On Test Suite Composition and Cost-Effective Regression Testing," ACM Trans. Software Eng. and Methodology, vol. 13, no. 3, 2004, pp. 277-331.
16. Shin Yoo and Mark Harman, "Pareto efficient multi-objective test case selection", Proceedings of the 2007 International Symposium on Software Testing and Analysis, 2007, ISBN 978-1-59593-734-6.
17. Walcott, K. R., Soffa, M. L., Kapfhammer, G. M. and Roos, R. S., "Time-Aware Test Suite Prioritization", In Proceedings of the International Symposium on Software Testing and Analysis, pages 1-12, 2006.
18. Wong, W. E., Horgan, J. R., London, S., and Agrawal, H., "A Study of Effective Regression Testing in Practice," Proc. 8th Int'l Symp. Software Reliability Eng., 1998, pp. 264-274.
19. Xiaofang Zhang, Changhai Nie, Baowen Xu and Bo Qu, "Test Case Prioritization based on Varying Testing Requirement Priorities and Test Case Costs", Proceedings of the Seventh International Conference on Quality Software (QSIC'07), 2007.

Brief Bio-data of P. Thenmozhi

P. Thenmozhi completed her M.Phil degree at Mother Teresa Women's University, Kodaikanal, in 2004. She has completed 9 years of service in teaching. Currently she is an Assistant Professor in the Department of Computer Science, Kongu Arts and Science College, Tamilnadu, INDIA. She has guided 2 M.Phil students and has presented 5 papers in various conferences.

Brief Bio-data of Dr. P. Balasubramanie

Dr. P. Balasubramanie completed his M.Phil degree at Anna University, Chennai, in 1990. He qualified in the national level eligibility test conducted by the Council of Scientific and Industrial Research (CSIR) and joined Anna University, Chennai, as a Junior Research Fellow (JRF). He completed his Ph.D degree in Theoretical Computer Science in 1996. He has completed 15 years of service in teaching. Currently he is a Professor in the Department of Science & Engineering, Kongu Engineering College, Tamilnadu, INDIA. He is the recipient of the Best Staff Award for two consecutive years at Kongu Engineering College, and of the Cognizant Technology Solutions (CTS) Best Faculty Award 2008 for outstanding performance. He has published more than 80 research articles in international and national journals and has authored 7 books with reputed publishers. He has guided 6 part-time Ph.D scholars, and a number of scholars are working under his guidance on various topics such as image processing, data mining, and networking. He has organized several AICTE-sponsored national seminars and workshops.



Handoff scheme to enhance performance in SIGMA



B. Jaiganesh (1), Dr. R. Ramachandran (2)
(1) Research Scholar, ECE Department, Sathyabama University, Chennai
(2) Principal, Sri Venkateswara College of Engineering, Chennai

Abstract—Mobile Internet Protocol (MIP), an industry standard for handling mobility, suffers from high handover latency and packet loss, in addition to requiring changes in the network infrastructure. To overcome these problems, we earlier proposed a new approach called Seamless IP diversity based Generalized Mobility Architecture (SIGMA). Although SIGMA achieves a low latency handoff, the use of IP diversity results in some instability during handoff. In this paper, we propose a new handoff policy, called HANSIG-HR, to solve the instability problem of SIGMA. HANSIG-HR is based on the Signal to Noise Ratio (SNR), hysteresis and route cache flushing. Our experimental results show that HANSIG-HR improves the stability of SIGMA.

Keywords: Handoff Latency, MIP, SIGMA, Throughput, SNR, HANSIG, HANSIG-H, and HANSIG-HR.

I. INTRODUCTION

Mobile IP (Perkins [1]) is the standard proposed by the IETF to handle mobility of Internet hosts for mobile data communication. Mobile IP suffers from a number of problems, such as high handover latency and high packet loss, and it requires changes in the network infrastructure. To solve these problems, we earlier proposed a transport layer based mobility management scheme called Seamless IP diversity based Generalized Mobility Architecture (SIGMA). SIGMA exploits the multiple addresses available to most mobile hosts to perform a seamless handoff. The Stream Control Transmission Protocol (SCTP) [2], a transport layer protocol being standardized by the IETF, was used to validate and test the concepts and performance of SIGMA. The use of multiple interface cards in our previous studies on SIGMA resulted in some instability during handoff, related to the handoff latency. The instability was due to an excessive number of handoffs in the overlapping region. There is previous work on reducing the number of handoffs and the handoff latencies for Cellular IP, Mobile IP, and Layer 2 handoffs. For example, work on Cellular IP [3], [4] used average receiving power, receiving window, bit error ratio and signal strengths. Portoles et al. [7] reduced Layer 2 handoff latency by using signal strength and buffering techniques. Aust et al. [8] used the Signal-to-Noise Ratio for Mobile IP handoffs. It should be noted that the above works deal with either link layer handoffs or are designed for specific architectures (like Cellular IP and Mobile IP). The authors are not aware of any work which has studied handoff schemes for transport layer based mobility management schemes.

The objective of this paper is to remove the instability observed in previous studies of SIGMA by proposing a handoff

scheme for SIGMA. Initiation of handoff, also known as the handoff trigger, is a crucial part of any handoff policy. Signal-to-Noise Ratio, Signal-to-Interference Ratio, Bit Error Rate and Frame Error Rate (FER) are generally used as link layer handoff triggers [9]. Since our experimental environment has noise and negligible interference, we use the Signal-to-Noise Ratio (SNR) as the handoff trigger in our proposed handoff policy for SIGMA. We designed three HANdoff schemes for SIGMA: (i) HANSIG, with SNR alone; (ii) HANSIG-H, with SNR and hysteresis; and (iii) HANSIG-HR, with SNR, hysteresis and route cache flushing. Results from the experimental testbed of SIGMA were collected for these three schemes and compared. The rest of this paper is organized as follows. Sec. II is a brief introduction to SIGMA. The instability of SIGMA, the motivation for this work, is illustrated in Sec. III. Previous work on handoff schemes, their methods, advantages, and disadvantages is described in Sec. IV. Our proposed handoff scheme is described in Sec. V. The experimental setup for testing the proposed handoff schemes is described in Sec. VI, followed by experimental results and concluding remarks in Secs. VII and VIII, respectively.

II. INTRODUCTION TO SIGMA

SIGMA is a transport layer based seamless handoff scheme which relies on the IP diversity offered by multiple interfaces in mobile nodes to carry out a soft handoff. The Stream Control Transmission Protocol's (SCTP) multi-homing feature is used to illustrate the concepts of SIGMA. SCTP allows an association (see Fig. 1) between two end points to span multiple IP addresses of multiple network interface cards. Addresses can be dynamically added to and deleted from an association using ASCONF chunks of SCTP's dynamic address reconfiguration feature [2]. One of the addresses is designated as the primary, while the others can be used as backups in case of failure of the primary address. In Fig. 1, a multi-homed Mobile Node (MN) is connected to a Correspondent Node (CN) through two wireless networks. The various steps of SIGMA (see Fig. 1) are given below.

1) STEP 1: Obtain a new IP address: The handoff procedure begins when the MN moves into the overlapping radio coverage area of two adjacent subnets. Once the MN receives the router advertisement from the new access point (Access Point 2), it initiates the procedure of obtaining a new IP address (IP2 in Fig. 1).
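The multi-homing behaviour just described (addresses dynamically added, one designated primary, obsolete addresses deletable) can be modelled with a toy sketch. This is an illustrative stand-in for the association state only, not the kernel SCTP/ASCONF implementation, and the address names are hypothetical:

```python
class Association:
    """Toy model of an SCTP association as used by SIGMA: addresses can be
    added and deleted dynamically, and one address is designated primary
    (cf. the ASCONF dynamic address reconfiguration feature)."""

    def __init__(self, first_ip):
        self.addresses = {first_ip}
        self.primary = first_ip

    def add_address(self, ip):
        """Bind an additional IP into the association (STEP 2)."""
        self.addresses.add(ip)

    def set_primary(self, ip):
        """Redirect data traffic to a bound IP (STEP 3)."""
        if ip in self.addresses:
            self.primary = ip

    def delete_address(self, ip):
        """Remove an obsolete, non-primary IP (STEP 5)."""
        if ip != self.primary:
            self.addresses.discard(ip)


assoc = Association("IP1")   # association set up with the MN's first IP
assoc.add_address("IP2")     # MN enters the overlap and obtains IP2
assoc.set_primary("IP2")     # CN now sends data to IP2
assoc.delete_address("IP1")  # MN leaves Wireless Network 1
print(assoc.primary, sorted(assoc.addresses))  # IP2 ['IP2']
```

The sequence of calls mirrors Steps 1-5 of this section in order.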


Fig. 1. Experimental testbed. (A multi-homed Mobile Node with interfaces Eth0 and Eth1 moves through the overlapping region of Wireless Networks 1 and 2; one SCTP association spans both paths, via the access points and gateways, to the Correspondent Node.)

2) STEP 2: Add IP addresses to the association: When the SCTP association was initially set up, only the CN's IP address and the MN's first IP address (IP1) were exchanged between CN and MN. After the MN obtains another IP address (IP2 in STEP 1), MN binds IP2 into the association (in addition to IP1) and notifies CN about the availability of the new IP address.

3) STEP 3: Redirect data packets to the new IP address: When MN moves further into the coverage area of Wireless Network 2, Data Path 2 becomes increasingly more reliable than Data Path 1. CN can then redirect data traffic to IP2 to increase the possibility of data being delivered successfully to the MN. MN accomplishes this by sending an ASCONF chunk with the Set Primary Address parameter, which results in CN setting its primary destination address for MN to IP2. The MN's routing table is also changed so that packets leaving MN are routed through IP2.

4) STEP 4: Update the location manager: Location management in SIGMA is implemented by a location manager that maintains a database of the correspondence between MN's identity and its current primary IP address. MN can use any unique information as its identity, such as the home address (as in MIP), a domain name, or a public key defined in the Public Key Infrastructure (PKI).

5) STEP 5: Delete or deactivate the obsolete IP address: When MN moves out of the coverage of Wireless Network 1, no new or retransmitted data packets should be directed to IP1. MN notifies CN that IP1 is out of service for data transmission by sending an ASCONF chunk to CN. Once it is received, CN deletes IP1 from its local association control block and sends an acknowledgment to MN indicating successful deletion.

The actual handoff takes place in STEP 3; the handoff scheme for SIGMA therefore has to consider the exact time at which MN should send Set Primary, the objective being to reduce the number of handoffs and avoid instability.

III. INSTABILITY OF SIGMA

In this section, we illustrate the instability of SIGMA using the timeline shown in Fig. 2. When MN moves between the regions of the wireless networks, it is in one of two states: (i) the stable state, where the MN receives data and sends SACKs through the same IP address; (ii) the unstable state, where the MN receives data through one IP address and sends SACKs through another IP address. In SIGMA, a handoff is performed for each Set Primary sent (see Sec. II, Step 3). In Fig. 2, point 1 indicates the time where the initial Set Primary is sent from the MN to the CN during handoff. This Set Primary request is processed by the CN, and CN starts sending data to the new IP. At the same time, the MN changes its routing table, but the change takes effect only at point 2 because of the route cache (see Sec. V). In the meantime, a large number of Set Primaries are issued and the routing table is changed many times due to the ping pong effect. So even at point 2 of Fig. 2, the MN might not route through the new IP, since the routing table has already been changed again; MN therefore sends SACKs through the old IP while CN sends data to the new IP. Moreover, after the initial Set Primary, the subsequent Set Primary requests are ignored by the CN, because many Set Primaries arrive within a short interval of time. Only at point 3 does the last routing table change take effect, after which both data and SACKs go through the new IP. Therefore, we call the time from point 4 to point 5 the Unstable State, where the MN uses one IP to receive data and another IP to send SACKs.

Fig. 2. Timeline for SIGMA explaining the unstable region. (Between the Mobile Node and the Correspondent Node, many Set Primaries are sent and the MN routing table is changed many times due to the ping pong effect; only the initial Set Primary request is processed by the CN, which then starts sending data to the new IP; subsequent requests, issued continuously without much time interval, are not processed; routing table changes lag because of the route cache; eventually data and SACKs both go through the new IP once MN has crossed the overlapping region and is completely under the new network.)

To illustrate the instability of SIGMA in real data transfer, we use Fig. 3, which shows the throughput of SIGMA in our experimental setup (given in Sec. VI) with the HANSIG scheme.

INTERNATIONAL JOURNAL OF INNOVATIVE TECHNOLOGY & CREATIVE ENGINEERING (ISSN:2045-8711) VOL.1 NO.2 FEBRUARY 2011

Fig. 3 shows the throughput as a function of time. The figure is divided into five regions, showing that the MN alternates

between the stable and unstable states, as given below.
1) From 0 to 23 seconds, the MN is in Wireless Network 1, during which data are received and SACKs are sent through IP1; this is the stable state for the MN.
2) From 23 to 36 seconds, the MN is in the unstable state, where data are received through IP2 and SACKs are sent through IP1, which is due to the excessive number of handoffs and the route cache.
3) The MN then enters Wireless Network 2 completely, where it is again in the stable state.
4) When the MN moves back from Wireless Network 2 to Wireless Network 1, it is in the unstable state between 38 and 52 seconds.
5) From 52 seconds, the MN is completely under Wireless Network 1 and in the stable state.
We can see from Fig. 3 that the MN is in the unstable state for a long period, which is due to the number of Set Primaries being sent to the CN (discussed in Sec. II) because of the large number of handoffs, and due to the route cache (see Sec. V). The unstable state of SIGMA can also be called handoff latency, because for other schemes, such as Mobile IP, which use a single interface, handoff latency has been defined in previous work as the time taken by the MN to completely switch between networks. In SIGMA, the MN is completely under the new network, i.e., uses the new IP for both data and SACKs, only after the unstable state. So our aim is to reduce the time during which the MN is in the unstable state, thus reducing the handoff latency. Reducing the unstable state is important because packet losses will occur if an access point becomes unavailable while the MN is using both interfaces. We remove this unstable state by using an efficient handoff scheme. In the next section, we discuss previous work on reducing handoff latency.
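The stable/unstable bookkeeping used in this discussion can be expressed as a small sketch over per-interval (data IP, SACK IP) observations. The timeline below is a hypothetical one shaped like the Fig. 3 regions, not the measured trace:

```python
def unstable_intervals(samples):
    """Given (time, data_ip, sack_ip) samples, return [start, end) spans in
    which the MN receives data on one IP but sends SACKs on another."""
    spans, start = [], None
    for t, data_ip, sack_ip in samples:
        if data_ip != sack_ip and start is None:
            start = t                      # unstable state begins
        elif data_ip == sack_ip and start is not None:
            spans.append((start, t))       # back to the stable state
            start = None
    if start is not None:                  # trace ends while unstable
        spans.append((start, samples[-1][0]))
    return spans


# Hypothetical timeline mirroring the five Fig. 3 regions.
timeline = [(0, "IP1", "IP1"), (23, "IP2", "IP1"), (36, "IP2", "IP2"),
            (38, "IP1", "IP2"), (52, "IP1", "IP1")]
print(unstable_intervals(timeline))  # [(23, 36), (38, 52)]
```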

IV. PREVIOUS WORK

One of the first works on reducing the number of handoffs [9] describes various criteria that can be used to trigger Layer 2 handoffs. The criteria include Relative Signal Strength (RSS), RSS with Threshold (T), RSS with Hysteresis (H), and RSS with both Threshold (T) and Hysteresis (H). There is also much previous work on reducing handoff latency; most of it depends on architectural features of Mobile IP, Cellular IP, etc. Hua et al. [10] designed a scheme for Mobile IP which makes use of a concept called Multi-tunnel, where the HA copies an IP packet destined to the MN and sends the copies to multiple destinations through the multi-tunnel. Belghoul et al. [10] present pure IPv6 soft handover mechanisms, based on IPv6 flow duplication and merging, in order to offer pure IP-based mobility management over heterogeneous networks using a Duplication & Merging Agent (D&M). In POLIMAND (policy based handoff) Aust et al. [7] reduce handoff latency by accelerating the handoff process through a combination of MIP signaling and link layer hints obtained from the Generic Link Layer. Portoles et al. [7] reduce Layer 2 handoff latency by buffering Layer 2 frames in the driver and card of AP1 and forwarding them to AP2. Shin et al. [11] reduce the MAC layer handoff latency by selective scanning (a well-selected subset of channels is scanned, reducing the probe delay) and caching (the MN builds a cache table which uses the MAC address of the current AP as the key). RSS and BER based algorithms have been reported by Chia et al. [5] for Cellular IP; they compiled a radio propagation and BER database for handover simulation in typical city microcellular radio systems, so as to provide realistic data for handover simulation, thus minimizing inaccuracies due to inadequacies in propagation modeling. Austin et al. [4] studied a velocity adaptive algorithm for Cellular IP. They use average receiving power, i.e., they calculate signal strength time averages from N neighboring base stations and reconnect the mobile subscriber to an alternate BS whenever the signal strength of the alternate BS exceeds that of the serving BS by at least H dB.

As discussed above, most of the previous work focused on techniques to reduce the number of handoffs and the handoff latency. The techniques used to reduce the number of handoffs [9] can be applied to SIGMA, since SIGMA can obtain the Layer 2 information. However, previous work on reducing the latency per handoff is not applicable to SIGMA: work such as [5], [11] reduces handoff latency at Layer 2, whereas SIGMA operates at Layer 4, and other work such as [9], [10] is based on architectures like Mobile IP and Cellular IP, which differ from the SIGMA architecture. Considering the above facts, we develop our own handoff scheme to enhance stability in SIGMA by making use of the architectural features of SIGMA.

V. HANDOFF SCHEME TO ENHANCE PERFORMANCE IN SIGMA

The instability of SIGMA described in Sec. III depends on two factors:
1) Fluctuation of signal strength, which increases the number of handoffs due to the ping pong effect. The ping pong effect can be reduced by using one of the techniques for reducing the number of handoffs discussed in Sec. IV.
2) The route cache effect: the kernel first searches the route cache for an entry matching the destination of a packet, followed by a search in the main routing table (also called the Forwarding Information Base (FIB)). If the kernel finds a matching entry during the route cache lookup, it forwards the packet immediately and stops traversing the routing tables. Because the routing cache is maintained by the kernel separately from the routing tables, manipulating the routing tables may not have an immediate effect on the kernel's choice of path for a given packet. We use ip route flush cache to avoid a non-deterministic lag between the time that a new route is entered into the kernel routing tables and the time that a lookup in those tables is performed. Once the route cache has been flushed, new route lookups (if not by a packet, then manually with ip route get) will result in a fresh lookup in the kernel routing tables.
Our proposed handoff scheme, called HANSIG-HR, is designed to remove both the ping pong and route cache effects as described below. HANSIG-HR: Our proposed HANSIG-HR scheme makes use of the Signal-to-Noise Ratio (SNR) (discussed in Sec. I), hysteresis to reduce the number of handoffs (discussed in Sec. IV), and route cache flushing (discussed above). The pseudo code for HANSIG-HR is given below, where SNR1 and SNR2 are the Signal to Noise Ratios of AP1 and AP2, respectively, and Hysteresis is the hysteresis value.

VI. EXPERIMENTAL SETUP

HANSIG, HANSIG-H and HANSIG-HR, discussed in Sec. V, were implemented in the testbed shown in Fig. 1. The testbed consists of the MN, the CN and gateways (used to form Wireless Network 1 and Wireless Network 2).

The gateways and CN are Dell Desktops running RedHat Linux 9 with kernels 2.4.20 and 2.6.6, respectively. The MN is a Dell-Inspiron 1100 Laptop with two wireless NIC cards (Avaya PCMCIA and Netgear USB wireless cards) running RedHat Linux 9 kernel 2.6.6.

while (1) {
    Calculate SNR1 = SignalStrength / NoiseStrength for AP1
    Calculate SNR2 = SignalStrength / NoiseStrength for AP2
    If (SNR2 > SNR1) and (SNR2 - SNR1 > Hysteresis)
        Issue Set_Primary to set IP2 as the primary address in CN
    If (SNR1 > SNR2) and (SNR1 - SNR2 > Hysteresis)
        Issue Set_Primary to set IP1 as the primary address in CN
    Change routing table of MN and flush route cache
}
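As a runnable companion, the pseudo code above can be translated into Python. The SNR readings and address names are hypothetical; the Set_Primary and routing side effects are reduced to a returned decision plus the iproute2 command lists they would trigger; and tracking the current primary (so Set Primary is not re-issued every iteration) is our interpretation of the scheme's intent:

```python
def hansig_hr(snr1, snr2, primary, hyst=4):
    """One iteration of the HANSIG-HR decision loop: hand off only when
    the other access point's SNR exceeds the current one by > hyst dB."""
    if primary == "IP1" and snr2 - snr1 > hyst:
        return "IP2"   # issue Set_Primary(IP2), update routes, flush cache
    if primary == "IP2" and snr1 - snr2 > hyst:
        return "IP1"   # issue Set_Primary(IP1), update routes, flush cache
    return primary     # within the hysteresis band: no handoff


def route_update_commands(gateway, dev):
    """iproute2 commands for the routing table change plus the route cache
    flush that HANSIG-HR performs after a handoff (requires root to run)."""
    return [["ip", "route", "replace", "default", "via", gateway, "dev", dev],
            ["ip", "route", "flush", "cache"]]


print(hansig_hr(10.0, 12.0, "IP1"))  # IP1: 2 dB difference < 4 dB hysteresis
print(hansig_hr(10.0, 15.0, "IP1"))  # IP2: 5 dB difference triggers handoff
print(route_update_commands("10.0.2.1", "eth1")[1])
```

The gateway address 10.0.2.1 and the device name eth1 are placeholders; on the testbed the gateway of the newly primary wireless network would be used, and each command could be run with subprocess.run(cmd, check=True).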

VII. RESULTS FOR THE HANDOFF SCHEMES

In this section, we present results to demonstrate the effectiveness of the different handoff schemes we proposed, using the experimental test bed described in Sec. VI. The effectiveness of the handoff schemes HANSIG, HANSIG-H and HANSIG-HR is presented and compared. We use throughput and handoff frequency as measures of effectiveness of our proposed handoff schemes.

The pseudo code for HANSIG and HANSIG-H is similar to that of HANSIG-HR, except that HANSIG-H does not flush the route cache, and HANSIG uses a hysteresis value of zero and likewise does not flush the route cache. Optimum value of hysteresis: Based on the signal strength fluctuations, we now determine an optimum hysteresis value. Fig. 4 shows the variation of SNR, as measured in our testbed, as the MN moves at a uniform speed from Wireless Network 1 to Wireless Network 2. In Fig. 4, we can see that the maximum difference between the access points' SNRs in the ping pong region is 3 dB. If the hysteresis value were less than 3 dB, many unnecessary handoffs would have taken place, for example between 45 and 46 seconds in Fig. 4. We therefore assigned a hysteresis value of 4 in our experimental test bed. HANSIG, HANSIG-H, and HANSIG-HR were implemented in the MN, and results were obtained using the experimental setup described in Sec. VI.
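The relationship between hysteresis and handoff count described above can be illustrated with a toy trace. The SNR sequences below are synthetic, chosen to show the ping pong effect, and are not the testbed measurements of Fig. 4:

```python
def count_handoffs(snr1_trace, snr2_trace, hyst):
    """Count primary-address switches over an SNR trace using the
    hysteresis rule: switch only when the difference exceeds hyst dB."""
    primary, handoffs = 1, 0
    for s1, s2 in zip(snr1_trace, snr2_trace):
        if primary == 1 and s2 - s1 > hyst:
            primary, handoffs = 2, handoffs + 1
        elif primary == 2 and s1 - s2 > hyst:
            primary, handoffs = 1, handoffs + 1
    return handoffs


snr1 = [10, 10, 10, 10, 10, 10, 10]   # synthetic: AP1 steady
snr2 = [8, 11, 9, 12, 8, 13, 20]      # synthetic: AP2 fluctuating, then dominant
print(count_handoffs(snr1, snr2, 0))  # 5 handoffs: ping pong in the overlap
print(count_handoffs(snr1, snr2, 4))  # 1 handoff: hysteresis suppresses ping pong
```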

A. Effect of hysteresis on the number of handoffs

We observed the number of handoffs for different values of hysteresis. For hysteresis values of 0, 1, 2, 3 and 4, the average numbers of handoffs were 15, 11, 6, and 1, respectively. Therefore, for the rest of the results, we used a hysteresis value of 4.

B. Effect of hysteresis on data flow

The effect of hysteresis on the throughput of SIGMA when implementing HANSIG-H is shown in Fig. 5. As shown in Fig. 5, the graph is divided into five regions in which the MN is in the two states alternately. From 0 to 20.57 seconds, the MN is in Wireless Network 1, during which data are received and SACKs are sent through IP1. From 20.57 to 22.18 seconds, the MN is in the unstable state, where data are received from IP2 and SACKs are sent from IP1, due to the excessive number of

handoffs caused by the ping pong effect resulting from signal strength variation. Between 22.18 and 73.97 seconds the MN is completely in Wireless Network 2, during which it is in the stable state, receiving data and sending SACKs through a single IP, i.e., IP2. When the MN moves back from Wireless Network 2 to Wireless Network 1, it again goes into the unstable state; from 74.97 to 75.50 seconds, data are received from IP1 and SACKs are sent from IP2, which is again due to the number of handoffs resulting from the ping pong effect. The MN is then completely under Wireless Network 1 and is in the stable state from 75.50 seconds onwards. The MN is unstable during the periods from 20.57 to 22.18 seconds and 74.97 to 75.50 seconds even with hysteresis implemented. The instability is due to the caching effect of the routing table, even though there was only one handoff.

Fig. 5. Throughput for HANSIG-H.

C. Effect of hysteresis and route cache flush on data flow

The number of handoffs in the overlapping region and the throughput were measured for HANSIG and HANSIG-HR. The throughput for HANSIG is shown in Fig. 3. The durations of the unstable state are 5 seconds and 20 seconds. These are due to the excessive number of handoffs which take place, without hysteresis, while the MN is in the overlapping region. So we can see that the duration of time for which the MN receives data from one IP and sends SACKs through another depends on the number of handoffs taking place as the MN moves between wireless networks. Packets will be lost if the MN loses contact with one of the access points while it is using both interfaces (one for receiving data and another for sending SACKs). The throughput of HANSIG-HR is shown in Fig. 6, with only three regions: from 0 to 19 seconds, the MN sends and receives data through IP1; from 19 to 39 seconds it receives and sends data through IP2; and from 39 seconds onwards it receives and sends data through IP1. These three regions were identified by analyzing the Ethereal captures during the data transfers. From this we can infer that at any point of time the MN is always in a stable state. We can, therefore, see that the MN is unstable for a longer time when no hysteresis is used (Fig. 3) than when hysteresis and route cache flushing are used (Fig. 6), i.e., hysteresis and route cache flushing (HANSIG-HR) improve the performance of SIGMA.

Fig. 6. Throughput for HANSIG-HR.

VIII. CONCLUSION AND FUTURE WORK

We have proposed a new handoff policy for SIGMA and analyzed its effect on the enhancement of the stability of SIGMA. We observed that the new handoff policy HANSIG-HR, which is based on the signal to noise ratio, hysteresis and route cache flushing, significantly improved the performance of SIGMA. Future work consists of improving the handoff policy by using a dwell timer and a threshold, and by dynamically determining the value of the hysteresis based on the characteristics of the signal fluctuations.

REFERENCES
[1] C.E. Perkins, "Mobile Networking Through Mobile IP," IEEE Internet Computing, vol. 2, no. 1, pp. 58-69, January-February 1998.
[2] R. Stewart, "Stream Control Transmission Protocol (SCTP) dynamic address configuration," IETF draft, draft-ietf-tsvwg-addip-sctp-12.txt, June 2005.
[3] M.D. Austin and G.L. Stuber, "Velocity adaptive handoff algorithm for microcellular systems," IEEE Trans. Veh. Technol., vol. 43, no. 3, pp. 549-561, August 1994.
[4] S. Chia and R.J. Warburton, "Handover criteria for city microcellular radio systems," Proc. IEEE Veh. Tech. Conf., Orlando, FL, USA, pp. 276-281, 6-9 May 1990.
[5] M. Portoles, Z. Zhong, S. Choi, and C.T. Chou, "IEEE 802.11 link-layer forwarding for smooth handoff," Proc. 14th IEEE Personal, Indoor and Mobile Radio Communications, Beijing, China, pp. 1420-1424, 7-10 September 2003.
[6] S. Aust, D. Proetel, N.A. Fikouras, C. Pampu, and C. Gorg, "Policy based Mobile IP handoff decision (POLIMAND) using generic link layer information," 5th IEEE International Conference on Mobile and Wireless Communication Networks, Singapore, 27-29 October 2003.
[7] A. Festag, "Optimization of handover performance by link layer triggers in IP-based networks: parameters, protocol extensions and APIs for implementation," tech. rep., Telecommunication Networks Group, Technische Universitat Berlin, July 2002.
[8] G.P. Pollini, "Trends in handover design," IEEE Communications Magazine, vol. 34, no. 3, pp. 82-90, March 1996.
[9] Y.M. Hua, L. Yu, and Z. Hui-min, "The Mobile IP handoff between hybrid networks," 13th IEEE International Symposium on Personal, Indoor and Mobile Radio Communications, Portugal, pp. 265-269, 15-18 September 2002.
[10] F. Belghoul, Y. Moret, and C. Bonnet, "Performance analysis on IP-based soft handover across ALL-IP wireless networks," IWUC, Porto, Portugal, pp. 83-93, 13-14 April 2004.
[11] S. Shin, A. Forte, A. Rawat, and H. Schulzrinne, "Reducing MAC layer handoff latency in IEEE 802.11 wireless LANs," ACM MobiWac 2004, Philadelphia, PA, USA, pp. 19-26, September 26-October 1, 2004.


A Fast Selective Video Encryption Using Alternate Frequency Transform

Ashutosh Kharb, Department of ECE, USIT, New Delhi, India, 110006; Seema*, Department of CSE, BMIET, Sonipat, Haryana, India, 131001; Ravindra Purwar, GGSIPU, Sonipat, New Delhi, India, 110006

Abstract— With the commercialization of multimedia data over public networks, security of multimedia data is a challenging issue. Further, multimedia data is generally very large and therefore requires efficient compression to save transmission cost. In this manuscript a modified 4-point butterfly method is proposed to compute the DCT for the encoding of frames in video data. It has been experimentally compared with an existing technique based on parameters like PSNR, compression ratio, execution time per frame, and the time taken to evaluate the DCT. It has also been shown theoretically that the proposed technique takes less time than the existing method.

Keywords: DCT, motion estimation, selective encryption, spatial compression, video encoding.

I. INTRODUCTION

Nowadays, public networks like the Internet are heavily used for various multimedia based applications such as video on demand, video conferencing, pay per view TV, etc. As the data size in such applications is very large in comparison to text data, it is necessary to compress the data before transmission. Digital video signals are compressed using coding standards such as MPEG and H.264/AVC before transmission over a wired or wireless channel. These standards do not provide security for the multimedia data, so various encryption schemes have been proposed to secure it. The traditional solution [1,2] to provide confidentiality is to scramble the data in the frequency or temporal domain, but these techniques are now vulnerable to attacks. Another way is to encrypt either the uncompressed data or the compressed data (at the bit stream level) using conventional cryptosystems like DES and AES, which work on blocks of data and are therefore known as block ciphers. These procedures provide the highest security but also require high processing time, which is undesirable for real time applications. Moreover, video data is more voluminous than text data, so this results in a decrease in speed. The information density is also lower in multimedia data than in text data, so encrypting the whole video is unnecessary. Hence the focus shifts from complete encryption schemes to partial or selective encryption schemes, which provide lower computational cost and increase speed by reducing the processing time. The basic concept of partial encryption is to select the most important coefficients and encrypt them with conventional cryptographic ciphers. The non-selected coefficients are sent over the transmission channel with no encryption. Since the selected coefficients are protected, it is impossible for an attacker to recover any information from them. The rest of the paper is organized as follows. In section 2, we discuss the basic concept of video compression. Section 3 introduces the partial video encryption technique. In section 4, the proposed modified technique is discussed. The results of experiments are detailed in section 5, where we present comparison results with the algorithm of Yengs et al. [1]. Finally, in section 6, conclusions are drawn and future studies are explored.

II. VIDEO COMPRESSION


A brief introduction to the process of video compression is given in this section. Video compression comprises two levels. First, spatial compression exploits the high correlation between pixels (samples) of the same frame and is equivalent to JPEG compression. Then, temporal compression removes the temporal redundancy between adjacent frames using the concept of motion estimation. A video sequence is a collection of groups of pictures (GOPs), i.e., still images called frames. There are three types of frames.
I frame (intra frame): the first frame, representing the beginning of a scene, followed by P and B frames. The spatial compression process is applied only to I frames.
P frame (predicted frame): this frame is predicted from the past reconstructed frame.
B frame (bidirectional frame): these frames are predicted from the I frame and P frames.
The general sequence of frames in a GOP is illustrated in figure 1.

Figure 1: A sequence of GOP

The overall video compression process can be depicted as in figure 2. The main components of compression are: transform encoding, quantization, motion compensation and estimation, zigzag reordering and RLE (Run Length Encoding), and entropy encoding.

Figure 2: General Block Diagram of Video Compression [4]

Pixels in a video exhibit a certain level of correlation with the adjacent or neighboring pixels in the same frame and in the neighboring frames, and the correlation between consecutive frames of a video is high. So, in the transform encoding phase, a transformation from the spatial (correlated) domain to an uncorrelated domain takes place. This transformation maintains the relative relationship between the pixels while revealing the redundancies. Some of the transforms that can be used [3] are image based transforms (DWT, which is best suited for still images) and block based transforms (DCT, KLT, etc.). The choice of transform depends on the following factors:
• The data in the transformed domain should be uncorrelated and compact (most of the energy should be concentrated in a small number of values).
• The transform should be reversible.
• The transform should be computationally tractable.
Block based transforms are best suited for compressing block based motion compensated residuals. The 1-D DCT (a unitary transform) is applied to 1-D sample values and can be evaluated using the formula
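As an illustration of the GOP structure described above, the following toy generator produces a display-order frame-type pattern. The spacing of P and B frames is encoder-dependent; the IBBP pattern assumed here is just a common choice, not the paper's specific configuration:

```python
def gop_pattern(n_frames, b_per_gap=2):
    """Return display-order frame types for one GOP: an I frame, then
    groups of B frames, each group anchored by a following P frame."""
    frames = ["I"]
    while len(frames) < n_frames:
        # leave room for the anchoring P frame at the end of the group
        gap = ["B"] * min(b_per_gap, n_frames - len(frames) - 1)
        frames.extend(gap + ["P"])
    return frames[:n_frames]


print(gop_pattern(7))  # ['I', 'B', 'B', 'P', 'B', 'B', 'P']
```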

F(x) = c(x) \sum_{n=0}^{N-1} f(n) \cos\left[\frac{(2n+1)x\pi}{2N}\right], \qquad x = 0, 1, \ldots, N-1 \qquad (1)

where c(x) = \sqrt{1/N} for x = 0, and c(x) = \sqrt{2/N} for x > 0.


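The DCT defined here can be checked numerically. The sketch below builds the orthonormal N x N transform matrix from c(x) and verifies that the separable matrix form of the 2-D transform inverts exactly (NumPy is assumed available; the sample block is arbitrary):

```python
import numpy as np

N = 8
# Orthonormal DCT-II matrix: A[x, m] = c(x) * cos((2m+1) x pi / (2N)),
# with c(0) = sqrt(1/N) and c(x) = sqrt(2/N) for x > 0.
idx = np.arange(N)
A = np.sqrt(2.0 / N) * np.cos((2 * idx[None, :] + 1) * idx[:, None] * np.pi / (2 * N))
A[0, :] = np.sqrt(1.0 / N)

X = np.arange(N * N, dtype=float).reshape(N, N)  # arbitrary 8x8 sample block
Y = A @ X @ A.T        # separable 2-D DCT in matrix form
X_back = A.T @ Y @ A   # inverse: A is orthonormal, so A^T A = I

print(np.allclose(X, X_back))                # True: transform is reversible
print(np.isclose(Y[0, 0], X.mean() * N))     # True: DC term = N * block mean
```

The two checks correspond to the reversibility requirement listed above and to the DC coefficient's relation to the average pixel value.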

INTERNATIONAL JOURNAL OF INNOVATIVE TECHNOLOGY & CREATIVE ENGINEERING (ISSN:2045-8711) VOL.1 NO.2 FEBRUARY 2011 then created from the blocks of image from the reference frame. The motion vectors for blocks used for motion estimation are transmitted, as well as the difference of the compensated image with the current frame is also encoded. The main purpose of motion estimation based video compression is to save on bits by sending encoded difference images which have less energy and can be highly compressed as compared to sending a full frame. This is the most computationally expensive operation in the entire compression process. The matching of one block with another is based on the output of a cost function. The block that results in the least cost is one that matches the closest to current block. There are various cost functions, of which the most popular and less computationally expensive is Mean Absolute Difference (MAD) given by equation (6). Another cost function is Mean Squared Error (MSE) given by equation (7).

and the IDCT (inverse DCT) can be evaluated as

f(x) = Σ_{u=0}^{N−1} c(u) F(u) cos[(2x+1)uπ / 2N] …… (2)

for x = 0, 1, 2, …, N−1. The value at u = 0 is known as the DC coefficient and is proportional to the average value of the pixels, since at u = 0,

F(0) = √(1/N) Σ_{x=0}^{N−1} f(x) …… (3)

and all other coefficients are known as AC coefficients. Similarly, the 2-D DCT (DCT-II) is used for a 2-D sample sequence and is given in equation (4).

MAD = (1/N²) Σ_{i=1}^{N} Σ_{j=1}^{N} |C_ij − R_ij| …… (6)

MSE = (1/N²) Σ_{i=1}^{N} Σ_{j=1}^{N} (C_ij − R_ij)² …… (7)

F(u,v) = c(u) c(v) Σ_{x=0}^{N−1} Σ_{y=0}^{N−1} f(x,y) cos[(2x+1)uπ / 2N] cos[(2y+1)vπ / 2N] …… (4)

In other form,

where N is the side of the macro block, and C_ij and R_ij are the pixels being compared in the current macro block and the reference macro block, respectively.
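As an illustrative sketch of block matching driven by the MAD cost of equation (6), an exhaustive full search over a small window can be written as follows (the function names, window size and test data are ours, not the paper's):

```python
def mad(cur, ref):
    """Mean Absolute Difference of equation (6) between two N x N blocks."""
    n = len(cur)
    return sum(abs(cur[i][j] - ref[i][j]) for i in range(n) for j in range(n)) / (n * n)

def full_search(frame_ref, cur_block, top, left, search=2):
    """Exhaustive search in a +/- search window; returns (motion_vector, best_cost)."""
    n = len(cur_block)
    h, w = len(frame_ref), len(frame_ref[0])
    best = (None, float("inf"))
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= h - n and 0 <= x <= w - n:
                cand = [row[x:x + n] for row in frame_ref[y:y + n]]
                cost = mad(cur_block, cand)
                if cost < best[1]:
                    best = ((dy, dx), cost)
    return best
```

For each block of the current frame, the returned motion vector points at the best-matching reference block; the residual (current block minus matched block) is what is then transform-coded.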

Y = A X Aᵀ …… (5)

The Peak Signal-to-Noise Ratio (PSNR), given by equation (8), characterizes the motion-compensated image that is created using the motion vectors and macro blocks from the reference frame.

where X is a block of N x N samples and A is known as the transform matrix. Equation (4) can be viewed as applying the 1-D DCT twice in succession: once on the column values and then on the row values, or vice versa. This property of the DCT is known as separability.
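Equations (1) and (4) and the separability property can be illustrated with a short Python sketch (the sample block values are arbitrary, chosen only for demonstration):

```python
import math

def dct_1d(f):
    """1-D DCT-II of equation (1): F(u) = c(u) * sum_x f(x) cos((2x+1)u*pi/2N)."""
    N = len(f)
    out = []
    for u in range(N):
        c = math.sqrt(1.0 / N) if u == 0 else math.sqrt(2.0 / N)
        s = sum(f[x] * math.cos((2 * x + 1) * u * math.pi / (2 * N)) for x in range(N))
        out.append(c * s)
    return out

def dct_2d(block):
    """2-D DCT of equation (4) via separability: 1-D DCT on rows, then columns."""
    rows = [dct_1d(r) for r in block]              # transform each row
    cols = [dct_1d(list(c)) for c in zip(*rows)]   # transpose, transform each column
    return [list(r) for r in zip(*cols)]           # transpose back

block = [[52, 55, 61, 66],
         [70, 61, 64, 73],
         [63, 59, 55, 90],
         [67, 61, 68, 104]]
coeffs = dct_2d(block)
# coeffs[0][0] is the DC coefficient: N times the mean of an N x N block
```

Because the transform is unitary, the total energy of the block is preserved in the coefficient domain, which is easy to verify numerically.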

PSNR = 10 log₁₀(255² / MSE) …… (8)
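Assuming 8-bit samples (peak value 255, as is usual for grayscale video), equation (8) can be evaluated directly; the helper below is an illustrative sketch:

```python
import math

def psnr(orig, recon, peak=255):
    """PSNR of equation (8): 10*log10(peak^2 / MSE), assuming 8-bit samples."""
    n = len(orig)
    mse = sum((orig[i][j] - recon[i][j]) ** 2 for i in range(n) for j in range(n)) / (n * n)
    if mse == 0:
        return float("inf")          # identical blocks: no distortion
    return 10 * math.log10(peak ** 2 / mse)
```

A higher PSNR corresponds to a smaller reconstruction error, which is how the schemes in this paper are later compared.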

Quantization: After transform encoding, the transformed coefficients are quantized to reduce the number of bits required for encoding. A quantizer maps a signal with a range of values X to a quantized signal with a reduced range of values Y. Quantizers can be broadly classified as scalar or vector quantizers: a scalar quantizer maps one sample of the input signal to one quantized output value, whereas a vector quantizer maps a group of input samples (a 'vector') to a group of quantized values.
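A minimal uniform scalar quantizer sketch follows; the step size used in the example is illustrative, since the paper does not give its quantization parameters:

```python
def quantize(coeffs, step):
    """Uniform scalar quantizer: map each coefficient to the nearest multiple of step."""
    return [[int(round(v / step)) for v in row] for row in coeffs]

def dequantize(levels, step):
    """Inverse mapping (rescaling); the rounding error is the quantization loss."""
    return [[q * step for q in row] for row in levels]
```

Coarser steps drive more high-frequency coefficients to zero, which is what makes the subsequent zigzag reordering and run-length encoding effective.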

Zigzag reordering and RLE: Quantized transform coefficients must be encoded as compactly as possible prior to storage and transmission. In a transform-based image or video encoder, the output of the quantizer is a sparse array containing a few nonzero coefficients and a large number of zero-valued coefficients. Reordering (to group together nonzero coefficients) and efficient representation of

Motion estimation and compensation: This phase is the heart of temporal compression, where the encoder estimates the motion in the current frame with respect to a previous or future frame. A motion-compensated image for the current frame is








encoding. The significant DCT coefficients of a block of image or residual samples typically lie at the 'low-frequency'

positions around the DC (0, 0) coefficient. The nonzero DCT coefficients are clustered around the top-left (DC) coefficient, and the distribution is roughly symmetrical in the horizontal and vertical directions. After quantization, the DCT coefficients of a block are reordered to group together the nonzero coefficients, enabling efficient representation of the remaining zero-valued quantized coefficients. The optimum reordering path (scan order) depends on the distribution of nonzero DCT coefficients. For a typical frame block, the scan order is a zigzag starting from the DC (top-left) coefficient, as shown in figure 3.

F(u) = c(u) Σ_{x=0}^{3} f(x) cos[(2x+1)uπ / 8] …… (9)



F(2u) = c(2u) Σ_{x=0}^{1} [f(x) + f(3−x)] cos[(2x+1)·2uπ / 8] …… (10)

F(2u+1) = c(2u+1) Σ_{x=0}^{1} [f(x) − f(3−x)] cos[(2x+1)(2u+1)π / 8] …… (11)

Due to the symmetric property of the cosine function, cos(uπ − θ) = −cos(θ) for odd u …… (12) and cos(uπ − θ) = cos(θ) for even u …… (13)

Using the above relations, the 1-D DCT can be represented in a structure known as the butterfly approach, in which a junction represents an addition operation and a number on a line represents a multiplication operation.
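The even/odd (butterfly) decomposition described above can be sketched and checked against the direct form of equation (1); the constants and function names here are ours:

```python
import math

C = math.sqrt(0.5)                         # normalisation c(u) for u > 0 with N = 4
c1, c3 = math.cos(math.pi / 8), math.cos(3 * math.pi / 8)

def dct4_butterfly(f):
    """4-point DCT-II via the even/odd (butterfly) decomposition."""
    f0, f1, f2, f3 = f
    s03, s12 = f0 + f3, f1 + f2            # sums  -> even coefficients
    d03, d12 = f0 - f3, f1 - f2            # diffs -> odd coefficients
    F0 = 0.5 * (s03 + s12)
    F2 = 0.5 * (s03 - s12)
    F1 = C * (d03 * c1 + d12 * c3)
    F3 = C * (d03 * c3 - d12 * c1)
    return [F0, F1, F2, F3]

def dct4_direct(f):
    """Direct evaluation of equation (1) for N = 4, for comparison."""
    N = 4
    out = []
    for u in range(N):
        c = math.sqrt(1.0 / N) if u == 0 else math.sqrt(2.0 / N)
        out.append(c * sum(f[x] * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                           for x in range(N)))
    return out
```

Counting the operations in `dct4_butterfly` also makes the later operation-count comparisons in this paper easy to check by hand.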

Figure 3: Zigzag Reordering

Starting with the DC coefficient, each quantized coefficient is copied into a one-dimensional array. Nonzero coefficients tend to be grouped together at the start of the reordered array, followed by long sequences of zeros. The output of the reordering process is therefore an array that typically contains one or more clusters of nonzero coefficients near the start, followed by strings of zero coefficients. Higher-frequency DCT coefficients are very often quantized to zero, so a reordered block will usually end in a run of zeros.
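The reordering and run-length steps can be sketched as follows for a 4x4 block; the scan table is the standard zigzag order, and the 'EOB' (end-of-block) marker is an illustrative convention:

```python
# Zigzag scan order for a 4x4 block (precomputed index pairs, DC first)
ZIGZAG_4x4 = [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2), (0, 3), (1, 2),
              (2, 1), (3, 0), (3, 1), (2, 2), (1, 3), (2, 3), (3, 2), (3, 3)]

def zigzag(block):
    """Reorder quantized coefficients so nonzero values cluster at the start."""
    return [block[i][j] for (i, j) in ZIGZAG_4x4]

def run_length(seq):
    """Encode as (run-of-zeros, value) pairs; trailing zeros become one EOB marker."""
    out, run = [], 0
    for v in seq:
        if v == 0:
            run += 1
        else:
            out.append((run, v))
            run = 0
    if run:
        out.append("EOB")          # end of block: only zeros remain
    return out
```

A sparse quantized block thus collapses to a handful of (run, value) pairs plus one terminator, which is what the entropy coder then compresses.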

Figure 4: 1-D 4 POINT DCT METHOD [3]

[3] The flow graph consists of three stages: a plane-based rotation at stage 1, a plane-based rotation

III. PARTIAL VIDEO ENCRYPTION USING ALTERNATING TRANSFORMS [3]

This scheme focuses on 4x4 blocks of data. It incorporates more than one transform, rather than only the single general DCT explained in section II. These new transforms are as efficient as the DCT for encoding residual frames. The



at stage 2, and a permutation at stage 3. New unitary

transforms can be created by keeping stages 1 and 3 unchanged and changing the rotation angle at stage 2, as shown in figure 3.2.2 below, by varying the angles over a range

new unitary transforms can be derived from the 1-D DCT for N = 4 sample values using equation (1). For N = 4,


and from .

to 3


compared to ADE, which requires one addition, two subtractions and six multiplication operations. So for a 4x4 block of data, the general DCT requires 128 (64x2) multiplication operations and 24 (4x4x3) addition operations, while ADE requires 192 (96x2) multiplication operations and 96 (48x2) addition operations. We therefore modify the above scheme and propose an alternating-transform scheme that reduces the computations and hence increases the speed. This is achieved by interchanging stage 1 and stage 3 of the ADE scheme, as illustrated in figure 4.


[3] The scheme shows the highest EPE (Energy Packing Efficiency) for highly correlated data (I frames) when both a1 and a2 are set to zero. For weaker correlation between data (P and B frames), the maximum EPE is shown at a1 = and a2 = . An encryption algorithm consists of two parts: key generation, and encryption using that key. This process is proposed for residual data only. For key generation, the RC4 key generator is used. The steps for partial video encryption using ADE are as follows:



1. Design two transform tables.
2. Repeat for each frame:
3. Initialize the RC4 key generator with a random 128-bit key.
4. For an input residual block of size 4x4, get M bits from RC4.
5. Choose a transform table, and apply it to the input block, based on the first M−1 bits.
6. The Mth bit is used to encrypt the sign of the DC component: change the sign of the DC component if the Mth bit is "1".

IV.
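The sign-encryption half of the steps above can be sketched with a textbook RC4 keystream; the bit ordering and key handling below are our assumptions, since the paper does not specify them:

```python
def rc4_keystream(key):
    """Textbook RC4: key scheduling followed by the PRGA, one byte per iteration."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    while True:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        yield S[(S[i] + S[j]) % 256]

def encrypt_dc_signs(dc_coeffs, key):
    """Flip the sign of each DC coefficient when the next key bit is 1."""
    ks = rc4_keystream(key)
    out, bits = [], []
    for dc in dc_coeffs:
        if not bits:                       # refill bit buffer from next keystream byte
            byte = next(ks)
            bits = [(byte >> b) & 1 for b in range(8)]
        out.append(-dc if bits.pop() else dc)
    return out
```

Because sign flipping is its own inverse, running the same function with the same key decrypts, which keeps the decoder-side overhead minimal.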



From figure 7 it can be concluded that the MAFD scheme requires 96 (48x2) addition operations and 96 (48x2) multiplication operations to compute the transform for a 4x4 block of data, resulting in a total reduction in computations of approximately 25% compared to the general DCT and 50% compared to the ADE scheme.

The ADE scheme described in section III results in an increase in computational time for transform encoding compared to the general DCT method described in section II, and hence a decrease in speed. Comparing equation (1) with the butterfly structure of figure 4, it can be concluded that the general DCT requires three addition and five multiplication operations for computing the DCT of 4 elements, which is less as

V. EXPERIMENTAL RESULTS


In this section, experimental results are presented to demonstrate the effectiveness of the proposed MAFD scheme over ADE. For this purpose, the four


equi-spaced rotation angles are used, and four test video streams in grayscale mode are considered: the Miss America video (13 frames) and the Akiyo video (30 frames), both at resolution 176 x 144; 15 frames of the Bear video at resolution 720 x 480; and 119 frames of the Susie video at resolution 352 x 240. Both procedures, as explained in sections III and IV, are implemented on 4 x 4 blocks of all the above video sequences using the Image Processing Toolbox of MATLAB 7.0. The PSNR values, the total number of bits required per pixel for each frame, the time taken to compute the DCT method, the total execution time per frame, and the quality factor (PSNR / average bits per pixel) are computed within the limitations of the hardware and software. The results are summarized as follows:















Table II above shows the average bits required per pixel per frame for the different video sequences. It can be observed that the MAFD scheme reduces the number of bits required per pixel by approximately 15% compared to the ADE scheme.

Table III: EXECUTION TIME TAKEN BY DCT METHOD

VIDEO SEQUENCE | ADE | MAFD
MISS AMERICA | 1.450846 | 1.041923077

Table I: AVERAGE PSNR VALUES PER ENCRYPTED FRAME

VIDEO SEQUENCE | ADE | MAFD
MISS AMERICA (176x144, 13 frames) | 60.04938 | 60.01092308
AKIYO (176x144, 30 frames) | 55.984 | 55.92803333
SUSIE (352x240, 119 frames) | 55.20551 | 55.17861345
BEAR (720x480, 15 frames) | 55.872 | 55.8416










Table III above shows the experimental results for the execution time of the DCT method for the different video sequences under both schemes. It can be observed that the MAFD scheme reduces the DCT execution time by approximately 40% compared to the ADE scheme.

Table IV: TOTAL EXECUTION TIME PER FRAME















Table I above shows the comparison between the average PSNR values for the encrypted frames of the different video sequences. It can be observed that the ADE and MAFD schemes result in approximately the same average PSNR values.

Table IV above shows the experimental results for the average total execution time taken per frame by the different video sequences under both schemes. It can be observed that the MAFD


scheme reduces the total execution time by approximately 22% compared to the ADE scheme.

Table V: AVERAGE QUALITY FACTOR PER FRAME















Table V above shows the experimental results for the average quality factor per frame for the different video sequences under both schemes. It can be observed that the MAFD scheme increases the quality factor by approximately 12% compared to the ADE scheme.


Figure 9 above compares the number of bits required after entropy encoding (Huffman encoding) by the encrypted frames of the Miss America video under both schemes (ADE and MAFD). It can be observed that the number of bits required by the encrypted frames obtained with the MAFD scheme is lower for the P and B frames, while it is approximately the same for the first frame (I frame).


Figure 8 above compares the PSNR values of the encrypted frames of the Miss America video under both schemes (ADE and MAFD). It can be observed that the PSNR values of the encrypted frames obtained with the MAFD scheme are lower, except for the first frame (I frame).


Figure 10 above compares the execution time of the DCT method for the Miss America video under both schemes (ADE and MAFD). It can be observed that the time taken by the DCT method under the MAFD scheme is lower. This is due to the reduction in computations in the modified scheme compared to the ADE scheme.

the predicted frames of the Akiyo, Miss America and Susie video sequences under both methods, i.e. ADE and MAFD.






Figure 11 above compares the total execution time per frame for the Miss America video under both schemes (ADE and MAFD). It can be observed that the time taken under the MAFD scheme is lower.




Figure 12 above compares the quality factor, i.e. the ratio of PSNR to the average bits required per pixel per frame, for the Miss America video under both schemes (ADE and MAFD). It can be observed that the quality factor is higher for the modified scheme (MAFD). Figures 13 to 16 below display screenshots of the original frames, the reconstructed encrypted frames and

this results in a reduction in the time taken to evaluate the DCT function and hence in the overall execution time. The experimental results for the average total execution time taken by the different video sequences under both schemes show that the MAFD scheme reduces the total execution time by approximately 22% compared to the ADE scheme. The average quality factor per frame for the different video sequences shows that the MAFD scheme increases the quality by approximately 12% compared to the ADE scheme. As selective sign encryption of the DC coefficients is used for encryption, the overhead due to the encryption process is very small. This work is carried out for 4x4 input blocks, so the reconstructed frames are more accurate; on the other hand, as the block size decreases the energy per block decreases, but the computations and complexity increase compared to 8x8 blocks. Both schemes (ADE and MAFD) are compared here on the basis of parameters such as PSNR, average bits per pixel and execution time; further analysis can be done on the basis of the various attacks to which the system may be vulnerable.


VI. CONCLUSION AND FUTURE SCOPE

The procedures explained in sections III and IV were implemented in MATLAB and compared practically on the basis of PSNR, number of bits required per pixel, execution time taken by each frame to evaluate the DCT method, total execution time per frame, and quality factor (ratio of PSNR to number of bits required per pixel). It is shown both theoretically and practically that the MAFD scheme requires less computational time than ADE. It can be concluded from the results that both procedures provide approximately the same average PSNR values, approximately 56 dB; a higher PSNR value represents less error. The MAFD scheme reduces the average number of bits required per pixel per frame by approximately 15% compared to the ADE scheme, and reduces the DCT execution time by approximately 40%. The reason can be explained theoretically by examining figures 4, 5, 6 and 7: the evaluation of a 1-D 4-sample transform under ADE requires 24 multiplications and 12 additions, resulting in an increased computational overhead. The modified scheme, i.e. interchanging stage 1 and stage 3, requires 12 multiplications and 12 additions for a 1-D sequence of 4 sample values, halving the number of multiplications. So







REFERENCES

[1] I. Agi and L. Gong, "An Empirical Study of Secure MPEG Video Transmission," Proceedings of the Symposium on Network and Distributed Systems Security, pp. 137-144, IEEE, 1996.
[2] L. Qiao and K. Nahrstedt, "Comparison of MPEG Encryption Algorithms," International Journal on Computers and Graphics, Special Issue on Data Security in Image Communication and Network, 22(3), pp. 437-438, 1998.
[3] Siu-Kei Au Yeung, Shuyuan Zhu and Bing Zeng, "Partial Video Encryption Based on Alternating Transforms," IEEE Signal Processing Letters, Vol. 16, No. 10, pp. 893-896, October 2009.
[4] I. Richardson, H.264 and MPEG-4 Video Compression. Hoboken, NJ: Wiley, 2003.
[5] Jian Zhao, "Applying Digital Watermarking Techniques to Online Multimedia Commerce," in Proc. of the International Conference on Imaging Science, Systems, and Applications (CISSA 97), June 30-July 3, 1997, Las Vegas, USA.


Impact of Variable Speed Wind Turbine driven Synchronous Generators in Transient Stability of Power Systems

Dr. D. Devaraj, R. Jeevajothi
Department of EEE, Kalasalingam University, Virudhunagar District, Tamil Nadu, India, PIN 626190

Abstract—With the scenario of wind power constituting up to 20% of electric grid capacity in the future, the need for systematic studies of the impact of wind power on the transient stability of the grid has increased. This paper investigates possible improvements in grid transient stability when integrating large-scale variable speed wind turbine driven synchronous generators. A dynamic model of a grid-connected variable speed wind turbine (VSWT) driven synchronous generator with controllable power-inverter strategies suitable for the study was developed, tested and verified. This dynamic model with its control scheme can regulate real power, maintain reactive power, and control the generated voltage and speed at different wind speeds. For this paper, studies were conducted on a standard IEEE 9-bus system augmented by a radially connected wind power plant (WPP) containing 28 variable speed wind turbines with controllable power-inverter strategies. The model also has the potential to control the rotor angle deviation and increase the critical clearing time during grid disturbances with the help of the controllable power-inverter strategy.

more energy than fixed speed operation, reduces power fluctuations and improves reactive power supply [1]. A stable grid interface requires a reliable tool, PSAT/Matlab, for simulating and assessing the dynamics of grid-connected variable speed wind turbine driven synchronous generators [2]. There are many papers dedicated to the dynamic model development of variable speed wind turbine driven synchronous generators [3, 7]. Taking an IEEE three-machine, nine-bus system [4], we attach the WPP system radially through a transmission system and transformers at bus 1 in Fig. 2. The equivalent WPP has a set of 28 turbines connected in daisy-chain fashion within the collector system. The direct-driven synchronous generator is operated at variable speed with the capability to control the voltage at the regulated bus, at constant power factor, or at constant reactive power. In this study, we set the wind turbines to have constant unity power factor. The 28 wind turbine generators have a combined rating of 100 MW. The impact of wind-generation technology on power system transient stability is also shown in [5, 6].

Keywords: variable speed wind turbine, direct drive synchronous generator, rotor angle deviation, critical clearing time, transient stability, grid connected.

I. INTRODUCTION

Installed wind power generation capacity is continuously increasing. Wind power is the fastest-growing electricity generation source, with a 20% annual growth rate over the past five years. Variable speed operation yields 20 to 30 percent



II. MODELING OF THE VSWT DRIVEN SYNCHRONOUS GENERATOR

Fig. 1 presents a schematic diagram of the proposed VSWT driven synchronous generator connected to the grid.

CP = power coefficient; TM = mechanical torque from the wind turbine [N·m]. The mechanical torque obtained from equation (3) is the input torque to the synchronous generator and drives the generator. CP may be expressed as a function of the tip speed ratio (TSR) λ, given by equation (2).

A. Wind Turbine

The wind turbine is described by the following equations (1), (2) and (3):

P_M = (1/2) ρπR² C_P V_W³ …… (1)

λ = ω_M R / V_W …… (2)

T_M = (1/2) ρπR⁵ C_P ω_M² / λ³ …… (3)

with

C_P = (0.44 − 0.0167β) sin[π(λ − 2) / (13 − 0.3β)] − 0.00184(λ − 2)β …… (4)

where β is the blade pitch angle; for a fixed-pitch type the value of β is set to a constant 4.5°. Here λ = tip speed ratio, ω_M = mechanical speed of the wind turbine [rad/s], R = blade radius [m], V_W = wind speed [m/s], P_M = mechanical power from the wind turbine [kW], and ρ = air density [kg/m³].


Figure 1. Schematic diagram of the proposed VSWT driven synchronous generator connected to the grid (SG rated 1.5 MVA, fixed pitch angle 4.5°; rectifier; dc link at 2.5 kV; inverter; 1.5 MVA transformers, 2 kV/130 kV on the grid side)



By using proportional-integral-derivative (PID) control gains, the errors between Pref and Pinv (the measured real power of the inverter) and between Qref and Qinv (the measured reactive power of the inverter) are processed into the q- and d-axis reference currents Iq_ref and Id_ref, respectively, which are transformed into the a-, b- and c-axis reference currents Ia_ref, Ib_ref and Ic_ref by the dq-to-abc transformation block. When the desired currents in the a-b-c frame are set, a pulse

The synchronous generator is equipped with an exciter identical to the IEEE type 1 model [8]. The exciter helps the dc link meet the required level of inverter output voltage, as given in (5) below:

V_dc = 2√2 · V_AC_RMS / D_MAX …… (5)

where V_AC_RMS is the RMS line-to-neutral voltage of


width modulation (PWM) technique is applied. The error signal is compared with a carrier signal, and the switching signals are created for the six MOSFETs of the VSI.

the inverter and D_MAX is the maximum duty cycle. The exciter plays the role of meeting the dc-link voltage requirement.

C. Power Electronics Control

III. ASSESSMENT OF TRANSIENT STABILITY

Analysis of the transient stability of power systems involves the computation of their nonlinear dynamic response to large disturbances, usually a transmission network fault, followed by the isolation of the faulted element by protective relaying. In these studies, two methods are used for assessing the dynamic performance of the power system following a large disturbance:

The power conversion system is composed of a six-diode rectifier and a six-MOSFET voltage source inverter (VSI), which is simple, cost-effective and widely used in industrial applications [9]. The VSI includes an LC harmonic filter at its terminal to reduce the harmonics it generates. The rectifier converts the ac power generated by the wind generator into dc power in an uncontrollable way; therefore, power control has to be implemented by the VSI. A current-controlled VSI can transfer the desired real and reactive power by generating an ac current with a desired reference waveform.


The maximum power available from the VSWT driven synchronous generator is given by (6):

P_M^MAX = (1/2) ρπR⁵ (C_P^MAX / λ_OPT³) ω_M³ …… (6)

A. Critical Clearing Time

The critical clearing time (CCT) is the maximum time interval by which the fault must be cleared in order to preserve the system stability.
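The CCT search described in this section (repeated time-domain simulations with increasing clearing times) can be organized as a bisection on the clearing time; `is_stable` below is a stand-in for a full transient simulation such as the PSAT runs used in this paper:

```python
def critical_clearing_time(is_stable, t_lo=0.0, t_hi=1.0, tol=1e-3):
    """Bisect on fault clearing time. is_stable(t) runs a time-domain simulation
    and returns True if the system stays in synchronism when the fault is
    cleared after t seconds. Assumes stability at t_lo and instability at t_hi."""
    while t_hi - t_lo > tol:
        mid = 0.5 * (t_lo + t_hi)
        if is_stable(mid):
            t_lo = mid          # still stable: the CCT is larger
        else:
            t_hi = mid          # unstable: the CCT is smaller
    return t_lo

# toy stand-in: pretend the system stays stable up to a 0.25 s clearing time
cct = critical_clearing_time(lambda t: t <= 0.25)
```

Bisection needs far fewer simulation runs than sweeping the clearing time in fixed increments, while reaching the same tolerance.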


Generating units may lose synchronism with the power system following a large disturbance and be disconnected by their own protection systems if a fault persists on the power system beyond a critical period. The critical period depends on a number of factors:
• The nature of the fault (e.g. a solid three-phase bus fault or a line-to-ground fault midway on a transmission circuit);

The desired real power reference Pref is calculated by (7):

P_ref = η P_M^MAX …… (7)


The desired reactive power reference Qref is calculated by (8):

Q_ref = P_ref · √(1 − PF²) / PF …… (8)
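Equations (6), (7) and (8) can be combined into a short sketch; the numeric values used in the check are illustrative:

```python
import math

def max_power(rho, R, cp_max, lam_opt, omega_m):
    """Equation (6): P_M^MAX = 0.5*rho*pi*R^5*(C_P^MAX/lambda_OPT^3)*omega_M^3."""
    return 0.5 * rho * math.pi * R ** 5 * (cp_max / lam_opt ** 3) * omega_m ** 3

def power_references(p_mmax, eta, pf):
    """Equations (7) and (8): P_ref = eta*P_M^MAX, Q_ref = P_ref*sqrt(1-PF^2)/PF."""
    p_ref = eta * p_mmax
    q_ref = p_ref * math.sqrt(1 - pf ** 2) / pf
    return p_ref, q_ref
```

At unity power factor, as assumed for the wind turbines in this study, equation (8) gives Q_ref = 0, so the inverter injects only real power.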

• Calculation of the critical fault clearing times for faults on the power system; and
• Examination of the rotor angle deviation of generators following a large disturbance.



• The location of the fault with respect to the generation; and


• The capability and characteristics of the generating unit.

The calculation of the critical clearing time for a generating unit for a particular fault is carried out through a set of time-domain simulations in which the fault is allowed to persist on the power system for increasing amounts of time before being removed.

B.

• Load at bus 8: 100 MW, 30 MVar

The capacity of the VSWT driven synchronous generator is chosen to be 1.5 MVA with real power 1.5 MW. The rated speed of the rotor is chosen to be 40 rpm and the rated wind speed is 15 m/s; the cut-in and cut-out speeds are 4 m/s and 23 m/s, respectively. The switching frequency of the grid-interface inverter is 1.040 kHz. The capacitor value of the grid-interface rectifier is 2500 µF and the dc-link voltage is 2.5 kV. The generated voltage of the synchronous generator is 0.6 kV. The transformer rating on the grid-connected side is 2 kV/130 kV, and the p.u. voltage magnitude at the primary of the transformer is 0.99 p.u. The grid voltage is 130 kV. Figures 3-8 represent the simulation waveforms of the modeled VSWT driven synchronous generator.

Rotor angle deviation

Rotor angle deviation assessment of wind power generators is one of the main issues in power system security and operation.

IV. SIMULATION RESULTS AND DISCUSSION

Fig. 2 represents the power system network with a fault near bus 7, with only conventional synchronous generators.


Figure 3. Simulation waveform of the real power (MW) of the variable speed wind turbine versus time (s)


Figure 2. Power system network used in the study (IEEE 9-bus system with fault near bus 7, with only conventional synchronous generators)


• Bus 1: 100 MVA, 16.5 kV
• Bus 2: 100 MVA, 18 kV
• Bus 3: 100 MVA, 13.8 kV
• Tr-1: 16.5 kV/230 kV
• Tr-2: 18 kV/230 kV
• Tr-3: 13.8 kV/230 kV
• Load at bus 5: 125 MW, 50 MVar
• Load at bus 6: 90 MW, 30 MVar










Figure 4. Simulation waveform of the reactive power (MVar) of the variable speed wind turbine versus time (s)




















Figure 5. Simulation waveform of the generated phase voltage Va (p.u.) of the variable speed wind turbine driven synchronous generator versus time (s)

Figure 9. Voltages for line fault near bus 7 with only conventional synchronous generators










Figure 6. Simulation waveform of the dc-link voltage (2.5 kV) of the variable speed wind turbine driven synchronous generator versus time (s)

Figure 10. Real power for line fault near bus 7 with only conventional synchronous generators

Figure 7. Simulation waveform of the grid-side real power (1.5 MW), in p.u., of the variable speed wind turbine driven synchronous generator

Figure 11. Reactive power for line fault near bus 7 with only conventional synchronous generators

Figure 8. Simulation waveform of the injected 0.25 MVar grid-side reactive power, in p.u., of the variable speed wind turbine driven synchronous generator

Figure 12. Rotor angle deviation for line fault near bus 7 with only conventional synchronous generators

A WPP comprising 28 wind turbine generators, each rated 1.5 MVA, 600 V, 50 Hz, is connected at bus 1. Figures 13-16 represent the voltage, real power, reactive power and rotor

Figures 9-12 represent the voltage, real power, reactive power and rotor angle deviation for a line fault near bus 7 with only conventional synchronous generators.



this paper, the critical fault clearing time of the generator was increased by three cycles when the above-modeled variable speed wind turbine driven synchronous generator was connected at one of the generation buses.

angle deviation for line fault near bus 7 with conventional synchronous generators replaced by the wind turbine generators.

Rotor angle deviation was reduced by nearly 30° when the above-modeled variable speed wind turbine driven synchronous generator was connected at one of the generation buses. Figure 13.


Voltages for line fault near bus 7 with a

The dynamic model of a VSWT driven synchronous generator with a power electronic interface was proposed for computer simulation studies and was implemented in a reliable power system transient analysis program. This paper has mainly focused on the modeling and assessment of rotor angular stability and critical clearing time (CCT). This was done by observing the behavior of the test system with only conventional synchronous generators, and then by connecting the modeled VSWT driven synchronous generator to the test system, when a three-phase fault is applied. Comprehensive impact studies are necessary before adding wind turbines to real networks. In addition, users or system designers who plan to install or design wind turbines in networks must ensure that their systems perform well while meeting the requirements for grid interface. The work illustrated in this study may provide a reliable tool for evaluating the performance of VSWT driven synchronous generators and their impact on power networks in terms of dynamic behavior, and may therefore serve as a preliminary analysis for actual applications. The fault tests carried out have shown that the integration of this model can enhance transient stability.

WPP at bus 1

Figure 14. Real power for line fault near bus 7 with a WPP at bus 1

Figure 15. Reactive power for line fault near bus 7 with a WPP at bus 1


Figure 16.

[1] "20% Wind Energy by 2030 - Increasing Wind Energy's Contribution to U.S. Electricity Supply," U.S. Department of Energy, May 2008, DOE/GO-102008-2567.
[2] F. Milano, "PSAT, Matlab-based Power System Analysis Toolbox," 2002, available at
[3] Slootweg, H., "Wind Power: Modeling and Impact on Power System Dynamics," Ph.D. Thesis, Technical University Delft, Delft, the Netherlands, 2003.

Rotor angle deviation for line fault near bus 7 with a WPP at bus 1

Results obtained show that for the IEEE 9-bus system considered in


[4] Sauer, P. W.; Pai, M. A., "Power System Dynamics and Stability," ISBN 1-58874-673-9, Stipes Publishing L.L.C., Champaign, IL, 2006.
[5] E. Muljadi, T. B. Nguyen, M. A. Pai, "Impact of Wind Power Plants on Voltage and Transient Stability of Power Systems," IEEE Energy 2030, Atlanta, Georgia, USA, 17-18 November 2008.
[6] Samarasinghe, C.; Ancell, G., "Effects of large scale wind generation on transient stability of the New Zealand power system," IEEE Power and Energy Society General Meeting, July 20-24, 2008, Pittsburgh, PA.
[7] Petersson, A.; Thiringer, T.; Harnefors, L.; Petru, T., "Modeling and Experimental Verification of Grid Interaction of a DFIG Wind Turbine," IEEE Transactions on Energy Conversion, Vol. 20, Issue 4, December 2005, pp. 878-886.
[8] J. G. Slootweg, S. W. H. de Haan, H. Polinder, and W. L. Kling, "General model for representing variable speed wind turbines in power system dynamics simulations," IEEE Trans. Power Systems, vol. 18, no. 1, pp. 144-151, Feb. 2003.
[9] Tande, J.O.G.; Muljadi, E.; Carlson, O.; Pierik, J.; Estanqueiro, A.; Sørensen, P.; O'Malley, M.; Mullane, A.; Anaya-Lara, O.; Lemstrom, B., "Dynamic models of wind farms for power system studies - status by IEA Wind R&D Annex 21," European Wind Energy Conference & Exhibition, November 22-25, 2004, London, U.K.




IJITCE Feb 2011
International Journal of Innovative Technology and Creative Engineering, February issue, Vol. 1, No. 2
