IDC Technical Journal - Issue 3


Issue 03

January 2014

IDC Technical Journal
Where technology starts

In depth articles:

Mobile Ad-Hoc Networks – Introduction, Challenges and Limitations

Orthogonal Array Testing Strategy

SAML Security

App Reviews:

Remember the Milk – Never forget the milk (or anything else) again. Task management made easy.

Tasker – Extend control of your Android device and its capabilities using this automation tool.

Short Articles:

A Programmer's Experiences

Linux Advanced Routing: Setting up a Mixed Public-Private Network

Application Streaming

Troubleshooting:

Integrating Probe with Tomcat

Art of Linux Troubleshooting

Mobile | Security | Usability | Virtualization | Linux | Mobile Apps


EDITORIAL BOARD Jaimon Jose Pradeep Kumar Chaturvedi Shalabh Garg

CONTRIBUTING EDITORS Amudha Premkumar Anju Dagliya Archana Tiwari Bindu Nayar Jyotsana Kichloo Liza Abraham Madhuri Hebbar Sheela Vasudevan Srilatha Puthi Verulkar Rohan Vijay Kulkarni

COVER DESIGN Sandeep Virmani

BOOK COMPOSITION Arul Kumar Kannaiyan

About the Cover  Cloud computing has become a ubiquitous term for all off-premise services. The year 2014 will see a lot of growth in cloud, mobility and collaboration. In this issue's cover image, Sandeep Virmani depicts security as an inseparable component of Cloud, Mobility and Collaboration.


Contents

IDC Tech Journal Volume 3 – January 2014

InDepth Articles

Mobile Ad-Hoc Networks
Pallavi Ravishankar

Object Orientation
Rohit Kumar

Orthogonal Array Testing Strategy
Sridhar Kotha

Profile-Based Scale Testing
Ravella Raghunath

SAML Security
Suresh

Test Driven Development (TDD)
Ankur Kumar

Vagrant
Chendil Kumar

Writing for Users
Jyotsana Kitchloo

Miscellaneous

Bits & Bytes: Configuring GroupWise for Forums
Guruprasad S

Bits & Bytes: Multipath TCP, a hidden gem in iOS 7
Ramki K

Bits & Bytes: Simplefs
Sankar P

Editorial: Continuous Learning – Reinvent Yourself
Jaimon Jose

Reviews

Remember the Milk Application for Task Management
Ashwin Sunk

Tasker
K Vinayagamoorthy

WebServices Testing Using SoapUI
Girish Mutt

Short Articles

A Programmer's Experiences
Keshavan

Application Streaming
Raghu Babu KVM

Beyond BIOS with UEFI
Arun Prakash Jana & Faizal Saidalav

Challenges in testing Mobile Applications
Shivaji Nimbalkar

IPV6 LAB Setup
Shyrus Joseph

Linux Advanced Routing
Jaimon Jose

Mobile Device Management
Raghu Babu KVM

Troubleshooting

Art of Troubleshooting Linux Computers
Anant Kulkarni

Integrating Probe with Tomcat
Lavanya

Performance Testing Methodology for Java-based Applications
Shammi Kumar Jada


EDITORIAL

Continuous Learning – Reinvent Yourself

"Those people who develop the ability to continuously acquire new and better forms of knowledge that they can apply to their work and to their lives will be the movers and shakers in our society for the indefinite future."
– Brian Tracy

Innovation was the focus for our last edition. We looked at the importance of innovation at the work place to remain successful in the fast-paced business environment. “Continuous Learning” is the most important factor to remain innovative in the current business environment where technology is changing faster than a traditional organization can adapt to.

To stay competent and ahead in business, an organization must foster a culture with a well-defined process of continuous learning and development. Studies indicate that teams that learn quickly are more adaptive than teams that are slow to learn. Organizations must create processes and an environment that unlock each individual's potential by removing the boundaries that limit performance and productivity.

As an individual, the benefits of continuous learning include improved performance, more career flexibility, higher confidence and motivation to perform at different levels, and more creativity. With greater knowledge and experience, you will have more tools in your arsenal to play with and be innovative.

We are fortunate to be in an era where we have abundant tools and resources to aid us with our learning process. There are plenty of informal and formal learning opportunities. It is worth noting that the learning technology market is over $5 billion today.

Informal learning

Informal learning can be considered as continuous learning that takes place independent of instructors or formal courses, through social, collaborative or on-demand tools that allow learners and experts to share content with each other. For example:

Books and articles
Seminars and workshops
Blogs, mailing lists, online videos and news aggregators
and many more…

Formal Training Methodologies

Learning sessions that are time bound and have a certified outcome are categorized as formal training. These could be classroom-based, instructor-led training sessions or online sessions (short term or long term). Formal training helps when one needs a deeper understanding or wants to become an expert in a domain or subject before actually experiencing it. For example:

Class room training
Online courses

Above all, the real learning happens when you share your knowledge with your co-workers and teams. You can take your learning experience to the next level when you participate in a conference as a speaker, meet people and create your network. Finally, take some time off to innovate. It could be to build a prototype, demo it, and write about your experience and learning.

Jaimon Jose


Mobile Ad-Hoc Networks

Introduction, Challenges and Limitations

Pallavi Ravishankar  A research aspirant with a Bachelor's degree in Computer Science from Dayananda Sagar College of Engineering, Bangalore. Her areas of interest are complex social networks, cryptography and ad-hoc networks. She has been working for Novell since July 2013.

Mobile technology has become an indispensable part of our life today. Gone are the days when people were chained to their desks or had to lug around bulky computers. Be it finding a restaurant half a mile away, talking to our loved ones who live hundreds of miles away, or even a mundane task such as paying a utility bill, the world is literally at our fingertips with the advent of mobile technology! However, such convenience and uninterrupted communication requires a robust infrastructure: super-fast switches, servers, the internet, technology like VoIP, base stations, and so on. This costs millions.

Destruction of this robust, reliable and secure infrastructure by any calamity or disaster renders the whole network useless and leaves mankind "communication-crippled". The tsunami-hit zones of Southern India and the hurricane-hit areas of America are prime examples of such destruction and communication blackouts. People were helpless and had no means to communicate for hours! Rescue operations came to a complete standstill, and locating people amidst all the destruction became a mammoth task. The solution to such hapless situations is to use MANETs. It is also interesting to note that the growth in the number of people using mobile devices over the past decade has been exponential. Currently, most wireless connections occur over fixed-infrastructure cellular 3G/4G networks, or from laptops and tablets connected to the internet over access points. This demand is only going to increase further. In such situations of radically rising demand, or in locations where setting up infrastructure is geographically difficult, MANETs offer a cost-efficient, handy solution.

So what are MANETs? A Mobile Ad-Hoc NETwork, often abbreviated as MANET, is a self-configuring network of mobile devices connected by wireless links. It is an ad-hoc network: it neither relies on any pre-existing infrastructure nor uses the access points of any managed wireless infrastructure. An ad-hoc network is one in which all devices have equal status and are free to associate with any other wireless device in range (in accordance with the IEEE 802.11 protocol). In short, MANETs are complex distributed systems consisting of wireless mobile nodes that can freely and dynamically self-organize into arbitrary and temporary, "ad-hoc" network topologies, allowing people and devices to seamlessly internetwork in areas with no pre-existing communication infrastructure.

Features of MANETs

Some of the key features of MANETs are:

Each device in the network functions as a node as well as a router.




The devices which constitute a MANET use technologies like Bluetooth or infrared to communicate with each other.
The network has a dynamic topology; it constantly changes as nodes move in and out of range.
Deployment of this kind of network is cost-efficient, speedy and easy.
It is a multi-hop network.
MANETs do not require any kind of fixed infrastructure.

Challenges in MANETs

There are several challenges when it comes to MANETs. They have extremely dynamic topologies, which makes routing a challenge. Several algorithms are available for routing in MANETs, such as TORA, AODV, DSDV and OSPF. Another area of concern in MANETs is Quality of Service (QoS). Several techniques have been proposed for improving QoS; chief among them are clustering, resource reservation, traffic classification, buffer management and admission control. Simplicity of deployment and a significant improvement in QoS make clustering a popular choice. Ant Colony Optimization (ACO) is one of the best algorithms for path finding. It is based on the real-world behavior of ants, especially the method they use to find their food. ACO falls under the category of swarm intelligence, and research has shown that such algorithms are excellent for path finding (and thereby for optimizing routing). This article proposes a technique for improving QoS in MANETs by deploying a clustering algorithm coupled with the use of ACO for routing.

Proposed Technique


ACO


Studies indicate that swarm intelligence algorithms are excellent for path finding. One such beneficial algorithm is ACO, a table-driven routing protocol that mimics the method ants use to gather food. Ants deposit a chemical called pheromone as they roam around looking for food. Other ants find the path to the food by navigating along the path with the greatest pheromone concentration. The algorithm has the following phases: Route Discovery, Route Maintenance and Route Error.

Route Discovery

The routing table of any node in ACO consists of the destination address, the next hop and the pheromone value. The source node checks whether it has a path to the destination in its routing table. If it finds one, it forwards the data packets along this path. Otherwise, it initiates the flooding of FANTs (Forward Ants) to all its neighbors, which navigate the network depositing pheromones along the path (weights for each path). A node interprets the source address of the FANT as the destination address and the address of the previous node as the next hop, and computes the pheromone value depending on the number of hops the FANT took to reach the node. When different FANTs reach the destination through different routes, the destination sends a Backward Ant (BANT) for each of them, incrementing the pheromone value. When the sender receives the BANT from the destination node, the path is established and data packets can be sent.
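To make the bookkeeping concrete, here is a minimal Java sketch of a node's ACO routing table as described above. The class and method names, and the pheromone formula (inverse of the hop count), are illustrative assumptions rather than part of any specific MANET implementation; the evaporation step anticipates the maintenance phase described next.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of ACO route bookkeeping on a single node.
class AcoNode {
    // Routing table entry: next hop towards a destination, plus its pheromone value.
    static class Entry { String nextHop; double pheromone; }

    private final Map<String, Entry> routingTable = new HashMap<>();

    // Called when a FANT flooded by 'source' arrives via 'previousNode' after 'hops' hops.
    // The FANT's source address is recorded as a destination we now know a route back to.
    void onForwardAnt(String source, String previousNode, int hops) {
        double pheromone = 1.0 / hops;                 // fewer hops, stronger pheromone (assumed formula)
        Entry entry = routingTable.get(source);
        if (entry == null || pheromone > entry.pheromone) {
            entry = new Entry();
            entry.nextHop = previousNode;
            entry.pheromone = pheromone;
            routingTable.put(source, entry);
        }
    }

    // Data packets follow the strongest pheromone track; null means no route yet,
    // which is when the node would initiate FANT flooding of its own.
    String nextHopFor(String destination) {
        Entry entry = routingTable.get(destination);
        return entry == null ? null : entry.nextHop;
    }

    // Route maintenance: pheromone evaporates over time, so stale routes fade away.
    void evaporate(double factor) {
        routingTable.values().forEach(e -> e.pheromone *= factor);
    }
}
```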

Route Maintenance

ACO does not need special packets for route maintenance. Route maintenance is done by the regular data packets with the help of the pheromone tracks established between the source and destination nodes. The evaporation of real pheromone is modeled by decreasing the pheromone values.

Route Error

Route errors are modeled by paths that have little or no pheromone concentration. ACO is adaptive and generates multi-path routes. It is capable of adapting to the dynamic topology of the network efficiently, with no significant dip in performance. It establishes paths for load balancing rather than just the shortest path, which makes it an excellent algorithm. Its fast route recovery, distributed nature, fault tolerance and speed make ACO well suited for deployment in MANETs.

WCA (Weighted Clustering Algorithm)

Several algorithms are available for clustering in MANETs; the one chosen here is the Weighted Clustering Algorithm (WCA). The reason for choosing this algorithm is that it takes the major factors into account: degree, distance, mobility and power, all with equal weight. After calculating the weight, it elects the node with the least weight as the cluster head. These factors account for all the major concerns when it comes to MANETs, making it the optimal choice.

Figure: Steps involved in the proposed technique for the optimization of QoS

The diagrammatic representation of the proposed solution is shown above. The first step of the solution involves using the WCA to divide the entire network into clusters. Only the cluster head is always alive; the rest of the nodes in the cluster come alive only when the cluster head has to communicate. All communication in the entire MANET has to happen through a cluster head. One or more nodes can serve as gateways between cluster heads. The clustering is periodically redone to eliminate data loss due to decreasing power levels of the cluster head or nodes moving into or out of the network. When a source node wants to send data to a destination, it first sends the data to its cluster head. The cluster head then uses ACO to find a path to the destination by looking for other cluster heads. This communication between cluster heads can be regarded as multicast once all the cluster heads are located by the ACO algorithm. This constitutes the second step of the proposed solution. The final step is the delivery of the message to the destination. When a given cluster head has the destination node as its cluster member, it sends a BANT (in accordance with ACO) acknowledging that the destination has been found. It then broadcasts the data to all its cluster members, thus ensuring proper QoS (confirming that the destination has received the message from the source).

Conclusion

Though the challenges in the field of MANETs are enormous, the need for deploying such networks is mounting. From military battlefields to emergency evacuations, from PANs to setting up an instant and temporary network of mobile devices, MANETs have diverse applications. The need of the hour is to enhance QoS, optimize routing techniques and improve the security infrastructure in MANETs. All of these are ongoing areas of research, and overcoming some or all of the limitations of such networks is essential to the large-scale deployment of MANETs. This article is a small step in this direction, proposing a technique to enhance QoS in MANETs. However, it is not without limitations, the biggest of which is security: the proposed mechanism has no security infrastructure, assumes implicit trust and disregards the presence of malicious nodes. Further work in the aforementioned areas may very well lead to widespread deployment of such infrastructure-less networks.

References

1. Jeroen Hoebeke, Ingrid Moerman, Bart Dhoedt and Piet Demeester, "An Overview of Mobile Ad-Hoc Networks: Applications and Challenges".
2. Carlos de Morais Cordeiro and Dharma P. Agrawal, "Mobile Ad Hoc Networking".
3. Mesut Güneş, Martin Kähmer and Imed Bouazizi, "Ant-Routing-Algorithm (ARA) for Mobile Multi-hop Ad-hoc Networks: New Features and Results", Proceedings of the Med-Hoc-Net Workshop, June 2003.
4. Ratish Agarwal and Dr. Mahesh Motwani, "Survey of Clustering Algorithms for MANET", International Journal on Computer Science and Engineering, Vol. 1(2), 2009, 98-104.

Did you ever wish you had a scheduler for your social media sharing? The good news is that someone has already thought about that. Buffer shares your content at the best possible times throughout the day so that your followers and fans see your updates sooner: http://bufferapp.com





Reviews

Remember the Milk Application for Task Management

– Ashwin Sunk

Are you finding task management difficult? Here is an app that will help you manage tasks on the go. "Remember the Milk" (RTM) is an application for task and time management. RTM lets you manage tasks on the go through its mobile app, and also through a computer connected to the web. RTM can be used to manage tasks online as well as offline. Because tasks sync across devices, no additional effort is required to keep them up to date everywhere (the sync interval depends on whether you use the free or the professional version).


RTM allows you to create multiple task lists. Tasks can be edited to include various fields and can be grouped into lists. By default, these fields include list group, priority and due date. Additional fields include time estimate, tags, location and URL. Here is a use case of RTM.


It’s not easy for John to remember all the tasks to be completed in office and home because the tasks are spread over different days of week and month. RTM comes in handy to schedule the tasks and plan for day-to-day activities. John maintains two tags for all the tasks, “office” and “personal”. All the office related

activities are tagged with office and personal work activities are tagged with personal. At the start of every day, John plans the activities for the day, adds all the tasks into the RTM with office tag and schedule to be completed today. If any priority task comes up in the middle of the day, then John reprioritizes other activities and reschedules it for the other days. John has set up an e-mail notification reminder in RTM that is triggered at the end of the day listing all the pending tasks. This will help John to plan better for the next day. Also, it helps in creating the work report for the week. For personal tasks, John adds all the to-do activities under personal list. At the start of every day, the personal tasks are visible along with the office tasks. This will help in better planning for the day. For John, shopping is an important personal activity. He creates a list with the shop name and saves all the shopping items under this list. Now, if John wants to buy these items over the weekend, RTM will trigger a reminder on the weekend for the items to be shopped. Thus, RTM helps John being more productive and organized because the week days and weekend tasks can be well planned.


Object Orientation
Maintaining a great relationship among objects

Rohit Kumar  Rohit is working as a Specialist in Novell. He is a member of the architect group for the ZENworks product.

Abstract

The primary design decisions in any object oriented software development are to:

Define what constitutes the different classes
Define the relationships between all the classes

With a comprehensive understanding of the requirements, it is intuitive to come up with a software component design having a set of classes and relationships that is in coherence with the objects in the problem space. The design may be perfect and the software may work great to begin with. But any software lives for long after it is created and requires a lot of changes throughout its life to support new platforms and features. So the design of classes and their relationships should not only be in coherence with the objects in the problem space; it should also be flexible enough to support extensibility and maintainability. A software component should be designed in such a way that it is possible to extend it without destabilizing the existing functionality. Class relationships are the most fundamental and one of the most important decisions driving the extensibility and maintainability of a design. This article gives an insight into the fundamentals of class relationships and how to achieve design flexibility.

Detailed description

In the procedural programming paradigm, a function or procedure defines a behavior. To get this behavior in an application, the procedure can simply be called, if it is available in the same process space. Similarly, in the object oriented programming paradigm, a function or method of a class also defines behavior. But to get that behavior in another class, unlike the procedural paradigm, it is not possible to directly call the method. To do that, the two classes must first be related.

Here are the different ways in which two classes can be related:

Is-A: should be read as 'class A Is-A class B'. In this case any method of class B can be called from class A, as if the method were available in class A itself.
Has-A: should be read as 'class A Has-A class B'. This means class A has an object of class B as a member variable. In this case any method of class B can be called from class A using the class B member variable.
Uses: if class A Uses class B, it means methods in class A have a local instance of class B, which is used to call methods of class B. This should be used only for a localized need.

Two classes can be related in any of these ways, and it is possible to access the behavior of one class from another. So the question is: which relationship to choose? One basis is to relate classes based on the relationship of the corresponding objects in the problem space. But that isn't enough, as it doesn't take extensibility and maintainability into consideration. These are important


considerations, as any software needs a lot of changes throughout its life to support new platforms and new features. Let's begin with a problem statement and analyze its evolving design with a focus on maintenance and extensibility. These evolving designs will also unravel any confusion on the usage of the different types of class relationships.

Problem Statement:  Design readers which should be able to read from a file, a socket or a pipe. They should be able to read byte-by-byte or read a number of bytes together and return them. For simplicity, let's limit our requirement to just these methods.

Design #1:  In this design the read() method in the InputStream class is left for subclasses to implement, and the read(buffer : byte[]) method is implemented in the InputStream class using the read() method. Each subclass implements the read() method. The sketch below shows how these classes might be used.
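The article's original diagrams and pseudo code are not reproduced here; the following is a minimal Java sketch of what Design #1 could look like. The class names follow the article (they are modeled loosely on java.io but are self-contained here), and the byte-array-backed FileInputStream body is purely illustrative.

```java
// Design #1 sketch: read() is abstract; read(byte[]) is written once on top of it.
abstract class InputStream {
    // Returns the next byte of data, or -1 at the end of the stream.
    public abstract int read();

    // Code reuse through Is-A: every subclass gets this method for free.
    public int read(byte[] buffer) {
        int count = 0;
        while (count < buffer.length) {
            int b = read();
            if (b == -1) break;
            buffer[count++] = (byte) b;
        }
        return count == 0 ? -1 : count;
    }
}

// A source-specific stream only has to implement read().
class FileInputStream extends InputStream {
    private final byte[] fileData;   // stands in for a real file handle in this sketch
    private int position = 0;

    FileInputStream(byte[] fileData) { this.fileData = fileData; }

    @Override
    public int read() {
        return position < fileData.length ? (fileData[position++] & 0xFF) : -1;
    }
}

class Design1Demo {
    public static void main(String[] args) {
        InputStream in = new FileInputStream("hello".getBytes());
        byte[] buffer = new byte[4];
        int n = in.read(buffer);               // read(byte[]) inherited from InputStream
        System.out.println(n + " bytes: " + new String(buffer, 0, n));
    }
}
```

SocketInputStream and PipeInputStream would look just like FileInputStream, with a different read() body for their own source.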

It's a good design which uses the Is-A relationship for code reuse. This makes the design extensible: a new class ByteArrayInputStream can be added which has an Is-A relationship with InputStream, and it automatically has the read(buffer) method available. The design is maintainable as it avoids duplicating the read(buffer) method in each class; any future change in this method requires a change only in the InputStream class.

Knowledge Bit #1
A Is-A B means class A can reuse code from class B. This also provides extensibility and maintainability.

Issues:  Different applications start using this package for reading from different sources. There are applications which read data of huge size; as each byte requires one read call, the perceived read performance is slow. The other issue observed is usability: the existing design provides methods to read a byte or an array of bytes, whereas a typical application uses different data types (e.g. int, boolean) and not just bytes.

Design #2:  The solution to the performance problem is to implement buffering in the read methods. The buffer is maintained in the class. Each time the read method is called, it looks into the buffer and returns data from there; if the data is not available in the buffer, it fills the entire buffer instead of reading just one byte. Changing the same read method in each class may not be advisable, as some applications or customers may not want the overhead of extra memory being used as an internal buffer in every class. A sketch of this design follows.
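Continuing the earlier sketch, this is one hedged way Design #2 might look: buffering added through Is-A by subclassing the base stream. The BufferedFileInputS name comes from the article; the refill logic is illustrative (in a real stream it would be a single bulk read from the file).

```java
// Design #2 sketch: buffering through Is-A. BufferedFileInputS Is-A FileInputStream.
class BufferedFileInputS extends FileInputStream {
    private final byte[] buffer = new byte[8192];
    private int count = 0, position = 0;

    BufferedFileInputS(byte[] fileData) { super(fileData); }

    @Override
    public int read() {
        if (position == count) {
            // Refill the buffer from the underlying file stream.
            count = 0;
            int b;
            while (count < buffer.length && (b = super.read()) != -1) {
                buffer[count++] = (byte) b;
            }
            position = 0;
            if (count == 0) return -1;        // end of stream
        }
        return buffer[position++] & 0xFF;
    }
}

// Type compatibility means callers only change the class they instantiate:
//   InputStream in = new BufferedFileInputS(data);   // was: new FileInputStream(data)
// The catch: SocketInputStream and PipeInputStream each need their own Buffered*
// and Data* subclasses too, which is the class-proliferation problem discussed next.
```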


The above design follows Knowledge Bit #1 and overcomes the design issues discussed above. We use the Is-A relationship to create buffered and data streams. The pseudo code for using BufferedFileInputS shows that the only change required from diagram 1 is to instantiate BufferedFileInputS instead of FileInputStream. This is possible because the Is-A relationship provides type compatibility. With this, existing applications have to make minimal changes to start using buffered reads.

Issues:  Now there are two different kinds of streams in the design. One type reads from different sources and implements the read method, e.g. FileInputStream, SocketInputStream and PipeInputStream; we will call them base stream classes. The other type works with base streams and provides extra functionality on top of them, e.g. BufferedFileInputS, DataFileInputS; we will call them decorator stream classes. We realize that to add any new base stream class, we need to create corresponding decorator stream classes as well. Things get more complicated when all these decorator stream classes need to work together, and this leads to an explosion in the number of classes. This is an extensibility problem in the design. It also leads to a maintainability problem, as changing the readBoolean method would require a change in all data-related decorator stream classes. This problem is known as Class Proliferation.

Knowledge Bit #2
The Is-A relationship provides type compatibility, which allows the implementation behind a variable to be swapped easily (see the FileInputStream usage difference between diagram 1 and diagram 2). Extensive use of Is-A for code reuse may lead to problems like class proliferation.

Design #3:  When we look at it closely, we realize that all buffered and data decorator stream classes do similar things. All buffered decorator stream classes read into a buffer, and all data stream classes use the read() method from the base stream classes and construct different data types based on the method being called, e.g. readInt() or readBoolean(). So quite clearly, the design should avoid repetitive code. This design solves the problem of class proliferation: here we use the Has-A relationship instead of Is-A. Adding a new base stream class does not require the addition of new decorator stream classes; the existing BufferedInputStream and DataInputStream classes can easily be modified to start working with the new base stream class.

Knowledge Bit #3
A Has-A B means class A can reuse code from class B. This provides extensibility and maintainability, and also solves the problem of class proliferation.

Issue:  When a new base stream class is added, all the decorator stream classes need modification to add a reference to this new class, which is a maintenance problem.

Design #4:  This design solves the maintenance problem. In the new design, all the decorator stream classes keep a reference to InputStream instead of to each base stream class. So when a new base stream class is added, it is automatically usable by the decorator stream classes through the InputStream reference. This design also allows all the decorator stream classes to work together, e.g. we can create a DataInputStream which uses a BufferedInputStream, which internally uses a FileInputStream. It's a very flexible design! A sketch follows below.
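Here is a hedged sketch of Design #4, again reusing the InputStream and FileInputStream classes from the earlier sketches. Both decorators hold a reference to the abstract InputStream type (Has-A), so they work with any base stream and can be stacked; the names follow the article, the bodies are illustrative.

```java
// Design #4 sketch: decorators wrap any InputStream instead of subclassing each base stream.
class BufferedInputStream extends InputStream {
    private final InputStream source;            // Has-A: any base stream or another decorator
    private final byte[] buffer = new byte[8192];
    private int count = 0, position = 0;

    BufferedInputStream(InputStream source) { this.source = source; }

    @Override
    public int read() {
        if (position == count) {                 // buffer exhausted: refill in one bulk read
            count = source.read(buffer);
            position = 0;
            if (count == -1) { count = 0; return -1; }
        }
        return buffer[position++] & 0xFF;
    }
}

class DataInputStream extends InputStream {
    private final InputStream source;            // Has-A: works with any InputStream

    DataInputStream(InputStream source) { this.source = source; }

    @Override
    public int read() { return source.read(); }

    // Builds an int out of four bytes from the underlying stream
    // (no end-of-stream handling in this sketch).
    public int readInt() {
        return (read() << 24) | (read() << 16) | (read() << 8) | read();
    }
}

class Design4Demo {
    public static void main(String[] args) {
        byte[] raw = {0, 0, 1, 44};              // the int 300 in big-endian order
        DataInputStream in = new DataInputStream(
                new BufferedInputStream(new FileInputStream(raw)));
        System.out.println(in.readInt());        // decorators stack freely: prints 300
    }
}
```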



Knowledge Bit #4
A Has-A B means class A can reuse code from any class of type B. This provides greater extensibility and maintainability.

Issue:  If a new decorator stream class is added which provides extra functionality on top of the basic stream classes, the designer of the new decorator stream class needs to keep a reference to InputStream. A good design should hide these complexities for better extensibility.

Design #5:  This design makes a further improvement over the existing one. With the new design, a new decorator stream class can be added by just extending FilterInputStream, and it automatically has a reference to the InputStream class, and thereby to all the base and decorator stream classes.

Knowledge Bit #5
A Has-A B means any class of type A can reuse code from any class of type B. This provides greater extensibility and maintainability.

Conclusion

Is-A provides type compatibility, and Has-A provides reuse of the implementation behind a variable. The right use of both together yields a highly maintainable and extensible design.

Good Design is not accomplished by Chance! Fundamental concepts provide the basis for getting it right!


Orthogonal Array Testing Strategy

Abstract

Testing software is complex because it is difficult to model the product in a way that makes it easy to understand its behavior under various conditions, with various factors affecting the software's behavior. Optimizing software testing is a huge challenge. For complex projects that need to be delivered in compressed time frames, we need statistical testing techniques to ensure sufficient test coverage and reliable results. Software component interactions and integrations are a major source of defects, and most of these defects arise from simple pair-wise interactions. With so many possible combinations of components or settings, it is easy to miss one. The Orthogonal Array Testing Strategy (OATS) is a proven technique, especially for integration testing of software components. OATS can be used to reduce the number of combinations and provide maximum coverage with a minimum number of test cases. Endlessly executing tests takes too much effort to find defects and does not increase confidence in the system. Executing a concise, well-defined set of tests that uncovers most of the defects is a wise approach and a cost-saving technique.

OATS techniques create efficient and concise test sets with fewer test cases than testing all combinations of all variables and their dependencies. You can create a test set that has an even distribution of all pair-wise combinations.

What is an Orthogonal Array?

The Orthogonal Array Testing Strategy, created by Dr. Genichi Taguchi, is a proven, systematic, statistical way of testing pair-wise interactions. An orthogonal array is an array of values in which:

Each column is an independent variable to be tested for interaction, represented as a "Factor".
Each Factor can take a certain set of values called "Levels".
Each row represents a test case/combination.

What does OATS do?

Guarantees testing of pair-wise combinations of all selected variables.
Creates an efficient test suite with fewer tests than all combinations of all variables.
Distributes all variables evenly.
Is simple to generate and less error prone.

Sridhar Kotha  An incisive IT professional with over 14 years of extensive work experience in developing test automation frameworks, performance testing automation, testing and automation of routers for security and VPN (networking devices), and automation of mobile software. He has worked in companies like Yahoo, Nokia, Azingo and Wipro, and has been working as a Specialist in FNS QA for the past 2.4 years.


The Factors and their Values for the printer test set are:

Printer Type: LPR Printer, Secure Printer, Audit Printer, Normal Printer, Direct Printer
Printer Driver: HP, XEROX, CANON
Client Platform: Win 7, Win 8, Win XP, Linux, MAC
Print Job: Pause Job, Print Job, Resume Job, Purge Job, List Job
Driver Store: Remote, Local
Driver Operation: Upload, Download

The Taguchi Approach

Taguchi suggests five major steps in the designing process. The steps take the test team from formulating the test problem to creating a good test design and refining that design. The steps are as follows.

Applying the Technique

STEP 1:  Identify the Factors and Values for generating the test set.

STEP 2:  Identify any dependency of values between the Factors, to avoid incorrect combinations.

STEP 3:  Input the parameters as Factors and Values into an OA tool that generates the orthogonal array test set. Using the tool reduces all combinations (1,500 tests) to 25 tests.

STEP 4:  Analyze the test set generated to ensure that it covers the required cases. If not, modify the Factors and Values to generate another set; this may be done by splitting the Factors and generating two different sets. Add any manual cases which are not part of the test set, and incorporate any boundary values for parameters which need to be tested.

STEP 5:  Generate the test spec in the required format using the script functionality of the tool.

The same Factors and Values, when analyzed using another tool named 'All Pairs', resulted in 42 test cases.

Optimized test table using Orthogonal Array Tool – rdExpert

Why Orthogonal Array Testing Strategy?

Identify a minimum set of tests that will find all multimode defects (particularly serious defects).

Clarification:

All single-mode defects can be found if every option is tested at least once (unit elements).
2-way or "pair-wise" is a combination of 2 items (parameters) that cause a defect.
3-way or "tri-wise" is a combination of 3 items (parameters) that cause a defect.
"Pair-wise" defect detection (pair-wise testing) finds most defects; for example, testing all pairs typically finds 75% of defects (Source: Kuhn).
A NASA Deep Space Mission study showed that 88% of bugs were discovered using "pair-wise" defect detection testing.
A U.S. Food and Drug Administration study showed that 98% of bugs were discovered using "pair-wise" defect detection.
Source: 27th NASA/IEEE Software Engineering Workshop, NASA Goddard Space Flight Center, 4-6 Dec 2002.
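For readers without access to a tool like rdExpert or All Pairs, the following Java sketch illustrates the pair-wise idea using the factors and values from the table above. It is a naive greedy generator, not a true orthogonal array, so it will not reproduce the 25-test result; it simply shows why a set covering every pair of values is so much smaller than the 1,500 exhaustive combinations.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Random;
import java.util.Set;

// Greedy pair-wise test set sketch (illustrative, not an orthogonal array).
class PairwiseSketch {
    public static void main(String[] args) {
        String[][] factors = {
            {"LPR", "Secure", "Audit", "Normal", "Direct"},       // Printer Type
            {"HP", "XEROX", "CANON"},                             // Printer Driver
            {"Win 7", "Win 8", "Win XP", "Linux", "MAC"},         // Client Platform
            {"Pause", "Print", "Resume", "Purge", "List"},        // Print Job
            {"Remote", "Local"},                                  // Driver Store
            {"Upload", "Download"}                                // Driver Operation
        };

        // Every pair of values from any two different factors must be covered.
        Set<String> uncovered = new HashSet<>();
        for (int a = 0; a < factors.length; a++)
            for (int b = a + 1; b < factors.length; b++)
                for (String va : factors[a])
                    for (String vb : factors[b])
                        uncovered.add(a + "=" + va + "|" + b + "=" + vb);

        List<String[]> tests = new ArrayList<>();
        Random random = new Random(42);
        while (!uncovered.isEmpty()) {
            // Try a batch of random candidate tests; keep the one covering the most new pairs.
            String[] best = null;
            int bestGain = -1;
            for (int i = 0; i < 100; i++) {
                String[] candidate = new String[factors.length];
                for (int f = 0; f < factors.length; f++)
                    candidate[f] = factors[f][random.nextInt(factors[f].length)];
                int gain = newlyCovered(candidate, uncovered).size();
                if (gain > bestGain) { bestGain = gain; best = candidate; }
            }
            if (bestGain > 0) {                   // only keep tests that cover something new
                uncovered.removeAll(newlyCovered(best, uncovered));
                tests.add(best);
            }
        }
        System.out.println("Exhaustive combinations: 1500, pair-wise tests: " + tests.size());
        tests.forEach(test -> System.out.println(String.join(", ", test)));
    }

    private static Set<String> newlyCovered(String[] test, Set<String> uncovered) {
        Set<String> covered = new HashSet<>();
        for (int a = 0; a < test.length; a++)
            for (int b = a + 1; b < test.length; b++) {
                String pair = a + "=" + test[a] + "|" + b + "=" + test[b];
                if (uncovered.contains(pair)) covered.add(pair);
            }
        return covered;
    }
}
```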


What works and what doesn't (limitations)

Applying OATS manually is not advisable.
It is easy to focus the testing effort on the wrong area of the application.
It is easy to pick the wrong parameters to combine.
An orthogonal array only tests the most optimal combinations, not all of them.
Use your testing skills, expertise and experience to improve the test cases produced by these methods.

Benefits of OA

Implementation time is less: With fewer test cases, implementation time goes down. Even though the test suite is smaller, the combinations are optimal and cover the basic feature completely.
Execution time is less: As there are fewer test cases in the new test suite, execution time is also less.
Result analysis takes less time: The time taken to compare the result files with the standard files is less, which reduces the overall test execution time.
Increase in overall productivity: All the above points provide evidence that the OA methodology increases overall productivity, as fewer test cases are written and implementation, execution and result analysis all take less time than with other methods. In this way, the OA concept increases overall productivity by 40%-50%.
High code coverage: As evidenced in the join index experience, the OA methodology covers close to 95% of the feature code compared to the usual methods, with fewer test cases and less execution time. For the remaining 5% of the code, tests have to be written manually. In this way, close to 100% of the feature code is covered.

Conclusion

"We can save time and effort and find bugs efficiently by testing variables and values in combination." It is beneficial for software product testing to make use of orthogonal arrays. The only constraint is the applicability of the concept to certain types of products. There are three areas in product life cycle management where we can benefit:

Test case development for a new feature/product
Redesigning the existing test cases
Sustenance of the test cases thus developed

The most critical portion of the adoption process is understanding the product and modeling the product/feature as parameters and levels. The care and effort put into developing this model will drive the success of the test suite design.

References

"Orthogonal Arrays: Theory and Applications", A. S. Hedayat, N. J. A. Sloane and John Stufken (a good book on OA)
http://www.phadkeassociates.com/index_rdexperttestplanning.htm
http://www.pairwise.org/tools.asp
http://www.combinatorialtesting.com/clear-introductions-1
http://www.slideshare.net/Softwarecentral/test-smarter-with-pairwise-testing

Many of you store your personal or professional documents in the cloud, in Google Drive, DropBox, box.net or SkyDrive. We know that these services do peek into your files. What if we could add a layer of security without losing the flexibility of accessing these files from any of your devices? Here are two solutions that you can use. These technologies encrypt your files on the device, not in the cloud: https://boxcryptor.com  http://www.viivo.com



Reviews

Tasker

– K Vinayagamoorthy

Problem 1: Repetitive Manual Tasks

I need to make sure that the following states are maintained at the following locations. I have to set these manually, without fail, whenever I am in these locations.

Location       Silent mode   Wi-Fi
At home        Off           On
At office      Vibrate       On
On the road    Vibrate       Off

Solution  In Tasker, I set up profiles to detect my location automatically and change the settings accordingly. For example, if I am within a 300 meter circle around my home, switch to the Home profile. If I am within a 300 meter circle around my office, switch to the Office profile. If I am in neither, switch to the Road profile.

Problem 2: Battery Drain and Unnecessary Data Consumption

Most smartphones sync data automatically to and from the cloud or specific servers so that we can access the data at once. But this can drain the battery and also consume a lot of precious mobile/3G data bandwidth. Especially at night, when we know we won't need it, we can disable syncing and re-enable it in the morning. However, this has to be done manually.

Time          Auto-Sync (mails and data)   Wi-Fi        Mobile Data   Notification Volume   Ringer Volume
At 10.15 pm   Switch off                   Switch off   Switch off    0                     3
At 05.15 am   Switch on                    Switch on    Switch on     3                     7

Solution  As mentioned above, I created a profile which disables or weakens a specific state automatically at 10.15 p.m. (people can only reach me through a phone call at this time) and re-enables all of them at 5.15 a.m. All the data will be synced and ready when I wake up at 5.20 a.m.

Problem 3: Need to Set Up Irregular Alarms

Generally, most of us wake up at a fixed time on weekdays. So, we can set up a fixed alarm for weekdays and let it repeat every day. However, on the night before a holiday that is not a weekend, I would have gone to sleep thinking I can wake up late the next morning, and my sincere smartphone wakes me up at 5.20 am anyway.

Solution  Though I know it is possible to solve this problem using Tasker, I am yet to solve it.

Conclusion

The tasks mentioned above are just a small fraction of what is possible with Tasker. There are ways to have our own Python or other scripts executed on specific conditions in scenarios similar to the above. Be aware that Tasker is a paid app (around Rs 200) and it requires you to invest some time to understand how to use it. This page has enough information to get started: http://www.pocketables.com/2013/03/overview-of-pocketables-tasker-articles.html. Try it and have fun.


Short Article

Beyond BIOS with UEFI

Introduction

UEFI (Unified Extensible Firmware Interface) is a technology that was originally initiated by Intel and is currently maintained by the UEFI Forum. It is intended to replace the legacy BIOS as the software interface between the OS and the platform firmware. UEFI provides a modern, well-defined environment for booting an OS and running pre-boot applications. As of today, most UEFI images provide legacy support for BIOS services. The UEFI specification defines a new model for the interface between the OS and the firmware, through data tables that contain platform-related information, along with boot and runtime services.

Why UEFI?

As PC technology has advanced, more features that require BIOS support have emerged, such as remote security management, temperature and power monitoring, and processor extensions such as virtualization and Turbo Boost. The BIOS is a 16-bit system with very limited integration with the hardware and operating system, and it can access a maximum of only 1 MB of memory. It has become increasingly difficult to accommodate everything we expect from a modern computer within the old BIOS framework and its inherent limitations. UEFI also deals with the limitations of MBR-based disk partitioning through GPT (the GUID Partition Table).

UEFI Simplified

UEFI has enough features to be considered a minimal operating system that sits on top of the firmware, unlike the PC BIOS, which is squeezed inside it, and it has enough potential to provide the functionality of a "true" OS. It can access all the memory installed in a system and make use of its own disk storage space: a hidden area of onboard flash storage or hard disk space called the EFI System Partition. Its capabilities can be extended with new modules, just like a software framework; for example, device drivers for motherboard components and external peripherals, and user options presented in an attractive graphical front-end controlled with the mouse. If the hardware comes with a touchscreen, it is possible to change system settings by swiping and tapping. On desktops or laptops, you can access the UEFI settings just like the BIOS settings screens.

UEFI Design: From Limitations to New Possibilities

The design of UEFI is modeled on the following fundamental elements:

System Partition:  The System Partition is a partition with a file system designed to allow safe sharing between multiple vendors, and for different purposes. This separate, sharable system partition increases platform value-add without significantly growing the need for non-volatile platform memory.

Boot Services:  Boot services provide interfaces for devices and system functionality that can be used during boot time. Device access is abstracted through "handles" and "protocols". This facilitates the reuse of investment in existing BIOS code by keeping the underlying implementation requirements out of the specification, without burdening the consumer accessing the device.

Runtime Services:  A minimal set of runtime services is presented to ensure appropriate abstraction of the base platform hardware resources that may be needed by the OS during its normal operations.

In a UEFI specification-compliant system, these components interact to accomplish platform and OS boot: the platform firmware retrieves the OS loader (think of it as the first-stage bootloader) image from the System Partition, which can be on a disk, CD-ROM, DVD, or USB; network boot is also supported. Once started, the OS loader boots the complete operating system. To do so, the OS loader uses the EFI boot services and the interfaces defined by them, and initializes the various platform components and the OS software that manages them.

UEFI Features

Services:  EFI defines two types of services: boot services and runtime services. Boot services are only available while the firmware owns the platform and control has not been passed to the OS; they include text and graphical consoles on various devices, and bus, block, and file services. Runtime services are accessible even while the OS is running; they include services such as date, time, and NVRAM access.

OS Loaders:  A class of UEFI applications stored as files on a file system that can be accessed by the firmware, called the EFI System Partition (ESP). Supported file systems include FAT32, FAT16, and FAT12, and the supported partition table scheme is GPT only.

GPT:  GPT (GUID Partition Table) is a disk partitioning specification that addresses the limitations of legacy MBR-based disk partitioning, such as the limited number of physical partitions, disk and partition uniqueness, and the disk size limit. OS X and Microsoft Windows allow booting only from a GPT disk on UEFI firmware.

Secure Boot:  The UEFI 2.3.2 specification adds a security-oriented protocol known as Secure Boot, which can secure the boot process by preventing the loading of drivers, OS loaders, or malicious software (for example, rootkits) that are not signed with an acceptable digital signature, even before the OS is loaded. Pre-boot malware (for example, boot-sector viruses), by executing before an OS kernel gains control of the computer, can "hide out" in ways that are not possible once an OS has taken over. With Secure Boot active, the firmware checks for the presence of a cryptographic signature on any EFI program that it executes. If the cryptographic signature is absent, does not correspond to a key held in the computer's NVRAM, or is blacklisted in the NVRAM, the firmware refuses to execute the program. While it is designed to protect the system by only allowing authenticated binaries into the boot process, UEFI Secure Boot is an optional feature for most general-purpose UEFI systems and can be disabled.

Arun Prakash Jana  Eight years of experience in systems software, embedded systems software and SMS/MMS protocols on Linux. With Novell for the past 3 years and currently pursuing his MS. A tech enthusiast fond of tiny and efficient software, Arun received the Champions of Excellence Award within his first year of joining Novell for his contributions to ZENworks Imaging. Always on the lookout for interesting and innovative ideas, he was appreciated for the Imaging Bootable USB concept demoed in one of the IDC Innovation Demos.

Faizal Saidalav  Senior Software Engineer at Novell Software Development (I) Pvt Ltd, Faizal is a research aspirant and holds an M-Tech in Opto Electronics & Laser Technology. He has completed a PG Diploma in Embedded Systems from CDAC Trivandrum and is currently associated with the Imaging team in the EPM department. He was appreciated for the UEFI Secure Boot implementation in ZENworks Imaging.
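As a small illustration of the GPT layout mentioned above, the sketch below checks a raw disk image for the GPT header signature. It assumes 512-byte sectors, so the primary GPT header sits at LBA 1 (byte offset 512) and begins with the ASCII signature "EFI PART"; the image path is taken from the command line.

```java
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;

// Checks whether a raw disk image carries a GPT header (assumes 512-byte sectors).
class GptSignatureCheck {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile disk = new RandomAccessFile(args[0], "r")) {
            byte[] signature = new byte[8];
            disk.seek(512);                       // LBA 1 on a 512-byte-sector disk
            disk.readFully(signature);
            String text = new String(signature, StandardCharsets.US_ASCII);
            System.out.println("EFI PART".equals(text)
                    ? "GPT header found"
                    : "No GPT signature (MBR-only disk, or a 4K-sector disk?)");
        }
    }
}
```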

Have you ever wished for scheduled delivery and status tracking in Gmail? Boomerang for Gmail is a browser plugin that lets you control when you send and receive email messages: http://www.boomeranggmail.com


SAML Security

Suresh  A passionate programmer and architect with over 12 years of experience. He is tech savvy, follows technology trends, and has published patents. His interests include security, functional programming and massively parallel scalable systems. He likes natural food and organic farming. He works on Access Manager at Novell, holds a Master of Engineering in Computer Science, and has published a thesis on Fractal Image Compression using Genetic Algorithms.

Introduction

This article gives an overview of SAML, its usage, and technical information about it. The intended audience is application developers, designers, and architects of applications that deal with identities. Protocol developers and interested researchers can refer to the normative SAML specifications at OASIS (https://www.oasis-open.org/committees/download.php/27819/).

Say you are preparing for a long vacation. Interesting, isn't it? You research a lot and start finalizing the destination and logistics. You have to do a lot of things, including booking a flight or train, booking hotels and sightseeing tours, getting a visa for the destination country, and so forth. You may have to apply for these with different vendors, and each of them may require different information about you and your preferences. The visa and other government processes are the most daunting: you have to fill in a lot of forms, personal details, and preferences at each of the vendor applications. Most of the information you provide will be the same. In addition, you need to maintain different accounts and passwords for each vendor application, and you have to log in to each application and provide information. Why can't the applications themselves share the needed data among each other? Have you ever pondered these questions? SAML is the answer. SAML enables collaborating applications to share identity information, with or without the user's consent. This, in turn, simplifies the user's job and provides the best service and user experience.

As an application developer, designer, or architect, it is necessary to know about SAML so that you can provide a better user experience during registration and authentication and provide customized services to your customers.

SAML

You see identities everywhere. Most actions in an application are carried out in the context of an identity. An identity may be a user, a machine, or another entity. Every user, machine, or entity has a unique identity irrespective of the application it accesses. However, each application may refer to the identity in different ways. For example, an airline Web application may refer to you by your full name, whereas a Government organization may refer to you by your social security number or PAN number. At each application, you may have to give different details for your identity. In addition, you might have given different parts of your information to different applications at different points in time. That is to say, your collective identity is dispersed over a set of applications. You may be using the same password


with all applications to avoid forgetting it. SAML recommends that you store your critical information with a centralized provider and share the information with other applications in a safe and time-bound manner. In this way, you use your password only with the trusted provider, your password is not shared with other applications, and authentications with other applications are time-bound. This reduces the threat of exposing your account, because you are not sharing passwords and your authentications with other sites are valid only for a few hours. SAML simplifies the exchange of identity information between different entities in a safe and secure manner, with or without the permission of the owner of the identity. At its core, SAML is very simple: it creates a document and attests it. The consumer of the identity simply trusts the attestation and the attested information. This attestation is carried out digitally by using XML documents and XML digital signatures. As an analogy, this is similar to getting your address proof attested by a notary and producing this document while applying for a passport: the passport office trusts the notary and accepts the notarized address proof document as proof of your address. The signed XML documents in SAML are called "Assertions".

SAML Components

In SAML, at a minimum, three entities are involved:

Identity Provider or SAML Issuer, who provides assertions
Service Provider or SAML Requester, who consumes assertions
Subject or User or SAML Subject, who initiates the actions

SAML assertions are transferred from one location to another, so you need a mechanism to request these assertions. The SAML protocol is responsible for the transfer of assertions. It is modeled around Request-Response message exchanges and is carried over an existing transport protocol to minimize the complexity of managing new protocols. This process of binding the SAML protocol over a transport protocol such as HTTP or SOAP is called "SAML Bindings".

The content of the requests and responses, and its meaning, can vary based on the use case. These different use cases are called "SAML Profiles"; for example, the "Multi-domain Web SSO" profile. This SSO profile defines a set of bindings, protocols, and different types of SAML assertions used mostly in Web-based SSO use cases.

Assertions Assertions are signed XML documents, which assert information about the subject. What are the content of these XML documents? Can it be arbitrary? To enable interoperability between different vendors and technologies, the XML content inside assertions are standardized. The assertions carry statements about the subject. There are three types of statements: Authentication Statement, Authorization Statement, and Attribute Statement. Authentication Statement: Provides the information about how the subject authenticates at the identity provider site. It contains the level and method of authentication. The service provider may choose to enforce rules based on this information. For example, the authentication statement may state that the user is authenticated by using password. The service provider may choose to reject the communication at its site saying it requires a higher level of authentication such as X509 certificates. Attribute Statements: Provides the information about the subject. Information can be personal details of the subject such as email ID and address or it can be enterprise attributes of the organization such as business unit, the subject belongs to, and role of the subject. The Attribute Statements are also called as claims. Authorization statements: Contains authorization decisions that have been taken at the identity provider site. It carries the resources that the subject can access and with what level of granularity. The service provider may enforce these actions on receiving these assertions. Apart from these statements, the assertion carries information about the issuer, name, key used for verification of integrity of assertion, conditions when this assertion is valid, and so forth. It also contains a subject


confirmation method by using which the service provider can confirm with whom it is interacting while receiving the assertion. There are three ways to confirm the party delivering the assertion to the service provider: Holder of Key (HOK), sender vouches, and bearer. The HOK method embeds a key inside the assertion when the assertion is created at the identity provider site. The subject proves knowledge of the key during its negotiation with the service provider along with the assertion. This type is usually used when the client is an active, smart client that can perform such negotiations. In the case of dumb clients such as browsers, the bearer assertion is generally used. A bearer assertion assumes that whoever carries the assertion is the subject, without any verification. Hence, these assertions have to be carried over a secure channel and handled secretly to avoid theft and misuse. The sender-vouches method involves a trusted intermediary. This intermediary authenticates the user and gets the assertion from an identity provider. Both the identity provider and the intermediary sign the assertion. The service provider has to trust both the intermediary and the identity provider to validate the assertion.

Protocols

After defining the format of assertions, SAML defines a standard way to get or manipulate assertions from the identity provider. SAML defines a number of protocols for this. Each protocol is based on the request-response paradigm.

Authentication Request Protocol is the basic protocol for getting an assertion from an identity provider. The request contains an "AuthnRequest" and the identity provider responds with an assertion through an "AuthnResponse".

Assertion Query and Request Protocol is a utility protocol. It is commonly used after initial authentication, when assertions have already been exchanged. The service provider can request previously issued assertions, either by the assertionID of a previously issued assertion or by the subject and statement types contained in the assertion.

Artifact Resolution Protocol is used in the "Artifact profile" that we will see later. It is used when the assertion has to travel over a non-secure channel. Instead of sending the assertion itself in response to the authentication request, the identity provider may send an identifier of the assertion, called an artifact. The service provider can then use this ID to query for the actual assertion over a secure channel. Note that "non-secure channel" here means that the exchange carried from the browser to the identity provider may at times happen over HTTP instead of HTTPS.

Name Identifier Management Protocol and Name Identifier Mapping Protocol were introduced in SAML 2.0. These protocols are used when anonymity of the subject is desired in the trust between the identity provider and the service provider. The service provider, or an attacker looking at the communication, will not learn the actual identity, which is mapped to an anonymous ID. The protocol manages this ID dynamically during the federation session.

Single Logout Protocol is used when a near-simultaneous logout is required at all the service provider sites to which users have federated their authentication.

Bindings

The XML-based protocols discussed in the previous section can be carried over other protocols. The most used one is HTTP, as SAML is widely used in Web single sign-on. This process of overlaying the SAML protocol on a transport protocol is called "SAML bindings". There are a number of ways to transport the SAML protocol, depending on the desired security properties.

Usually SAML single sign-on kicks in when a user accesses, from a browser, a Web site protected by a service provider. The user has to be redirected to the identity provider at a different domain. This cross-domain communication through the browser can happen in two ways in HTTP: using the HTTP "302 status code" and the HTTP POST method. They are respectively defined in two bindings: "HTTP Redirect Binding" and "HTTP Post Binding". If the user is not authenticated at the service provider site, the service provider sends either of the following:
-  A "302 redirect" message with the destination set to the identity provider URL, along with the AuthnRequest message.
-  A form-encoded HTML page with the POST action set to the identity provider URL.

The identity provider also uses the same mechanism to transfer the assertion to the service provider. If sending the whole assertion via the unsecured browser channel, as in the previous methods, is not desired, an artifact ID can be transferred instead. It can be carried in a form control or a query string as defined above. The service provider may then use a secure, direct channel with the identity provider to receive the assertion by presenting this artifact. This binding is called the "Artifact Binding". There are other, less used bindings such as SOAP, Reverse SOAP (PAOS), and SAML URI bindings.

Profiles

Assertions, protocols, and bindings can be combined to form a profile, which solves a specific business use case. These combinations are called SAML profiles.

Web Browser SSO profile: The most used SAML profile. It defines how a user can be authenticated only at the identity provider and can use that authentication with multiple service providers without logging in multiple times. The profile combines the Authentication Request Protocol or the Artifact Resolution Protocol over the HTTP Redirect, POST, or artifact binding to transfer assertions.

Single Logout profile: This profile combines the Single Logout Protocol over the HTTP Redirect, POST, and artifact bindings.

Enhanced Client and Proxy (ECP) profile: This profile is used for smart clients other than browsers and mainly uses the SOAP or PAOS bindings.

Assertion Query profile: This profile defines ways to request a user's attributes over the SOAP binding by using the SAML Assertion Query and Request Protocol.

Other profiles include the Identity Provider Discovery profile and the Name ID mapping and management profiles.

Claims Enabling Your Application (or Frequently Asked Questions)

After learning the technical details about SAML, the frequently asked questions include:

How do I enable my application for this? All you have to do is choose the appropriate SAML components and use them.

Which SAML profile is suitable for my application? If it is a Web application, use the "SAML Web Browser SSO" profile.

Which protocol binding should I use? If HTTPS is enabled between the identity provider, the service provider, and the browser, choose the "HTTP Redirect" binding; otherwise choose the "HTTP Artifact" binding.

What kind of claims do I need for the identity? Make sure that you get these in the assertion and that you parse these claims from the statements of the assertion you receive.

Once you decide all this, write a few lines of code to redirect to the identity provider when an unauthenticated request comes from a client, and send the SAML "AuthnRequest" along with the redirection. You will receive an AuthnResponse with an assertion from the identity provider. Parse the XML, and validate the assertion for its signature and validity period. Once it is validated, you can establish a session and consume the claims in the assertion.
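To make the last answer concrete, the sketch below shows only the encoding part of the service-provider side of the HTTP Redirect binding: the AuthnRequest XML is DEFLATE-compressed, Base64-encoded, and URL-encoded into the SAMLRequest query parameter of the redirect URL. It is a minimal illustration, not a reference implementation; the identity provider URL and the request body are placeholders, and in practice a SAML library should build, sign, and validate the messages.

import java.io.ByteArrayOutputStream;
import java.net.URLEncoder;
import java.util.Base64;
import java.util.zip.Deflater;
import java.util.zip.DeflaterOutputStream;

public class SamlRedirectSketch {
    // Hypothetical identity provider single sign-on endpoint.
    private static final String IDP_SSO_URL = "https://idp.example.com/sso";

    // Build the redirect location carrying the deflated, Base64- and URL-encoded AuthnRequest.
    public static String buildRedirectUrl(String authnRequestXml) throws Exception {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        // The HTTP Redirect binding uses raw DEFLATE (no zlib header), hence nowrap = true.
        Deflater deflater = new Deflater(Deflater.DEFAULT_COMPRESSION, true);
        DeflaterOutputStream out = new DeflaterOutputStream(buffer, deflater);
        out.write(authnRequestXml.getBytes("UTF-8"));
        out.close();
        String encoded = Base64.getEncoder().encodeToString(buffer.toByteArray());
        return IDP_SSO_URL + "?SAMLRequest=" + URLEncoder.encode(encoded, "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        // Build a real AuthnRequest with your SAML toolkit; this string is only a placeholder.
        String request = "<samlp:AuthnRequest ID=\"_1\" Version=\"2.0\"/>";
        System.out.println(buildRedirectUrl(request));
        // The service provider returns an HTTP 302 with this URL as the Location header.
        // The AuthnResponse coming back must be signature-checked and its validity window
        // verified before a session is established.
    }
}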

Summary

In this article, we briefly analyzed the usage, protocols, and technical structure of SAML. So where can you use this information? You will find it useful when you are designing or developing a service provider and want to delegate the authentication to an identity provider. This way you can make your application ready for claims-based authentication instead of coupling it tightly with one authentication mechanism. You can be assured that your application development can go on in parallel without worrying about adding authentication and other identity-based security improvements. These improvements can be made at a single identity provider site without modifying your application. The industry is already moving towards claims-based authentication. A few examples are Microsoft's SharePoint, Google Apps, Microsoft Office 365, and Salesforce.


l l l

Troubleshooting

Integrating Probe with Tomcat

Background

Let us start with a little background of Probe. PSI Probe is a community-driven fork of Lambda Probe, distributed under an open-source license (GPLv2). It is intended to replace and extend Tomcat Manager, making it easier to manage and monitor an instance of Apache Tomcat. It provides many features to monitor and debug a Tomcat server with a Web interface. Its disadvantages include:
1.  It can debug only the Tomcat server on which it is deployed. It cannot monitor a remote process.
2.  It cannot be used to monitor a plain Java application. It can monitor only Tomcat servers.

ZENworks needed a tool for debugging and monitoring remote Java applications. To fulfill this requirement, we have customized Probe (called ZENworks-Probe). ZENworks-Probe uses the JMX technology to connect to a JVM and collect the required information through MBeans. We have added additional features in ZENworks-Probe for collecting thread dumps and heap dumps.

Lavanya has an MCA degree from NIT Warangal. She has more than 7 years of IT experience. She is currently working in the ZENworks team as a Senior Associate.

The key changes done in ZENworks-Probe include:
-  OpenID authentication
-  Connecting to a remote Java application
-  Thread dump
-  Heap dump

Probe is part of all ZENworks builds, but you can integrate it with any Tomcat server and use it to debug any Java application. You do not require ZENworks to use Probe. The focus of this article is to provide steps for integrating ZENworks-Probe with a Tomcat server and for debugging regular Java applications.

Deploying ZENworks-Probe

To deploy Probe with your Tomcat server, perform the following steps:
1.  Place zenworks-probe.war in the tomcat/webapps folder. The war file will be uploaded to novell.com/downloads shortly after FCS. The URL is not decided yet. For now, it can be picked up from the novell-zenworks-probe msi or rpm, which is part of all the ZENworks builds.
2.  Restart the Tomcat server.
3.  Probe uses OpenID authentication. It is configured to use the ZENworks Server for OpenID authentication. To use another OpenID provider, create probe.properties in the exploded war of zenworks-probe and provide the OpenID URL, such as me.yahoo.com, by adding openid_provider_url=me.yahoo.com.
4.  Create a file with the name openid.properties in WEB-INF/classes and add the following entry:
    openid_default_role=ROLE_MANAGER
    Probe grants ROLE_MANAGER as the default role if one is not sent from the OpenID provider.
5.  Open Spring-probe-security.xml from WEB-INF/, edit the element myOpenID4JavaConsumer, and change required to false. After this change, Probe does not mandate the "role" attribute while making requests to OpenID providers.
6.  Restart the Tomcat server.

Debugging a Java Application by using Probe

To debug any Java application by using JMX, enable JMX for the process by setting the following Java properties while launching the JVM:

-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=8999
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
-Djava.rmi.server.hostname=testmachine.labs.blr.novell.com

This enables JMX on port 8999. Now launch Probe with the following URL to connect to that application:

http://localhost/zenworks-probe/?processId=testmachine.labs.blr.novell.com:8999

This will launch Probe. Note that SSL is set to false in the -Dcom.sun.management.jmxremote.ssl property.
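For readers who want to script what Probe does interactively, the following is a minimal sketch (not part of ZENworks-Probe itself) of connecting to a JVM started with the properties above and pulling a thread dump through the platform ThreadMXBean. The host name and port are the example values used above.

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class RemoteThreadDump {
    public static void main(String[] args) throws Exception {
        // Same host:port that was enabled with -Dcom.sun.management.jmxremote.port=8999.
        String hostPort = args.length > 0 ? args[0] : "testmachine.labs.blr.novell.com:8999";
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://" + hostPort + "/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            // Proxy for the remote JVM's ThreadMXBean; this is what a thread-dump feature reads.
            ThreadMXBean threads = ManagementFactory.newPlatformMXBeanProxy(
                    connection, ManagementFactory.THREAD_MXBEAN_NAME, ThreadMXBean.class);
            for (ThreadInfo info : threads.dumpAllThreads(true, true)) {
                System.out.print(info); // prints thread name, state, and a short stack trace
            }
        } finally {
            connector.close();
        }
    }
}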

Bits & Bytes

Simplefs — Sankar P

Are you worried that engineers are losing focus on the basics of programming? Have you ever wondered what it would be like to apply your CS skills to something that you really like to do? Here is such an attempt by Sankar, who developed a file system from scratch. He recently released version 1.0 of his simplefs filesystem, which supports:
-  Creation of files and nested directories
-  Enumerating files in a directory
-  Reading and writing of files
He is also planning to implement support for extents and journaling in upcoming releases. Reach out to him, read his blog, or browse the code for details.
References:
1.  http://psankar.blogspot.in/2013/08/introducing-simplefs-ridiculously.html
2.  https://github.com/psankar/simplefs


Writing for Users

Helpful and relevant user documentation hinges on multiple factors. Here are the most important things that can make it relevant to our users.

Users read technical documentation with a specific task in mind. Hence, understanding your users before writing documents for them is one of the best strategies for encouraging them to read the documents. Understanding your users involves finding out who they are and what they expect from the documents that can help them make the right decisions. Knowing users thoroughly can help in determining their business needs, context, and the conditions under which they work. When you put yourself in their shoes, you understand the distinct types of users who use your product for different tasks, which gives you a new perspective for building realistic and useful content.

Jyotsana Kitchloo is a Documentation Specialist at NetIQ. She is currently working on Identity Manager and eDirectory documentation. She has been involved in technical communication for over 12 years. Her interests include reading, writing, and project management.

Defining User Personas

Well-defined user personas can go a long way in creating a useful product definition, assisting in design decisions, prioritizing the right features, and helping create documents that help users achieve their goals. User personas focus on real people, their technical expertise, their work environment, and the challenges it poses. They bring us close to the specific tasks that users use the product for to help them do their jobs. This knowledge helps us define the tasks users must accomplish and understand the larger goal for performing those tasks.

Identifying Your Users' Top Tasks – Address the Larger Goal

After defining user personas, ensure you understand the core tasks they want to accomplish by using the product. This understanding becomes more important for complex products that have more than one type of user. For example, if the users include architects, administrators, end users, help desk, and so on, you need to determine what's most important to each of them. Based on their defined goals and tasks, organize information into essential blocks and determine which blocks apply to each persona. The task-based scenarios describe what users can do with the product. In most cases, the needs of distinct personas are common, with just a few exceptions. It is essential to focus on the user types who account for most of the audience.

Identifying users' top tasks involves a modified task analysis. Focus on the core tasks based on top user goals and not on the product features. In addition, do not document tasks that are easily discoverable by users. The best approach is to cover task-oriented user scenarios that are important to users instead of describing each of the product's functions in detail. Documents do not need to present a mirror copy of everything that the product offers. For example, documenting every menu in the UI is not useful. Users can figure that out while they are using the product. Instead, describe why users would use a specific feature and correlate it with users' goals. This also means making all topic titles meaningful and not dependent on the UI. The key is to document from the user perspective and not from the GUI perspective. For example, writing the title as "Generating an Identity Vault User Report" is much more meaningful for users who want to accomplish the task of generating a specific report than "Using the Reporting Window". You can make it more useful with graphics or videos as appropriate.

After all, users use the product with a specific task in mind, which can include sub-tasks of using several features of the product in a particular workflow. For example, a typical product workflow could include the following: plan the solution, design it, install it, configure it with the connected applications, and finally administer and maintain the solution. If the documentation doesn't help users complete their tasks, they will not refer to it and will instead get their answers from customer support.

Identity Manager is a fitting example of this situation, since that product has eight different types of user personas. To cater to each type of user, the documentation must be distinctive yet relevant to each of those personas, so that the target users get what they are looking for. Though it is difficult to satisfy all their needs, it is important to determine what each user uses the product for. For example, an Identity Architect designs an identity and access management solution for an organization. His tasks differ from those of an Identity Administrator, who takes the identity requirements from him and sets up and maintains the identity solution. There are Security and Compliance Owners who are responsible for designing the access governance policies for an organization. The Auditors are responsible for verifying that the access defined by the Security and Compliance Owners' policies is enforced.

Note that a few tasks may be common to each user persona, and different people or teams may perform these. This aspect must be clear in the documentation as well. In such cases, you can append additional sections to cover these tasks and call them out clearly as appropriate. That way, all users can find relevant information easily in the documentation.

Providing Better User Experience

A rich GUI goes a long way in helping users figure out things for themselves and eliminates the need to consult documentation quite often or to ponder what to do. When the GUI is self-explanatory, documentation does not need to repeat this information. Documentation is not for covering GUI problems; rather, it is for complementing a rich GUI with detailed conceptual information. There is no need to describe all GUI items in the documentation; instead, some details can be made available in the GUI as short messages and tool tips. It is a good idea to develop intuitive GUIs so that not every window requires a Help topic. Usability testing during product development can help in improving the GUI and screen flows to a significant extent.

Organizing Information – Simplicity Matters

When you study your users well, your job of organizing information becomes easier. Now you know the big picture and the workflow of user tasks. You can determine the purpose of the document and create a logical structure for it. The ideal way is to start with the most important information, provide extensive and relevant examples that map to the user scenarios, and add adequate reference material that describes features in detail to help the power users of your product. You can also include knowledge-base articles and common problems that users experience. Technical service requests and product discussion forums are great sources of information for determining common problems faced by users.

Conclusion

Customer satisfaction is the biggest measure of success for your product, and it depends on how well customers know how to use it correctly. When they can accomplish their tasks by reading the documents, without having to call technical support, your job is done. Happy customers mean more business, more loyalty, and reduced support costs.


Reviews

WebServices Testing Using SoapUI

– Girish Mutt

Introduction

Most of the enterprise products available in the market have migrated from being traditional enterprise products to Service Oriented Architecture (SOA) and Web service environments to cater to the ever-changing needs of customers. With this change, we face a new challenge: ensuring that products work seamlessly in SOA and Web services based customer environments. This brings much needed focus on quality control in such environments, which involves functional testing, regression testing, performance testing, and load testing of Web service environments. In SOA and Web services environments, SoapUI has quickly captured the minds of Web services functional testers all over the world. It is the most recognized open-source Web service test suite development tool. It provides a framework in which developers and testers can build test suites and test cases for SOAP-based and REST-based Web services.

SoapUI Overview

SoapUI is an open-source testing tool meant for Web services testing. It is a cross-platform solution supported on Windows, Linux, and Mac OS. Although SoapUI is an open-source tool, it has enterprise-class features that allow users to quickly create and execute functional, regression, compliance, and load tests. Thus, it can be used to ensure complete test coverage, and it supports all the standard protocols and technologies. Even users who have never used SoapUI before will find that creating even the most advanced test scenarios is very simple. In a single test environment, SoapUI provides industry-leading technology and standards support, from SOAP-based and REST-based Web services to JMS enterprise messaging layers, databases, Rich Internet Applications, and much more. SoapUI is also available in a commercial offering, which has all the features of the open-source version along with features that enhance productivity.

SoapUI Features

SoapUI supports a wide variety of features, which not only take care of the basic functional testing of Web services, but also cover regression, compliance, load, and performance tests.

Functional Testing

SoapUI has excellent provisions for functional and regression testing of any Web services application. It contains powerful and innovative features like drag-and-drop test creation, test debugging, data-driven testing, advanced scripting, and many others. It allows you to validate and improve the quality of your services by writing quality functional tests even with limited programming skills.

Service Simulation

SoapUI supports service simulation, or mockups, which helps in creating tests against Web services before they are available in a real environment. It thus enables consumers to access services without the need to build them completely. Users can simulate any desired behavior, no matter how complex, and can completely configure the responses needed for the service requests.

Security Testing

SoapUI has good provisions for security testing and scanning of Web services, which protect them from common security vulnerabilities. With the tool, users can generate a variety of vulnerability scans to simulate the attacks a hacker might attempt. Some of the important vulnerabilities that can be simulated include SQL injection, XML bombs, cross-site scripting, fuzzing scans, and boundary scans.

Recording

SoapUI has capabilities to record, monitor, and display all the data that is exchanged between client and server in Web services deployments. SoapUI makes it easy to see what is happening, so that users can quickly diagnose and fix problems. It even supports WS-Security and SSL decryption, allowing users to analyze and modify encrypted messages.


Analytics

SoapUI contains comprehensive built-in reporting capabilities. These help to create easy-to-understand reports for functional and load tests from within the UI at the project, test suite, test case, and load test levels.

Automation

SoapUI packs advanced end-to-end automation features, allowing users to dramatically reduce costs and time-to-market. Using the command-line tools bundled with SoapUI, users can run their functional or load tests and mock services from just about any task scheduler, or as an integrated part of their build process.

Technology Support

SoapUI provides support for all the common protocols and standards. The technologies supported include SOAP/WSDL, REST, HTTPS, JDBC, JMS, and others.

Load Testing

SoapUI is capable of building the most advanced load tests quickly and easily. It supports direct integration with LoadUI, a free, cross-platform, open-source load testing solution. It can be used to perform distributed real-time testing with LoadUI agents. It also generates real-time analytics with auto-generated reports.

Conclusion

SoapUI has quickly become a very popular functional testing tool in Web services environments. With a very strong community behind it, it is a versatile tool that users can rely on for their Web services automation today and tomorrow. So what are you waiting for? Go ahead and start exploring SoapUI for your Web services testing requirements.

Glossary of Terms
SOA: Service Oriented Architecture
REST: Representational State Transfer
SOAP: Simple Object Access Protocol
WSDL: Web Services Description Language
JDBC: Java Database Connectivity
JMS: Java Message Service

Some Useful Resources
1.  http://en.wikipedia.org/wiki/Soapui
2.  http://sourceforge.net/projects/soapui/


Short Article

Linux Advanced Routing Setting up a Mixed Public-Private Network

Recently, I had a unique need to have a mix of public and private networks on a particular server for some testing. A number of services were already configured for the public interface. I had to test a particular feature by using a NAT environment, and the easiest way I could think of was to configure the same server with a NAT interface in the VMware environment and configure that feature to use this private interface. Setting up the proper routes to reach the server through the public interface or through the router's port forwarding via the NAT interface was the challenge in this case.

Jaimon Jose is a Distinguished Engineer and architect for Identity and Access Governance products. Currently he is focusing on emerging technologies and providing NetIQ solutions in a cloud footprint.

The diagram shows the network requirements. As seen in the diagram, 164.99.89.77 is the public interface (eth1) and 172.17.2.80 (eth0) is the private interface. The virtual interface vmnet5 provides the NAT environment with the network address 172.17.2.0. The requirement is to reach the guest via eth0 or eth1 from the 164.99 network. The host (164.99.89.74) also provides port forwarding to reach the guest via the private interface. The primary objective is to ensure that requests coming in through a particular interface get responded to via the same interface.

Address          Comments
164.99.0.0       Public network
164.99.89.77     IP address of the public interface (eth1)
164.99.89.254    Gateway address for the public network
172.17.2.0       Private network
172.17.2.80      IP address of the private interface (eth0)
172.17.2.2       Gateway address for the private network

After some research on Linux advanced routing, I stumbled upon an internet post and designed the basic routing table based on the recommendations from there. The following is a summary of the changes:

1.  Disable reverse-path filtering for both interfaces (for example, by setting the net.ipv4.conf.<interface>.rp_filter sysctl to 0). When traffic to and from the same IP uses different interfaces, the Linux kernel drops the traffic as potentially spoofed. This is called reverse-path filtering.

2.  Create two additional routing tables in /etc/iproute2/rt_tables, for example T1 and T2, by appending an entry that maps a table number to each name (for example, 1 T1 and 2 T2).

3.  Populate the tables as given below:
    ip route add 164.99.0.0 dev eth1 src 164.99.89.77 table T1
    ip route add default via 164.99.89.254 table T1
    ip route add 172.17.2.0 dev eth0 src 172.17.2.80 table T2
    ip route add default via 172.17.2.2 table T2

4.  Set up the main routing table:
    ip route add 164.99.0.0 dev eth1 src 164.99.89.77
    ip route add 172.17.2.0 dev eth0 src 172.17.2.80

5.  Then add a preferred default route:
    ip route add default via 172.17.2.2

6.  Set up the routing rules:
    ip rule add from 164.99.89.77 table T1
    ip rule add from 172.17.2.80 table T2

These rules ensure that all responses to traffic coming in on a particular interface get answered via the same interface. With these changes in place, the server can be reached via the public interface, or via the private interface with the port forwarding in the router.

Bits & Bytes


Multipath TCP, a hidden gem in iOS 7 — Ramki K


Traffic congestion, like what we see on our roads, happens on cyber-highways as well. We have some means to handle it, such as prioritization of network packets and traffic metering. But unlike our road network, the advantage we have on the virtual network is that we have multiple networks to which we can connect. This is what the multipath TCP approach leverages. In the current scenario, if your phone or tablet is connected to Wi-Fi and a cellular network at the same time, only one of these connections is used for data transfer. If that connection drops, there will be data loss and you have to try again. If multipath TCP is enabled, the device can intelligently switch between an active Wi-Fi and cellular network, avoiding data disruption. Apple's use of this protocol in iOS 7 was discovered by Prof. Olivier Bonaventure of the IP Networking Lab, Belgium, and it is bound to have a far-reaching effect on network connectivity for handheld devices. Apple introduced the protocol in iOS 7 and is currently using it in Siri, Apple's personal assistant.
References:
1.  http://www.idownloadblog.com/2013/09/24/apple-using-multi-path-tcp/
2.  http://pocketnow.com/2013/09/25/multipath-tcp#!prettyPhoto


Short Article

IPv6 LAB Setup

Abstract

Internet Protocol version 6 (IPv6) is a network layer protocol that enables data communications over a packet-switched network. The working standard for the IPv6 protocol was published by the Internet Engineering Task Force (IETF) in 1998. The IETF specification for IPv6 is RFC 2460. IPv6 was intended to replace the widely used Internet Protocol version 4 (IPv4), which is considered the backbone of the modern Internet. IPv6 is often referred to as the "next generation Internet" because of its expanded capabilities and its growth through recent large-scale deployments. This paper is intended to give a brief idea about setting up a full-fledged IPv6 test lab. This includes setting up an IPv6 router, a DHCPv6 server, and a DNS server to test the basic functionality of any IPv6-enabled device.

Introduction

Internet Protocol version 6 (IPv6) is a network layer protocol like IPv4, the current version of the Internet Protocol used on the Internet. The main improvement brought by this version is the increase in the number of addresses available for networked devices. IPv4 supports about 4.3 billion addresses, which is inadequate. IPv6, however, supports 2^128 addresses; this is approximately 5×10^28 times the number of addresses supported by IPv4.

Configuring an IPv6 Router

You can set up a simple router using a Windows machine. For this, you need to install two network interface cards in the machine. In this document, the two connections will be referred to as Subnet 1 Connection and Subnet 2 Connection.

You have to run the following commands at the command prompt:

netsh interface ipv6 set interface "Subnet 1 Connection" forwarding=enabled advertise=enabled
netsh interface ipv6 set interface "Subnet 2 Connection" forwarding=enabled advertise=enabled
netsh interface ipv6 add route 1112:ef2:01:14::/64 "Subnet 1 Connection" publish=yes
netsh interface ipv6 add route 1112:ef2:01:15::/64 "Subnet 2 Connection" publish=yes
netsh interface ipv6 add route ::/0 "Subnet 2 Connection" nexthop=<Address of the Next Router> publish=yes

Connect the two network interface cards to two switches, which form the two subnets. For each of the hosts connected to a subnet, the gateway address is the address of the network interface it is connected to. The ipv6 add route commands mentioned above make the router advertise the prefixes specified in the commands (1112:ef2:01:14::/64 and 1112:ef2:01:15::/64 above). All the hosts in the network will receive these advertisements and form addresses with the advertised prefixes. These are called "stateless addresses".

Configuring Dibbler Server

Dibbler is a free DHCPv6 software package that supports many DHCPv6 options and acts both as a server and a client. It is available for Linux (kernels 2.4 and 2.6) and Windows (from NT to Vista) as source code or a compiled binary distribution.

Shyrus Joseph started his career with Wipro. He holds an MS in Software Engineering and is currently working in the Access Manager QA team.


Installing Dibbler Server

To install Dibbler from sources:
1.  Download the tar.gz source archive from http://sourceforge.net/projects/dibbler/files/latest/download
2.  Extract the file, and then type make followed by the target (for example: server, client, or relay).
3.  After the compilation is successful, type make install.

For example, to build a server, type the following commands:
tar zxvf dibbler-0.4.0-src.tar.gz
make server
make install
mkdir -p /var/lib/dibbler

Configuring Client DHCPv6

The following is a sample of the configuration file (client.conf) for stateful DHCPv6:

iface eth0 {
    ia
    option dns-server
    option domain
}

In the preceding example, eth0 is the interface used. On Windows, this could be "LAN connection 1".

The following is a sample of the configuration file (client.conf) for stateless DHCPv6:

iface eth0 {
    stateless
    option dns-server
    option domain
}

In the preceding example, eth0 is the interface used. On Windows, this could be "LAN connection 1".

Configuring Server DHCPv6


Stateful DHCPv6: The following is a sample of the configuration file for stateful DHCPv6. In this example, the configuration offers stateful address assignment from an address pool:

log-level 8
log-mode long
iface "eth0" {
    class {
        pool 1112:ef2:01:14::1000-1112:ef2:01:14::ffff
    }
    # provides DNS server location to the clients
    # also, the server uses this address to perform DNS Update,
    # so it must be valid and the DNS server must accept DNS Updates.
    option dns-server 1112:ef2:01:14::1
    # provide the domain name
    option domain example.com
}

In the preceding example, eth0 is the interface used. On Windows, this could be "LAN connection 1".

This configures a DHCPv6 server with an IPv6 address range from 1112:ef2:01:14::1000 to 1112:ef2:01:14::ffff.

Note: Depending on the functionality you want to use (server, client, or relay), you must edit the corresponding configuration file (client.conf for the client, server.conf for the server, and relay.conf for the relay). All configuration files must be placed in the /etc/dibbler directory. Also ensure that the /var/lib/dibbler directory is present and is writable. After editing the configuration files, issue one of the following commands:

#dibbler-server start
#dibbler-client start

Configuring an IPv6 DNS Server

Configuring a DNS server is important for name resolution. You can set up an IPv6 DNS server using a Windows server. Windows Server has a Server Manager, which is very convenient for configuring server roles, including DNS.

To configure a DNS server by using the Server Manager utility:
1.  In the Server Manager utility, under Roles Summary, click Add Roles and select DNS Server.

The only difference between the IPv4 and IPv6 DNS server configuration is the creation of reverse zones. To create reverse zones:
1.  Right-click Reverse Zones and select New Zone.
2.  When prompted, select IPv6 Reverse Lookup Zone.
3.  In the window that is displayed, enter the IPv6 prefix in the IPv6 Address Prefix field. If you want to enter the 64-bit prefix 1112:ef2:01:14, you must specify 1112:ef2:01:14::/64.

After creating the forward and reverse zones, you can add hosts as required.
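Once the forward zone has entries, a quick client-side check that the lab DNS returns AAAA records can be done from any machine with a JDK. This is only a small sketch; the host name below is a hypothetical entry in your zone.

import java.net.Inet6Address;
import java.net.InetAddress;

public class CheckAAAA {
    public static void main(String[] args) throws Exception {
        // Replace with a host you added to the forward lookup zone.
        String host = args.length > 0 ? args[0] : "host1.example.com";
        for (InetAddress address : InetAddress.getAllByName(host)) {
            if (address instanceof Inet6Address) {
                System.out.println("IPv6 address of " + host + ": " + address.getHostAddress());
            }
        }
    }
}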

Conclusion

This document only gives an idea of configuring a simple IPv6 environment to test the behavior of any host that supports IPv6.

References
1.  StepSettingUpIPV6.doc: http://www.microsoft.com/en-us/download/details.aspx?id=1736
2.  Microsoft IPv6 Web site: http://go.microsoft.com/fwlink/?LinkId=24350
3.  Introduction to IPv6: http://go.microsoft.com/fwlink/?LinkId=69223
4.  http://ipv6int.net


Short Article

Mobile Device Management

In the recent past, consumerization and BYOD (bring your own device) have become buzz words in the technology industry.

Raghu Babu KVM has been working in Novell for the past 5 years and has around 10 years of experience in the industry. He has worked on technologies such as telecom mediation, billing & OSS, virtualization, cloud computing, collaboration, and mobility. His research interests are cloud and mobile computing. Raghu received an MS (Software Systems) from BITS-Pilani and is currently working in the Corporate Interoperability Test (CIT) team.

Consumerization of IT can be defined in simple terms as the use of technologies that can easily be provisioned by non-technologists. Consumerization of IT not only saves money and increases business agility, but also improves productivity.

BYOD (Bring Your Own Device) is one of the initiatives to achieve consumerization. BYOD policies allow the use of personal devices in the workplace and allow employees to connect them to the office network. With the rapid adoption of mobile devices like laptops, smart phones, and tablets across the world, mobility has become an important parameter to consider in an enterprise. These days, managing and securing mobile devices has become a top priority for enterprises. In this article we discuss Mobile Device Management (MDM). MDM is one of the key concern areas for IT staff, because mobility is evolving very fast.

Mobile Device Management: MDM is software that secures, monitors, manages, and supports mobile devices, and typically includes over-the-air distribution of applications, data, and configuration settings.

Recommended Features: Some of the recommended features of typical MDM solutions are:
-  Support for devices from multiple vendors (Android, iOS, Windows, and so on)
-  Ability to provision users and their mobile devices
-  Cloud-based communication, so that it is easy to stay in touch with devices through over-the-air transmission
-  Easy deployment of required apps
-  Remote configuration, management, and security
-  Remote device tracking
-  Backup and restore functionality for corporate data or the container, as well as wiping the corporate data as and when needed
-  Logging and reporting for compliance usage
-  Communication with other mobility software like Mobile Information Management (MIM), Mobile File Management (MFM), and printing apps
-  Communication with other endpoint management software
-  Data encryption and decryption
-  Interfaces with LDAP, SMTP, and other services

As mobility is an evolving area, much research is happening in the related areas. Containerization is a concept in the mobility domain for separating personal data from corporate data on mobile devices. Containerization enables MDM to control, access, and apply security policies effectively.


This approach leverages an enterprise container that replicates and replaces native OS capabilities such as mail, calendar, contacts, and browser to more tightly control access to enterprise data. Containerization allows the IT department to control data movement across the available apps and restricts the use of corporate data outside the container by personal apps. Further improvements in containerization result in multi-persona devices, where both corporate and personal data can reside on a device in parallel.


Some of the recommended security policies for the corporate container are:

-  Uninstalling the MDM app
-  Copy-pasting the corporate data across the container boundary
-  Usage of corporate data outside the container by personal apps
-  Complex password protection
-  Restricting the camera, Bluetooth, and so on, in the corporate network
-  Tracking the device and restrictions on connecting to the corporate network

Different MDM software: As mobility is a new and evolving area, the business opportunity space is very large. For this reason, many organizations are concentrating on this market. Some of the organizations that are actively involved in this area are AirWatch (MDM), MobileIron (MDM), Novell (ZMM), Citrix (XenMobile), and so on.

ZENworks Mobile Management (ZMM): Novell ZENworks Mobile Management helps users secure and manage mobile devices. Some of the ZMM features are:
-  Supports devices from different vendors (Android, iOS, Windows, and so on)
-  Secures and monitors the mobile device, including its data
-  User provisioning for different devices
-  Pushes or distributes data and apps to the managed devices
-  Ability to integrate with LDAP, SMTP, DataSync, and ZENworks servers
-  Easy administration through a web browser
-  Provides a neat dashboard with pictorial representation
-  Provides reports to analyze the mobile devices

ZENworks Mobile Management (ZMM) along with ZENworks Configuration Management (ZCM) helps IT administrators provision and manage enterprise endpoint devices, including mobile devices. For more details, see the Novell ZMM product website: http://www.novell.com/products/zenworks/mobile-management/


Short Article

Challenges in testing Mobile Applications

Testing. No one really wants to do it! It's expensive. It's time consuming. But unfortunately, it's needed to ensure that our consumers have a positive experience when they use our mobile applications. It's important that we make sure the experience is a great one for every consumer, every time they use our application, starting with that very first time. When they find too many issues they will abandon our product and move away. The goal of our testing must not be limited to finding errors. Instead, our goal must be to understand the quality of our offering. Does it work? Does it function as expected? Will it meet the needs of our users?

When it comes to testing mobile applications there are unique challenges. The market for mobile applications is growing rapidly and the demand for them is rising as the technology constantly advances. It's evident that mobile devices will surpass PCs and desktops in the near future. But as with any emerging technology, developing and implementing mobile applications poses a number of unique challenges, and so does the testing of mobile applications. In this document, we examine the various testing options for mobile applications while explaining the factors that we need to consider in determining our testing strategy.

Shivaji Nimbalkar has around 10+ years of experience in the IT industry and has been working with Novell for the last year. He has worked on technologies such as telecom mediation, BSS & OSS, virtualization, data warehousing, and mobility. His research interests are data mining and mobile computing. Shivaji is currently working as a Specialist in the Corporate Interoperability Test (CIT) team.


Finally, we make some recommendations on how to combine the various testing options to find the testing strategy that is suitable for our mobile application.

QA Challenges in Mobile Application Testing

1.  Device Variation
This is the biggest challenge in mobile testing because of compatibility issues. A mobile application can be deployed across devices that can have different:
a.  Operating systems, like iOS, Android, Windows, BlackBerry, and so on.
b.  Versions of operating systems, like iOS 4.x, iOS 5.x, BlackBerry OS 4.x, 5.x, 6.x, Android Jelly Bean 4.x, and so on.
c.  Manufacturers, like Samsung, HTC, Nokia, Micromax, Dell, etc.
d.  Keypad types, such as virtual keypad, hard keypad, etc.
The quality team cannot guarantee that if a tested application works well on a given device, it will work on another device, even one from the same product family, because the screen resolution, CPU, memory, OS optimization, and hardware could be different.

2.  Mobile Testing Tools Availability
Tools developed for desktop and web-based applications do not work for mobile applications. Mobile application testing requires complex scripting techniques and new tool development.

3.  Industry Standards
Mobile application testing must meet industry standards for an application to be globally acceptable or popular.

4.  Testing on Various Networks and Network Vendors
Most mobile applications require network connectivity at some time or the other. If the app connects to a server for the flow of information to and fro, testing on various (at least all major) networks is important. Mobile networks use different technologies like CDMA and GSM with their 2G, 3G, and 4G versions. The network infrastructure used by network operators may affect data communication between the app and the backend. Apart from different operators, an application needs to be tested on Wi-Fi networks as well.

5.  Mobile User
The mobile application audience comprises users who may be non-tech savvy or extremely technical, from children to middle-aged users. Each of these may have a different way of using the application and different expectations. A middle-aged user might be more patient than a teenager when it comes to response time. Mobile users have incredibly high expectations from applications! Testers have to wear different hats while testing the application and make sure that it provides a good overall experience to all types of users.

To overcome these challenges, you can adopt a few of the following strategies:

a.  Device Emulator
Emulated devices are very cost effective because they allow quick and efficient testing. This allows the bulk of the testing to happen in a well-instrumented test environment that is cost effective. When we look at device emulators for testing, ensure that they have diagnostic capabilities to isolate problems and the flexibility to test different network options. All applications can be deployed and tested on an emulator without investing in mobile handsets for the various OSes. Emulators are mostly available for free, and we can also perform UI, stress, and performance testing on them. 30-40% of testing can be achieved using emulators.

b.  Cloud Testing Solution
Mobile devices can be accessed through a web interface, that is, a browser. Applications can be deployed, tested, and managed. An automation module is available, and the solution is secure if a private cloud is used, with no maintenance.

c.  Real Devices with Real Networks
The testing team cannot completely avoid this option; there should be a way to test real devices on real networks whenever required. This is important because the mobile application will always be used on mobile devices by end users who may access the application from a remote area with fluctuating network signal strength.

d.  Mobile Application Tools, like the following (a small example test written with one of them appears after this list):
-  FoneMonkey
-  UIAutomation
-  Robotium
-  Web OS
-  EggPlant
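As an illustration of what these tools automate, here is a minimal Robotium-style sketch. The activity class, field indexes, and UI strings are hypothetical, and the Robotium package name varies between releases, so treat it as a shape rather than a drop-in test.

import android.test.ActivityInstrumentationTestCase2;
import com.robotium.solo.Solo;

public class LoginTest extends ActivityInstrumentationTestCase2<LoginActivity> {
    private Solo solo;

    public LoginTest() {
        super(LoginActivity.class); // LoginActivity is a hypothetical activity of the app under test
    }

    @Override
    protected void setUp() throws Exception {
        super.setUp();
        solo = new Solo(getInstrumentation(), getActivity());
    }

    public void testLoginShowsWelcome() {
        solo.enterText(0, "testuser");           // first EditText on the screen
        solo.enterText(1, "secret");             // second EditText on the screen
        solo.clickOnButton("Login");
        assertTrue(solo.waitForText("Welcome")); // fails if the welcome screen never appears
    }

    @Override
    protected void tearDown() throws Exception {
        solo.finishOpenedActivities();
        super.tearDown();
    }
}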

Conclusion

The significant challenges and risks involved in mobile application testing can impact the production of mobile apps. These risks and challenges can be mitigated by adopting the various testing types and strategies outlined in this paper. Careful selection of target devices, connectivity, and tools can ensure a cost-effective mobile testing process. Also, combining the solutions for the mobile-specific aspects of application testing with traditional best practices and testing processes can effectively address the challenges of mobile application testing.


l l l

Troubleshooting

Art of Troubleshooting Linux Computers

This information will assist you in troubleshooting Linux computers.

Everything seems fine when your Linux computer works just the way you want. But that feeling changes dramatically when your computer develops problems that you might find really difficult to resolve. Imagine that some of the important files on a RHEL 5 computer get deleted or corrupted.

Anant Kulkarni has been working as a Software Consultant at NetIQ for the past 1.5 years, currently in the AppManager UNIX team, and has 6.5 years of experience in the field of testing and Linux. He is also a certified networking professional, holding CCNA and CCNP routing certifications.

Scenario 1: /etc/passwd is deleted

This file contains information about user accounts and passwords. If you log in to a computer that does not have this file, an error message stating incorrect login pops up. Now that you have seen the problem and its consequences, it's time to solve it. Boot the computer into single user mode. At the start of booting, press any key to enter the GRUB menu. From the list of operating systems, select the one you are working with and press Enter. Let's play around with the kernel parameters.
1.  Select the kernel and press e to edit its parameters.
2.  Type a space followed by 1 at the end and press the Enter key. This instructs the kernel to boot into single user mode, which is also known as maintenance mode.
3.  Press b to continue the booting process.
Now that you have booted into single user mode, you are probably asking yourself, what is next? The tricky portion of this exercise is now over, and it takes just one command to have your passwd file back in its place. There is a file, /etc/passwd-, which is nothing but the backup file for /etc/passwd. So all you need to do is execute the following command:

cp /etc/passwd- /etc/passwd

Execute the init 5 command to switch to graphical mode.

Scenario 2: /etc/inittab is deleted

As you are aware, init is the process that starts first and then starts all other processes. The /etc/inittab file contains instructions for the init process, and if it is missing, no further processes can be launched. If you start a computer that does not have the /etc/inittab file, the message "INIT: No inittab file found" appears and you are asked to enter a runlevel. Even if you specify the runlevel, you are told that there are no more processes left in this runlevel. Fixing this problem in single user mode is not easy. You need the Linux rescue environment to fix it. The steps below describe the procedure to enter rescue mode:
1.  Set the first boot device to CD and boot using a RHEL 5 CD.
2.  At the boot prompt, type linux rescue and press Enter to enter the rescue environment.
3.  The computer is now mounted within /mnt/sysimage.
4.  Reinstall the initscripts package from the RHEL 5 installation media to get the /etc/inittab file back. (You can press the Tab key after typing the beginning of the package name to auto-complete it.)

Now you'll get your /etc/inittab file back. Type exit to leave the rescue environment, set your first boot device to hard disk, and boot normally.

Scenario 3: /boot/grub/grub.conf is deleted

This file is the configuration file of the GRUB boot loader. If it is deleted and you start your computer, you will see a GRUB prompt, which indicates that grub.conf is missing and there are no further instructions for GRUB to carry on its operation. At the GRUB prompt, you can enter a few commands that make your system boot.

The steps below describe the process:
1.  Type root ( and press Tab to find out the hard disks attached to the system. For example, I got hd0 and fd0, the hard disk and floppy disk, respectively. As we know, GRUB is stored in the first sector of a hard disk, which is hd0,0, so the complete command would be root (hd0,0). Enter this command and press the Enter key to carry on.
2.  Enter kernel /v and press Tab to auto-complete it. For example, it's vmlinuz-2.6.18-128.el5. Note it down, as we'll require this information later, and then press Enter.
3.  Enter initrd /i and press Tab to auto-complete it. For me, it's initrd-2.6.18-128.el5.img. Again note it down and press Enter.
4.  Type boot and press Enter, and the computer will boot normally.
5.  Create a grub.conf file manually. Create the /boot/grub/grub.conf file and enter in it a boot entry that uses the root device, kernel, and initrd values that you noted in the previous steps: vi /boot/grub/grub.conf
6.  Press ESC, then type :wq to save the file and quit the vi editor.

You have created a grub.conf file manually to resolve the problem. Don't forget that the kernel and initrd image file names may vary on your system.

Bits & Bytes

Configuring GroupWise for Forums — Guruprasad S

For easy access to Novell Forums postings, instead of going to the forums web page (for example, https://forums.novell.com/novell-product-discussions/endpoint-management/zenworks/), you can configure the forums in GroupWise and view all the discussions in the GroupWise client. This is very useful, as you can see all the discussions in GroupWise and do not need to go to the Forums web page.

To configure GroupWise for Forums:
1.  In GroupWise, go to Accounts > Account Options > News > Add.
2.  Give the account an appropriate name and click Next. In the News (NNTP) Server field, enter nntp.novell.com and click Next. Specify the type of internet connection you are using (internet through LAN or dialup/modem) and click Next. Enter a description of the account in the Description field and click Finish.
3.  Subscribe to the groups you want to be part of. You will start seeing the discussions in your GroupWise client.


l l l

Troubleshooting

Performance Testing Methodology for Java-based Applications

An easy way of testing the performance of Java applications

Abstract

Most of the products in the IT industry are Java-based applications. When it comes to testing the performance of these applications, there are many challenges you face at different stages of the performance testing. For Java-based applications, the most difficult phase is monitoring and analyzing JVM threads, heap memory, and garbage collection to identify the root cause of performance bottlenecks in the application. This white paper presents a detailed methodology for an easy way of testing the performance of Java-based applications. It explains how to define the right performance attributes for the application you are testing and how effectively you can monitor and debug the performance bottlenecks during performance testing of the application. With this approach, you can confidently report results about meeting the goals of the performance testing for every feature of the application you are testing. This document also explains how you can choose the right command user interface (CUI) tool set for quickly identifying Java performance issues in a Linux testing environment.

Introduction

For any Java-based application, real-time monitoring of its JVM resources is a must. It is a daunting task to analyze the JVM resources that are utilized by the application, such as CPU (threads running on a particular Java process), Java heap memory utilization, and garbage collection (GC) utilization. How effectively can you monitor these JVM resources used by the application (for example, Sentinel) and report performance bottlenecks in the application to the application developers?

For performance testing, the key bottlenecks when the application is running on the system are:
-  Hardware resource utilization, like CPU, memory, and disk I/O occupied by the application components
-  Response time of the application
-  Throughput of the application

For a Java-based application, it is very difficult to analyze:
-  Which Java threads of the application are holding the system resources (CPU, memory, disk, network)?
-  Which threads of the application are spending time on garbage collection (GC)?
-  Which threads of the application are mostly in the waiting state?
-  Which threads of the application are causing deadlocks or application crashes?
-  When does the JVM throw an out-of-memory (OOM) error or produce a core dump?
-  How can you determine that a particular thread, belonging to a specific part of the code (Java class), is blocking your testing with memory dumps, deadlocks, and so on?
-  Are the GC (garbage collection) threads consuming most of the time, resulting in degradation of system performance?
-  How effectively can a testing member inform the application developer, saving both testing time and development time, by specifying the exact thread that is the root cause of a performance issue?

In order to quickly address these issues during the performance testing of Java-based applications, I have come up with an easy solution based on the built-in Linux/Java command user interface tools for monitoring and analyzing Java performance bottlenecks in your testing environment.

Shammi Kumar Jada is currently working with NetIQ in the Sentinel team as a Test Specialist. He has been working at NetIQ for the last 6+ years. He has experience working on multiple Novell products like iManager, IDM Designer, and Analyzer. Before joining NetIQ, Shammi worked on telecom network management products with C-DOT (Center for Development of Telematics).

Challenges in Monitoring the Performance of Java-Based Applications

Identifying the Performance Metrics and Attributes for Monitoring

During the performance testing of any application, you typically need to test three important performance metrics: hardware resource utilization, response time, and throughput. You must select your method of testing based on the application.

For example, some Java-based applications are bound to hardware resource utilization on the server. This is the case with server-based applications such as Sentinel, AppManager, and so on. For these applications, you mostly need to monitor the JVM threads that hold the CPU, consume memory, slow down the disk I/O, and utilize high network bandwidth. Some applications are bound to response time, for example web applications like Access Manager or iManager. For these applications, you need to determine which JVM threads are slowing down the Web server or the database server response time. For most applications, throughput is the key performance metric for which you need to benchmark the performance sizing numbers. For these applications, you need to monitor the JVM threads for specific features as they scale up.

To effectively debug the JVM that your application uses, follow these steps in the performance testing environment:
1.  Categorize the test cases that you planned for the performance testing.
2.  Choose the right method of testing (that is, the performance metric) appropriate for the application.
3.  Determine, and keep ready, the performance attributes (belonging to that metric) that you need to monitor during test case execution.

After you finalize these attributes, choose the right tool set (strictly built-in, freeware, command-line user interface, automated, and easy to install) and prepare the performance testing environment.

Performance Metric: Hardware Resources
Monitoring Attributes:
-  CPU usage (threads holding the CPU)
-  Memory usage (memory leaks, thread leaks)
-  I/O waits (threads occupying 100% of disk I/O)
-  Network usage (between the client and the server)
Linux/Java Tools:
-  mpstat, sar (for CPU)
-  top, sar (for memory)
-  iotop, iostat, sar (for disk I/O)
-  bwmon, sar (for network)
-  jstack, jstat, jconsole, VisualVM (for debugging the JVM)

Performance Metric: Response Time
Monitoring Attributes:
-  Web UI launch (Tomcat/Jetty)
-  Web Start launch (javaws)
-  Events/data delay from sources
-  Rendering results in the Web UI; for example, the time taken to create/delete X users
Linux/Java Tools:
-  JMeter
-  REST APIs, cURL
-  Product-specific logs/scripts, etc.

Performance Metric: Throughput (Load/Stress/Scalability)
Monitoring Attributes:
-  Number of simultaneous user logins
-  Slow performance of product-specific features
-  Product (feature) limits testing
-  Product benchmarking (sizing); for example, maximum events per second (EPS) on Sentinel 7.1, maximum logins in Novell iManager, maximum SSL VPN connections per Access Manager server, maximum number of database connections
Linux/Java Tools:
-  Performance snapshot information from the application logs
-  How the JVM reacts when you are finding the throughput of your application: use the Java tools as appropriate, that is, jstack, jstat, jconsole, VisualVM (to debug the JVM)


4.  Monitor your test results accurately by using the following Linux commands and jtools. Make sure the application meets the following prerequisites before you start monitoring the test results:

Prerequisites
-  Make sure the JDK is available on the application server where you need to analyze the JVM.
-  The JDK version must match the JRE version that the application uses. When using the Java debugging tools, it is important that the version of the JDK being used exactly matches the JRE version used by the application; otherwise, you may see errors while running the jtools.
-  Make sure the PATH variable is set for using the jtools/commands (jps, jstack, jmap, jstat, jconsole, and so on).

Monitoring Performance Attributes on the Application's JVM
  Create the test environment for executing the tests with the available toolset.
  Generate the test load (input to the system) as defined in the test.
  Monitor the test for system behavior, system response, input load, and deviations from the expected values defined prior to the test.
  Follow the steps below to monitor the performance attributes for any deviations from the expected values that were defined prior to the test.

CPU Bottlenecks - Threads Holding the CPU
Monitor the CPU utilization while the test is running using the built-in Linux 'sar' or 'iostat' commands as below. Make a note when you observe high utilization of '%user' (application-specific problems), '%system' (the JVM spends most of its time on garbage collection), or '%iowait' (the application spends more time on disk read/write operations).

Sentinel-8Core:~ # sar -P ALL 1 1
Linux 3.0.13-0.27-xen (Sentinel-8Core)  10/24/13  _x86_64_  (4 CPU)
00:33:13  CPU  %user  %nice  %system  %iowait  %steal  %idle
00:33:14  all  0.26   0.00   0.26     0.00     0.00    99.48
00:33:14  0    0.00   0.00   0.00     1.01     0.00    98.99
00:33:14  1    1.16   0.00   0.00     0.00     0.00    98.84

After you confirm these bottlenecks, follow the steps below to determine which JVM threads are mostly occupying the CPU:
1.  Find the PID of the Java process by running the 'ps' or 'top' command. Alternatively, you can use the jps command to list the running Java processes.

novell@linux-1brg:~/jdk1.7.0_17/bin> ./jps -l
34350 com.novell.reports.jasper.RemoteJasperReportManager
19776 sun.tools.jps.Jps
31918 esecurity.util.service.Service (Sentinel)

2.  Find the thread IDs (TIDs) associated with the PID of the Java process that are consuming the most CPU, using the 'top' command with the 'H' option. The following are samples of some of the individual JVM thread IDs running in the above Java process. Note: If you run the 'top' command without the 'H' option, the command returns a single Java process; specifying 'H' returns the individual threads running inside the Java process.
3.  Take (request) the full thread dump (Java process stack trace) from the JVM using the 'jstack <PID>' command. You may need to use the -F flag if the command does not return any results. You can also use jstack to list the native process stack trace. Though jstack is the preferred way of getting the stack traces, as it does not impose any restrictions on the running process, it may not provide a proper stack trace when working with embedded Java processes. In such cases, you may want to use jdb; note that jdb requires an open port in the Java process. For example:

novell@Sentinel-Box:~/jdk1.7.0_06/bin> ./jstack 31918 > /tmp/jstack_32298.txt

Note: You can execute the jstack command as a user who has rights on your application, if required.
4.  Open the jstack command output (the thread dump file you created in the previous step) and find out which thread is occupying the most CPU, using the top-listed TID (32298 in this example). The 'jstack <PID>' command returns output like the sample below. The 'nid' is the native thread ID in hexadecimal, so you need to convert the decimal value from the top command to hexadecimal and search for that hexadecimal string in the jstack output file (a shell sketch of this conversion is shown after the stack trace below).
5.  From the jstack output file, as shown in the stack trace below, you can make out that thread ID 32298 (0x7e2a in hexadecimal), which belongs to the "Raw Data Store" component in Sentinel, is mostly utilizing the CPU (step 2, 94% from top with 'SHIFT+H'), and the state of the thread is WAITING.

"RawDataStoreRetry" daemon prio=10 tid=0x00007f935cf3c800 nid=0x7e2a in Object.wait() [0x00007f936047f000]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <0x000000075a649908> (a EDU.oswego.cs.dl.util.concurrent.WaitableBoolean)
        at java.lang.Object.wait(Object.java:503)
        at EDU.oswego.cs.dl.util.concurrent.WaitableBoolean.whenEqual(WaitableBoolean.java:110)
        - locked <0x000000075a649908> (a EDU.oswego.cs.dl.util.concurrent.WaitableBoolean)
        at esecurity.ccs.comp.event.SmallFileMultiDirectoryEventMessageCache.getNext(SmallFileMultiDirectoryEventMessageCache.java:607)
        - locked <0x000000075a649920> (a java.lang.Object)
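The decimal-to-hexadecimal conversion and the search through the thread dump can also be done directly from the shell; a small sketch using the PID and TID values from the example above:

printf '0x%x\n' 32298                           # TID from top -H printed as the hexadecimal 'nid' (0x7e2a)
./jstack 31918 > /tmp/jstack_32298.txt          # thread dump of the Java process
grep -A 15 'nid=0x7e2a' /tmp/jstack_32298.txt   # show that thread's stack frames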

Note: Take multiple thread dumps (2 to 3 samples) to confirm that the bottleneck is from the same component (thread) of the application. To use a decimal-to-hexadecimal converter, visit http://www.statman.info/conversions/hexadecimal.html.

Disk I/O Bottlenecks
Monitor your disk utilization while the test is running using the built-in Linux 'sar' command as below. Look for a high %util at any point in time.

Sentinel-8Core:~ # sar -p -d 1
Linux 3.0.13-0.27-xen (Sentinel-8Core)  10/24/13  _x86_64_  (4 CPU)
00:05:08  DEV   tps    rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz  await  svctm  %util
00:05:09  xvda  20.00  0.00      1220.00   0.00      0.00      0.00   0.00   90.00
00:05:09  xvdb  0.00   0.00      0.00      0.00      0.00      0.00   0.00   0.00

00:05:09  DEV   tps    rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz  await  svctm  %util
00:05:10  xvda  24.00  0.00      1116.00   8.00      0.00      0.00   0.00   95.00
00:05:10  xvdb  0.00   0.00      0.00      0.00      0.00      0.00   0.00   0.00

After you confirm that there is consistently high disk utilization while the test is running, follow the steps below to find out which JVM threads are mostly utilizing the disk.
1.  Find the Java thread ID (TID) that is utilizing 100% disk I/O using the 'iotop' command. You can install iotop from YaST2 or download it from http://software.opensuse.org/package/iotop (a non-interactive iotop sketch follows below).
2.  In this example, the thread with TID 1028 is taking 99.99% of the disk I/O.
3.  Identify which particular component of the application (Sentinel, in this case) thread 1028 belongs to.
4.  Run the jstack command and redirect the thread dump output to a file:

./jstack <Application_JAVA_PID> > /tmp/jstack_1028.txt
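iotop can also be run non-interactively when capturing an interactive session is not practical; a small sketch (batch-mode flags are from the iotop man page, the sample count is illustrative):

iotop -b -o -n 5 >> /tmp/iotop_samples.txt   # batch mode, only threads actually doing I/O, five samples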

5.  Convert the decimal value 1028 (the thread ID with 99.99% I/O above) to the hexadecimal value 0x404, and grep for that hexadecimal value in the jstack output file. The jstack output file shows that the bottleneck is from the Lucene Merger thread (in Sentinel), which is occupying 99.99% of the disk I/O.
6.  You can also grep the jstack output file directly for deadlocked threads, if there are any.

Memory Bottlenecks
The Java heap and GC (garbage collection) are the key things to monitor for the JVM.

Java Heap Usage
1.  Find the heap memory allocated to the JVM by the application, and periodically check how much heap RAM is being utilized out of the allocated memory. For example: Heap RAM used: 979.09 MB of 6006.69 MB (16%) by the Server-8ADF1170-1348-10318242-0022190BB0A6 service.
2.  Capture the Java heap dump using 'jmap -heap <PID>' from the command-line terminal. You can use periodic dumps to analyze heap buildup that can occur in any application, and you can create a heap histogram to get information about the objects that may be taxing the memory; on finding unexpected allocations, you can inspect the classes creating them. Refer to this link for the heap snapshotting script: https://twiki.innerweb.novell.com/bin/view/Main/Miscutils. (A minimal periodic-snapshot loop is sketched after this list.)

novell@ismvvax03:~/jdk1.7.0_17/bin> ./jmap -heap 32483 > heap_dump_file.txt

3.  Open the dump file and look at the detailed heap memory usage of the application's JVM. For example, you can look at the following:
  GC algorithm used (ParallelGC, G1GC, ConcurrentGC)
  Heap configuration details
  Heap usage details (PS Young Generation, Eden Space, From Space, To Space, PS Old Generation, PS Perm Generation, etc.)
4.  In all of the above cases, keep an eye on when heap usage reaches 100%; at that point the JVM may run out of memory and produce a memory dump.
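A minimal sketch of such a periodic snapshot loop; the PID is taken from the jmap example above, and the interval and count are illustrative:

PID=32483
for i in $(seq 1 30); do
    { date; ./jmap -heap "$PID"; } >> "/tmp/heap_history_${PID}.txt"   # append a timestamped heap summary
    sleep 60                                                           # one sample per minute
done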

Garbage Collection (GC)
1.  Run the jstat command from the Java bin folder to find the periodic GC utilization.

Sentinel-ss7r:/home/novell/jdk1.7.0_17/bin # ./jstat -gc 26155 2 1

(The output lists the standard 'jstat -gc' columns - S0C, S1C, S0U, S1U, EC, EU, OC, OU, PC, PU, YGC, YGCT, FGC, FGCT, GCT - that is, the survivor, eden, old, and permanent generation capacities and usage, together with the young and full GC counts and times for the monitored JVM.)

2.  To determine the GC threads that are mostly holding the CPU, follow the procedure described in the CPU bottlenecks section above, and look for threads like the following in the jstack output file (identified as occupying the most CPU using the 'top' command with the 'H' option):

"Gang worker#7 (Parallel GC Threads)" prio=10 tid=0x0000000000629000 nid=0x4f0c runnable
"G1 Main Concurrent Mark GC Thread" prio=10 tid=0x00007ff25c020000 nid=0x4f16 runnable
"Gang worker#0 (G1 Parallel Marking Threads)" prio=10 tid=0x00007ff25c0b0000 nid=0x4f17 runnable
"Gang worker#1 (G1 Parallel Marking Threads)" prio=10 tid=0x00007ff25c0b1800 nid=0x4f18 runnable
"G1 Concurrent Refinement Thread#0" prio=10 tid=0x00007ff25c008800 nid=0x4f15 runnable
"G1 Concurrent Refinement Thread#1" prio=10 tid=0x00007ff25c006800 nid=0x4f14 runnable

Other GUI tools to analyze Java performance issues are jconsole, VisualVM, YourKit, and Applicare from Arcturus. To save time, quick debugging of performance issues and saving historical JVM data via automation scripts and CUI tools is recommended. Note: GUI tools may cause performance degradation of your application, whether run locally or used to monitor the application remotely from clients.

Conclusion
Based on the above methodology, you can identify the right performance attributes for various Java-based applications and accurately monitor them, and you can debug performance issues very quickly while running performance tests. Another advantage is that you can incorporate the Linux/Java command-line user interface (CUI) toolset output into any automation framework, so results are alerted or reported automatically if any deviation from the goal of the test defined earlier is found. This solution addresses all kinds of bottlenecks in the JVM run by the application.


Short Article

A Programmer’s Experiences

When I was in primary school, I copied my brother's handwriting style because of the success he used to get in exams. As expected, it did not help my marks. What happened was that, along with my marks, my copied handwriting also deteriorated. Still, I intuitively knew that successful people have a marked preference for a particular style of expression, and I used to copy their expressions hoping I would get their success. The only models I knew were role models. Then came video games and computers. With a computer, I realized that every time I type, however disinterested I am when I type, the characters on screen come out as neat as they can get. I got addicted to typing and programming and ditched my fascination for role models. A lot of the program writing I did was in the hope of finding a magic bullet for my deficiencies - much later than many of my friends. Happily, it never dawned upon me that a good model for expression is not sufficient for success.


What else do we need for success? Other models, of course. Model usage is very subjective, and each problem requires a separate model. There are plenty of them - one for each class of problem. Mathematicians and software engineers share a rich set of models: E-R diagrams, UML diagrams, data structures, algorithms, models in graph theory, models in set theory. My craze for programming graduated to a craze for discovering problem-oriented models, and the quest still continues... All through my career, these models have worked as predictably as the computer, and overwhelming problems have become easily solvable.

To sum up: a famous physics book contains a thought that summarizes the requirement for understanding physics (in the context of the chapter on Units and Measurements) - 'until we express something in numbers, we know nothing'. For an engineer, it is: 'until we express it in models, we know nothing'. It doesn't mean that we know nothing now because we do not do modeling; what it means is that the illumination necessarily happens after the modeling is done - even if it is done unconsciously.

Some Practical Usages
Reading hefty technical books: Technical books run to several hundred pages; they are a perennial source of input for all my experiments. What do we do and remember when we read books? What I discovered is that the only activity we need to do is to create an index (like the ones a DB creates) on one important attribute that can be extracted from every chapter, paragraph, line, and word: the motivation for why some information exists where it is. What is concrete about this free advice is that motivation has a neat structure that looks like a UML diagram. Problem solving is a decoupled activity: we read books solely with the intention of building an index, and we solve problems when the time comes.

Keshavan  I have been a Java developer in the industry for 10 years. I joined Novell 1.5 years ago.


Test Driven Development (TDD)

For a software developer, coding and testing are routine activities. We use several different testing strategies - unit testing, integration testing, and so on. From its name, TDD seems to be yet another testing method. This is, in fact, not so: TDD is a new way of developing software. In the traditional development model, the emphasis is on identifying features, designing, coding, unit testing, and so on. In TDD, the entire development process is driven by testing. Several questions may arise from this statement. Is it possible? Who will identify the test cases? Since there is no code yet, what will be tested? We answer these questions in the following sections.

In TDD, after the design of a feature has been frozen, instead of jumping into implementing the feature, we first write a set of test cases that define the feature being implemented. The test cases range from high-level test cases that define the feature as a whole down to all the minor sub-features and functionalities identified in the design. For example, if an HTTP proxy has to be implemented, there would be a test case to check whether the product is able to proxy an HTTP request from a browser to an HTTP server, and there would also be a test case for the smallest functionality, such as parsing of URLs. In this way, there is a set of test cases covering overlapping functionality that together helps us develop a reliable solution. The diagram below is a logical representation of the relationship between a feature and its constituent functionality. In TDD, before implementing these sub-features, test cases are written to test each of these sub-features as well as the feature as a whole; that is, there will be a test case for the main feature, sub-feature 1, sub-feature 2, and so on. With the development of each feature or sub-feature, the developer ensures that the corresponding test case passes. This is an excellent way to ensure that each function we write is properly tested, so we face fewer bugs while developing and spend less time debugging and fixing. This automated framework tests the smallest functionality, which means that we now have a self-testable mechanism, and any future change to the product can be tested by running it. Several tools are available to support this style of development: for the C language, cgreen is a great tool, and for C++, Google's mocking framework is very useful.

Example Problem:  Implementation of a function that can count the total number of words in a sentence.
Solution:  We will use cgreen to implement this function. As we have learned above, we will first write a test case and then start implementing features. We will first create an empty test suite in the file word_test.c:

#include "cgreen/cgreen.h"

TestSuite *words_tests() {
    TestSuite *suite = create_test_suite();
    return suite;
}

The following main test program, all_test.c, will execute all the test cases registered with it.

#include "cgreen/cgreen.h"

TestSuite *words_tests();

int main(int argc, char **argv) {
    TestSuite *suite = create_test_suite();
    add_suite(suite, words_tests());
    if (argc > 1) {
        return run_single_test(suite, argv[1], create_text_reporter());
    }
    return run_test_suite(suite, create_text_reporter());
}

Ankur Kumar  9+ years of experience in Network Protocol development and WAN Optimization

This is the extra effort needed to hook the test framework to the code being tested.

Now, we will add a few test cases to word_test.c.
#include "cgreen/cgreen.h"
#include "words.h"
#include <string.h>

Ensure(word_count_returned_from_split) {
    char *sentence = strdup("Birds of a feather");
    int word_count = split_words(sentence);
    assert_that(word_count, is_equal_to(4));
    free(sentence);
}

TestSuite *words_tests() {
    TestSuite *suite = create_test_suite();
    add_test(suite, word_count_returned_from_split);
    return suite;
}

split_words(char *sentence) is the function (not yet implemented) that will be tested by this test case, words.c is the file where the function will be implemented, and words.h is the C header file which contains the declaration of the function. assert_that() is a macro which takes two parameters - the value to assert and a constraint. The constraints come in various forms; in this case, we use the most common constraint, is_equal_to(). When the test program is executed, the test report is displayed on the console.
After this test case has been written, the function split_words() is coded in the file words.c. The file is compiled along with the test program into an executable.

bash$ gcc words.c all_test.c word_test.c -o all_test

Now, when all_test is executed, it will fail or pass depending on the output of the function.
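Because main() in all_test.c passes its first argument to run_single_test(), a single test can also be re-run by name while iterating. A quick sketch, assuming cgreen is installed as a system library (the -lcgreen link flag may vary with your installation):

bash$ gcc words.c all_test.c word_test.c -o all_test -lcgreen
bash$ ./all_test                                    # run the whole suite
bash$ ./all_test word_count_returned_from_split     # re-run a single test by name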

This is a small example of the use of cgreen for TDD. This can be extended to include more test cases covering the entire functionality of the product to be tested.

Benefits:  A 2005 study found that using TDD meant writing more tests and, in turn, programmers who wrote more tests tended to be more productive.[11] Hypotheses relating to code quality and a more direct correlation between TDD and productivity were inconclusive.[12] Programmers using pure TDD on new ("greenfield") projects reported they rarely felt the need to invoke a debugger. Used in conjunction with a version control system, when tests fail unexpectedly, reverting the code to the last version that passed all tests may often be more productive than debugging.[13]

Test-driven development offers more than just simple validation of correctness; it can also drive the design of a program. By focusing on the test cases first, one must imagine how the functionality is used by clients (in the first case, the test cases). So, the programmer is concerned with the interface before the implementation. This benefit is complementary to Design by Contract, as it approaches code through test cases rather than through mathematical assertions or preconceptions.

Test-driven development offers the ability to take small steps when required. It allows a programmer to focus on the task at hand, as the first goal is to make the test pass. Exceptional cases and error handling are not considered initially, and tests to create these extraneous circumstances are implemented separately. Test-driven development ensures, in this way, that all written code is covered by at least one test. This gives the programming team, and subsequent users, a greater level of confidence in the code.

While it is true that more code is required with TDD than without TDD because of the unit test code, the total code implementation time could be shorter, based on a model by Müller and Padberg.[14] A large number of tests helps to limit the number of defects in the code. The early and frequent nature of the testing helps to catch defects early in the development cycle, preventing them from becoming endemic and expensive problems. Eliminating defects early in the process usually avoids lengthy and tedious debugging later in the project.

TDD can lead to more modularized, flexible, and extensible code. This effect often comes about because the methodology requires that the developers think of the software in terms of small units that can be written and tested independently and integrated together later. This leads to smaller, more focused classes, looser coupling, and cleaner interfaces. The use of the mock object design pattern also contributes to the overall modularization of the code, because this pattern requires that the code be written so that modules can be switched easily between mock versions for unit testing and "real" versions for deployment.

Because no more code is written than necessary to pass a failing test case, automated tests tend to cover every code path. For example, for a TDD developer to add an else branch to an existing if statement, the developer first has to write a failing test case that motivates the branch. As a result, the automated tests resulting from TDD tend to be very thorough: they detect any unexpected changes in the code's behavior. This detects problems that can arise where a change later in the development cycle unexpectedly alters other functionality.

For more information, please see the references below.

References
1.  http://cgreen.sourceforge.net
2.  http://en.wikipedia.org/wiki/Test-driven_development


Vagrant

Introduction

Over the last few years, development and testing environments have changed from static, enterprise-controlled environments to dynamic and distributed ones. Couple that with ever-increasing customer demands and agile development processes, and the complexity is very high; managing it is a daunting task. "Vagrant" and similar DevOps tools help manage this complexity and enable development teams to be more agile. Vagrant brings configuration automation all the way down to the developer's desktop! In the Vagrant world, each team member of the project has Vagrant and VirtualBox (or another hypervisor) installed on their laptop or their test machines. A Vagrantfile with all the configuration is used for distribution, and a developer can create the required environment just by typing vagrant up.

So what is Vagrant? Vagrant is an open-source tool for creating and configuring lightweight, reproducible and portable virtual development environments. Vagrant lowers development environment setup time and increases development/production parity. It can be considered as a layer on top of Virtualization/Cloud solutions such as VirtualBox, VMware, Amazon Web Services, etc. and Configuration Management software such as Chef, Salt or Puppet.

Installation
All you need to get started with Vagrant is a hypervisor like VMware Fusion or Oracle VirtualBox; it's best to start with VirtualBox because it's free. And Vagrant, of course - the installers can be found on the Vagrant download page. With Vagrant, it's easy to move from a VirtualBox environment to other hypervisors or cloud services like Amazon Web Services or DigitalOcean; with a one-line change you can have the same environment created in the cloud.

Boxes Boxes are the skeleton from which Vagrant constructs the Virtual Machine. They are portable files which can be used by others on any platform that runs Vagrant to bring up a working environment. They are basically your VMWare VM or VirtualBox VM and some configuration files in a compressed format. Boxes are specific to the Hypervisor you are using, so you must obtain the proper box depending on which Hypervisor you are using. For our examples in this article we will be using Oracle VirtualBox Vagrant Boxes. Boxes can be created automatically from existing Vagrant environments, or manually from existing non-Vagrant managed VirtualBox machines.
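If you prefer, a box can also be downloaded and registered ahead of time with the vagrant box subcommands; a quick sketch using the same publicly hosted precise64 box that appears in the Vagrantfile later in this article:

$ vagrant box add precise64 http://files.vagrantup.com/precise64.box   # download and register the box under a name
$ vagrant box list                                                     # confirm the box is available locally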

Chendil Kumar  Senior Engineer at NetIQ and a designated specialist within the QE team working on the NetIQ Identity Management suite of products.



Vagrantfile

Once you have the base box, you are ready to use Vagrant to bring up a VM and configure the software you need.

Vagrant is configured per project. A project is denoted by the existence of a file named "Vagrantfile". Each project has a single Vagrantfile.

The Vagrantfile is a simple text file that Vagrant reads in order to determine what needs to be done to create your working environment. The file is a description of what operating system you want to boot (the base box), the physical properties of the machine you need (e.g., RAM), what software needs to be installed on the machine, and the various ways you'd like to access the machine over the network. Let's look at a simple Vagrantfile that will create an Ubuntu 12.04 LTS 64-bit VM and install the Apache HTTP Server in it. The Ubuntu base box is already available on the web.

Vagrant.configure("2") do |config|
  config.vm.box = "precise64"
  config.vm.box_url = "http://files.vagrantup.com/precise64.box"
  config.vm.network :forwarded_port, guest: 80, host: 8080
  config.vm.provision "shell", path: "provision.sh"
end

A file named "Vagrantfile" with the above contents should be placed in a directory. In the same directory we place the script provision.sh that installs Apache.

#!/usr/bin/env bash
echo "Installing Apache and setting it up…"
apt-get update >/dev/null 2>&1
apt-get install -y apache2 > /dev/null 2>&1

The syntax of the Vagrantfile is Ruby, but knowledge of the Ruby programming language is not necessary to make modifications to it, since it is mostly simple variable assignment. Let us examine the above Vagrantfile in detail: the first line starts the configuration block ("2" is the Vagrant API version we are using), the second line names the base box for the virtual machine, the third line specifies the location of the base box, the fourth line configures port forwarding (accessing port 8080 on the host will forward to port 80 on the guest OS), the fifth line executes the shell script which installs the Apache HTTP server, and the last line ends the configuration block.

Up and running

Now that the Vagrantfile is ready, you can get the VM up and running. Navigate to the directory where the Vagrantfile is located and execute the command "vagrant up".

$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
[default] Box 'precise64' was not found. Fetching box from specified URL for the provider 'virtualbox'.
Note that if the URL does not have a box for this provider, you should interrupt Vagrant now and add the box yourself. Otherwise Vagrant will attempt to download the full box prior to discovering this error.
Downloading or copying the box...
Extracting box... (rate: 1122k/s, Estimated time remaining: --:--:--)
Successfully added box 'precise64' with provider 'virtualbox'!
[default] Importing base box 'precise64'...
[default] Matching MAC address for NAT networking...
[default] Setting the name of the VM...
[default] Clearing any previously set forwarded ports...
[default] Creating shared folders metadata...
[default] Clearing any previously set network interfaces...
[default] Preparing network interfaces based on configuration...
[default] Forwarding ports...
[default] -- 22 => 2222 (adapter 1)
[default] -- 80 => 8080 (adapter 1)
[default] Booting VM...
[default] Waiting for machine to boot. This may take a few minutes...
[default] VM booted and ready for use!
[default] Mounting shared folders…
[default] -- /vagrant
[default] Running provisioner: shell…
[default] Running: /tmp/vagrant-shell20131030-4774-a93557
Installing Apache and setting it up…
$

The VM is created and running successfully. And of course, the Apache HTTP Server is installed and running on the default port 80. You can access the server from a browser on the host using http://localhost:8080, or use wget (remember, we mapped the ports).


$ wget http://localhost:8080/ --2013-10-30 15:05:50-- http://localhost:8080/ Resolving localhost (localhost)... 127.0.0.1 Connecting to localhost (localhost)|127.0.0.1|:8080… connected. HTTP request sent, awaiting response… 200 OK Length: 177 [text/html] Saving to : ‘index.html’ 100%[================================================= ========>] 177 --.-K/s in 0s 2013-10-30 15:05:50 (15.6 MB/s) - ‘index.html’ saved [177/177] $

SSH access is provided by default; we can see that SSH port 22 on the guest was automatically mapped to host port 2222 when we started the VM. You can access the VM directly using "vagrant ssh" or use SSH clients like PuTTY.

$ vagrant ssh
Welcome to Ubuntu 12.04 LTS (GNU/Linux 3.2.0-23-generic x86_64)
 * Documentation: https://help.ubuntu.com/
Welcome to your Vagrant-built virtual machine.
Last login: Fri Sep 14 06:23:18 2012 from 10.0.2.2
vagrant@precise64:~$

So if anybody needs an Ubuntu VM with Apache, all they need is this file with five lines (of course they need to have Vagrant and VirtualBox installed), and with just one command they will have a VM up and running on their laptop or desktop. Vagrant handles the entire lifecycle of the VM for you (SSH access, halt, destroy, suspend/resume); the corresponding commands are shown below. Additionally, it supports:
  Port forwarding
  Distribution
  Environment setup (network interfaces)
  Shared folders
  Provisioning of software onto the VM using Chef, Puppet, shell scripts, etc.
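For reference, the lifecycle operations mentioned above map to these everyday commands (run from the directory containing the Vagrantfile):

$ vagrant ssh       # open a shell inside the guest
$ vagrant halt      # shut the VM down gracefully
$ vagrant suspend   # save the VM state and stop it
$ vagrant resume    # wake a suspended VM
$ vagrant destroy   # remove the VM and reclaim disk space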

Use of Vagrant in NetIQ Identity Manager

Let's look at how we use Vagrant in NetIQ Identity Manager. The following Vagrantfile creates a SLES 11 SP2 VM with eDirectory and Identity Manager installed.

Vagrant::Config.run do |config|
  config.vm.box = 'IDM402_CustomerSetup'
  config.vm.box_url = "http://bullseye.qalab.cam.novell.com/chendil/sles11sp2.box"
  config.vm.provision :chef_solo do |chef|
    chef.cookbooks_path = "cookbooks"
    chef.add_recipe "edirectory"
    chef.add_recipe "idm"
    chef.json = {
      :edirectory => { :tree_name => "VAGRANT", :version => "88SP6" }
    }
  end
end

It uses the Opscode Chef configuration management language to configure eDirectory and Identity Manager. With Vagrant and Chef, it's easier than ever to create environments. To create IDM with eDirectory 88SP6, the developer just has to specify the version of eDirectory in the JSON block and run vagrant up; this brings up the environment. To create VMs with different eDirectory versions, all that is required is to change the version in the Vagrantfile. Knowledge of how the Chef recipes are written is not required to create the virtual machines - anybody can bring the VM up.

Conclusion

The possibilities are endless. At NetIQ, we have a private cloud setup where people can request a virtual machine with the required RAM, operating system, etc., which is more of a PaaS (Platform as a Service). With Vagrant, in the future we can request VMs with the software already installed, as discussed above; thus, we are moving towards SaaS (Software as a Service). Vagrant also supports the creation of multiple VMs that can communicate with each other. For instance, in the above example, we can provision IDM in one VM and iManager in another VM, with both able to talk to each other. Vagrant saves a lot of time and space - there is no need to take snapshots for each configuration anymore. It's worth trying Vagrant in your projects.

IF This Then That (IFTTT) is a service that lets you create powerful connections with a simple statement. For example: create a minutes-of-the-meeting template in your Evernote 15 minutes prior to the meeting. https://ifttt.com



Profile-Based Scale Testing

How do you perform scalability tests of a product that supports a huge volume of clients with limited hardware?

Proposed solution

In client-server architecture based products, you might want to scale up the number of clients supported per server or the number of clients supported by the product as a whole. The server treats any client within its service radar as an entity with a few unique characteristics, such as serial number, IP, MAC, hostname, or any product-specific identifier. We can leverage this behavior of the server to simulate more virtual clients from an actual client.

Profile

In an actual client, one state of unique characteristics and its associated workspace becomes a virtual client. This state of the virtual client is referred to as a Profile. The number of profiles that you can create on a client depends on the actual device hardware resources and the unique combinations possible with the characteristics of a client.

To give a simple analogy, a profile or virtual client is very much like a user profile on a Windows desktop. In a shared desktop environment, two or more users use the same desktop, and their look and feel, configurations, and applications might differ. When one user logs off, all of his workspace and environment is saved, like the bookkeeping activity mentioned under profile switching. Once the next user logs in, all of his previously saved configuration is restored and he experiences the same workspace.

Profile Switching

To simulate a huge number of clients in a test environment, create as many profiles as possible on the actual clients available to you. After performing the tests (designed for a client) on one profile, you do some bookkeeping activity to save the state of the profile, such as logs, workspace, and cache. This state of the profile (for which the test is finished) is resumed later for performing the next iteration of the test, if required, so that the state is not lost. After performing the bookkeeping activity for a profile, make the next profile active by making the application read the folders, files, and properties of the new profile. This process of saving the current profile's state after a test and making the next profile active is referred to as Profile Switching. Typical profile switching operations are copying folders or files, exporting and importing the registry, creating folder soft links, and configuring environment variables. Tests are performed on the client after every profile switch operation. If you have X actual client devices in your test environment and Y profiles created on each device, you can simulate X*Y clients in the test bed and thus check the performance of your product at higher scale.
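As an illustration, a profile switch on a Linux client could look like the following minimal sketch. The agent path, profile layout, and service script used here are assumptions for the sake of the example, not part of any specific product:

#!/usr/bin/env bash
# Hypothetical profile switch: re-point the agent's working folders at the next profile.
# /opt/agent, /opt/agent-profiles/<N>, and agent-service are illustrative names.
NEXT=$1                                    # number of the profile to activate

/etc/init.d/agent-service stop             # stop the client so its files are not in use

for d in cache conf logs work; do
    rm -f "/opt/agent/$d"                                   # drop the soft link of the finished profile
    ln -s "/opt/agent-profiles/$NEXT/$d" "/opt/agent/$d"    # link to the next profile's folders
done

/etc/init.d/agent-service start            # start the client against the new profile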

Ravella Raghunath  9 years of experience in the IT industry, with ample experience in scalability and performance testing. He is currently working with the ZENworks QA team.


Advantages
  Cost effective when compared to alternate methods such as renting Amazon or Verizon cloud infrastructure
  Less development effort required when compared with creating stripped-down versions of agents or stubs
  Easy to create the environment
  Easy to identify the issues and the corresponding clients

Note: This technique does not simulate concurrent load across all the profiles; rather, it simulates a huge volume of unique clients to test the scalability of the application.

Success Story: ZENworks Scalability Testing

In ZENworks 11 SP3, we were required to support 100,000 devices in the Management Zone. To accomplish this, we used the profile-based scale testing technique to simulate close to 100,000 clients in the ZENworks Management Zone. For this purpose, we created 7000 virtual machines on 2000 physical devices situated in Novell's Super Lab at Provo.

ZENworks identifies its clients using a 32-digit hexadecimal unique identifier called the GUID. ZENworks uses basic device properties such as hostname, MAC, and serial number as reconciliation parameters to distinguish the clients. To simulate multiple clients from a virtual machine, we disabled the reconciliation functionality. We created 14-15 profiles per VM.

Profile Creation
To create a ZENworks profile, the following steps are followed:
  Identify the ZENworks agent folders that hold client uniqueness, configuration, logs, cache, and content information.
  Move these folders under a directory named ZENworks-<GUID>.
  Create shortcuts or soft links for the moved folders under the actual ZENworks folder that is accessed by the application.

Profile Switching
To switch ZENworks profiles, the following steps are followed:
  ZENworks services are stopped.
  The existing cache, conf, logs, and work folder shortcuts in the ZENworks folder are deleted.
  Folder shortcuts are created again, pointing to the folders of the next profile.
  ZENworks services are started.

The ZENworks folder structure is shown in Figure 1; the shortcuts corresponding to a profile's folders are starred. The ZENworks profile folders are shown in Figure 2; the actual ZENworks folder accessed by the application is starred.

Figure 1  Folder structure of ZENworks Agent


Figure 2  ZENworks profile folders

Figure 3  Data in ZENworks profile

The folders that represent a ZENworks profile are shown in the figures above; a shortcut is created for these folders in the actual ZENworks folder. By using the Profile-Based Scale Testing technique, we have uncovered many scale-related issues in the product. A few are listed below:
  Device registration is slow with a high volume of objects in the zone
  Administration console navigation of pages that render objects is very slow
  Running or exporting reports is slow
  Client requests time out before servers compute the response
  Out-of-memory issues on the server with a large number of requests

Short Article

Application Streaming

In the recent past, many organizations have been giving importance to virtualization and cloud computing for various reasons. In this article, we discuss Application Virtualization.

Most of us are familiar with many applications, and some thought-provoking ideas are as follows:
  Is it possible to use/deploy an application without installation? Example: using GroupWise or the MS Office suite without installation.
  Is it possible to run multiple versions of the same application? Example: IE6, IE7, IE8, IE9 and IE10 - using any combination in parallel at a time.
  Is it possible to stream the application from a web server? Example: like videos from YouTube, can we download the application components on a need basis?
  Is it possible to use the application without connecting to a web server (assuming the application is available on a web server)?

Application streaming is the ultimate solution for all these ideas.

What is Application Streaming?

"Streaming" is a generic term; it basically means that the data being transferred can be used immediately, without having to download the "thing" in its entirety before it can be used. Application streaming is a form of on-demand software distribution and a type of application virtualization. Application streaming is an alternative to installing an application on the local system: instead of being installed, the application is streamed from a central server, and the application components are downloaded/streamed as and when they are required.

Why Application Streaming?

Organizations spend a lot of time and money on day-to-day operations installing, deploying, and configuring many physical applications. Application streaming provides many benefits compared to physical applications. Some of the benefits are as follows:
  Reduces the installation, deployment, and support cost for the applications, as the applications are streamed directly from the central server when required.
  Reduces the application maintenance cost and time.
  Better utilization of available (hardware) resources.
  The cost of software licenses and license administration can be reduced.

Features of Application Streaming
  Does not require installation of the application before use, saving installation and deployment time.
  The application maintains its own file system; application files, registry entries, and data are all separated from the physical file system.

Raghu Babu KVM  Has been working at Novell for the past 5 years and has around 10 years of experience in the industry. He has worked in different technologies such as Telecom Mediation, Billing & OSS, Virtualization, Cloud Computing, Collaboration and Mobility. His research interests are Cloud & Mobile Computing. Raghu received an MS (Software Systems) from BITS-Pilani and is currently working in the Corporate Interoperability Test (CIT) team.



  Multiple versions of the same application can run simultaneously, as the environment is isolated.
  Once the application is streamed to the client machine, it caches the required files instead of downloading them every time from the web server. This saves network bandwidth, and the cached files can be used to access the application offline.
  The end user may not be able to differentiate the virtual/streamed application from a physical application.

How to Achieve Application Streaming?

Streaming an application involves four steps: packaging, profiling, modeling, and configuring in the streaming server.

Packaging:  Creating the virtual application as a single file using a template or snapshot method.

Profiling:  Once the virtual application is created, use the application for some time so that profiles are created for it. The main purpose of profiling is to understand the end-to-end scenario of the application usage in order to build the transcripts for the application.

Modeling:  Once the transcripts/profiles are ready, the application is ready for streaming after modeling. In this process the modeling files (components) are stored in a specific directory, and the same are moved to a central server for streaming.

Configuring in the Streaming Server:  After moving the modeling files to the central server, configure the application parameters (name, icon, description, etc.) in the streaming server. The streamed application is then displayed and available to the subscribed users of the server.

Limitations of Application Streaming
  Not all applications can be virtualized/streamed. For example, applications that require a device driver and 16-bit applications that need to run in a shared memory space cannot be streamed.
  Anti-virus and other software that require heavy integration with the OS are difficult to virtualize/stream.
  Application virtualization faces many issues around software licensing; organizations are investing significant time and effort in resolving these issues.

Players in Application Streaming

Many organizations play an active role in the virtualization domain, especially in the application streaming area. Key players and their products are:

Organization    Product Name
Novell          ZENworks Application Virtualization (ZAV)
Microsoft       Microsoft Application Virtualization (App-V)
Citrix          XenApp
VMware          ThinApp
Symantec        Workspace Streaming

ZENworks Application Virtualization (ZAV)

ZENworks Application Virtualization is simple software for achieving application virtualization/streaming. ZAV Studio and ZAV Server are the two modules in ZAV. Both are simple executables and are easy to install on any Windows platform (including Windows 8 and Windows Server 2012). ZAV maintains its own file structure to manage the virtualized applications.
  ZAV Studio is used to virtualize the application in several ways: the template method, desktop scan, and snapshot method.
  ZAV Server is a centralized web server that stores the virtual application components; application streaming occurs from this server.
  Application streaming can be achieved through the ZAV console (a small desktop application) or from a web browser (with the ZAV plug-in).

Some of the ZAV features
  Easy to publish the virtual application to the ZAV Server from ZAV Studio.
  Easy to publish the virtual application to the ZENworks server, which in turn can distribute it to managed devices.
  Easy to collect virtual application statistics, which can be uploaded to the ZENworks inventory report.
  Easy to restrict applications based on LDAP authentication / ZENworks agent restrictions.
  Easy to restrict applications based on timelines; i.e., you can build an application that expires after n days.
  Inventory, usage, and other types of reports for easy administration.

For more information, visit www.novell.com/products/zenworks/applicationvirtualization



