IDC Technical Journal Issue 4


Issue 04 "VHVTU 2014

IDC Technical Journal Where technology starts

AUTOMATION

In-depth articles:

Automation of web service secure token communication using advanced features of SoapUI
Enterprise Usability: Beyond the Button
Datomic for historical data reporting
The Importance of Capacity Planning and Monitoring NoSQL (Not Only SQL) Data Warehouse DMS
Reactive Programming on JVM
Security Challenges and Solutions

Short Articles:

Bits and Bytes:

Easier Device Certificate Management by integrating SCEP into ZENworks Mobile Management

Breaking the Paradigm - Strategies to Test

Role of EQ (emotional quotient) in Workplace Productivity

App Reviews:

Datomic

Java

Review of Mobile Apps: What's new in the mobile apps world.

WBF

CSS

AngularJS


EDITORIAL BOARD Jaimon Jose Pradeep Kumar Chaturvedi Shalabh Kumar Garg

CONTRIBUTING EDITORS Archana Tiwary Mridula Mukund Vijay Kulkarni

COVER DESIGN Sandeep Virmani

BOOK COMPOSITION Arul Kumar Kannaiyan


Contents

IDC Tech Journal Volume 4 – August 2014

In-Depth Articles
1 Improve Mobile Application Performance – Anmol Rastogi
3 Automation of Web Service Secure Token Communication by Using Advanced Features of SoapUI – Anup Kumar R
7 Enterprise Usability - Beyond the Button – Harippriya Sivapatham
9 A structured way of executing Exploratory Testing – Harish Mohan
11 Responsive Design – Nirmal Balasubramanian
13 Historical Data Reporting and Datomic – Rajiv Kumar
17 Multi-Factor User Authentication by using Biometric Identification – Windows Biometric Framework (WBF) Case Study – GNVS Sudhakar
21 The Importance of Capacity Planning and Monitoring NoSQL (Not Only SQL) Data Warehouse DMS – Sudipta Roy
23 Asynchronous Reactive JVM Programming – Sureshkumar Thangavel

Short Articles
16 Innovation Sparks - 1 – Sachin Keskar
19 Smart Testing Ideas – Vikram Kaidabett
25 MDM Certificate Management – Using SCEP the right way – Ashok Kumar
27 A Role of Emotional Intelligence Quotient (EQ) in Workplace Productivity – Radha Devaraj

Miscellaneous
iv Editorial: Learning from the Hackathon – Jaimon Jose
5 Bits & Bytes: Breaking the Paradigm - Strategies to Test – Vijay Kumar Kuchakuri
6 Reviews: Mobile Apps (Android) – Karthik


EDITORIAL

Hackathons with a collaborative spirit can foster an environment of success and triumph over long odds and obstacles, even in a competitive setting.

Learning from the Hackathon

We recently concluded a very successful Hackathon where more than 100 engineers sweated it out for 2-4 days to build something they had nurtured over a period of time. This was the first time we conducted such an event, though innovation and the freedom to do something different have always been encouraged. There are a few interesting takeaways from this event.

Number of ideas submitted
The organizers intentionally kept the idea submission page public, and it was interesting to note the rate at which new submissions came in. Though engineers were initially confused about what qualifies for a Hackathon, reading through already-submitted ideas seemed to give them an opportunity to review their proposals and generate more ideas of their own. Our colleagues had generated over 100 ideas by the time we started hacking, with over 150 engineers participating in the event.

Participation
Overall, hackers felt the arrangements in the hackdens and other conveniences helped them focus and deliver what they intended to in a short duration. Attention deficit is common in our workplaces today, and such events help participants get uninterrupted time to focus on their ideas. In two days, they had to connect with their teammates, arrive at a consensus on their idea, assign tasks, and avoid overlapping work.


Impact of the work
One may not expect to solve a big problem in a Hackathon, or to release a product after being holed up in a hackden for a couple of days. But participants can make small dents in some of the most intractable problems. They get to test out a new idea. More importantly, they realize their potential, and getting the work done within time constraints boosts their confidence. It's important to note the energy such events usher into the team: people felt invigorated and recharged to do more.

Quality of the work
Though a Hackathon is meant for unlocking creative skills, the proposals ranged from complex problems to simple bug fixes. The hackers completed 63 ideas in the two days of the Hackathon (some members worked over the weekend!), and of these, 8-9 were really worth considering as major features of various products. This is an encouraging number given that this was our first Hackathon. In short, such Hackathons do good for the organization, and participation in them motivates everybody and boosts their confidence. Teams should explore ways and means to implement similar events at their own level. They are great for generating insights into how you work, and for assessing your capability when you are completely devoted to a single pursuit.

Jaimon Jose


Anmol Rastogi
A Specialist in Product Development at Novell with 11 years of experience. He received his M.C.A. degree from UP Technical University in 2003 and went on to earn the title of "Senior Software Engineer" at Symphony Software Services.

Improve Mobile Application Performance

View Point
Nowadays, applications are deployed on almost every mobile device, such as iPhone and Android phones. These applications provide continuous access to online activities: broadband media via sites like YouTube, popular social networking sites like Facebook, MySpace and LinkedIn, and even financial transactions. Of course, application performance depends on a few major factors such as the phone's processor and RAM, but the behavior of the wireless network is another important factor in delivering continuous online access with ease. This paper is intended to explain the network factors that impact application performance, along with strategies that can be followed to mitigate them.

Data vulnerability through the most common types of network traffic
A few of the most common types of network traffic, outlined below, can lead to slow response times and lost productivity.
Social Networking, Online Gaming, Broadband Media: Social networking applications are increasingly used for sharing text, photos, personal profiles, videos and more, and hence are a major factor in data explosion.
Deliberately misleading Applications: Applications running on a client/server architecture require connectivity each time, which slows the network and causes the UI to become less responsive.
As the number of smart mobile users has grown drastically over the past 2-3 years, and with data connectivity coming from either Wi-Fi or 2G/3G networks, there is a greater chance of data explosion, which can degrade application performance if not tested well.

Application (Client/Server) Performance through data delivery
Data delivery to the device: This process involves a request from the device to the server for data, which leads to response generation at the server and then sending the response back to the device. In the process of receiving the request and sending the response back, a few typical factors are involved, such as load on the server, the type of network connection, and load on the network connection. These factors can be tested well using emulators, which are easily available for free.
Data display at the source device: This process involves displaying the data received from the server. Various factors affect how efficiently the data is displayed, such as the type of OS used and the configuration of the device. This is out of scope and will not be discussed in this document.

The Network Factors
The network on which the application is used impacts performance tremendously, especially for mobile and cloud. These factors are as follows:
Inconsistent bandwidth: Multiple users are connected to the same network at the same time, reducing the available bandwidth of the network.
High jitter and increased latency:



Table 1. Recreating the real client/server communication conditions

Condition: Workload type in terms of where the requests are generating from. Example: Web application using native iPhone and Android applications.
Recreated by: Create the scripts specific to a workload type.
Available solution: Any load testing solution (HP LoadRunner, Soasta CloudTest, etc.)

Condition: Load on server in terms of number of users from different workload types. Example: 200 from Web application, 50 each from iPhone and Android native apps.
Recreated by: Create a load testing scenario specific to the load and the associated scripts for a workload type.
Available solution: Any load testing solution (HP LoadRunner, Soasta CloudTest, etc.)

Condition: Load on server in terms of number of users from different geographic locations. Example: 50% from US, 50% ROW.
Recreated by: Generate load from the load generators at the identified locations.
Available solution: Cloud-based load testing solution (Soasta CloudTest, Gomez Web Load Testing, Keynote Web Load Testing)

Table 2. Recreating the real network conditions

Condition: Network type and quality. Example: 3G/2G/WiFi - average/best/worst.
Recreated by: 1. Emulation. 2. Devices in real networks by mobile carriers or ISP.
Available solution: 1. Infosys Windtunnel, Shunra. 2. Keynote DeviceAnywhere, Gomez Synthetic Monitoring.

Condition: Network load. Example: 50% bandwidth utilized.
Recreated by: Possible only by emulation.
Available solution: Any network emulation solution (Infosys Windtunnel, Shunra).

Condition: Network by geography. Example: AT&T 3G in New York, Airtel 3G in Bangalore.
Recreated by: 1. Emulation. 2. Devices in real networks by mobile carriers or ISP.
Available solution: 1. Infosys Windtunnel, Shunra. 2. Keynote DeviceAnywhere, Gomez Synthetic Monitoring.

As most end-user applications used over the Internet are based on TCP, latency is crucial to the broadband experience. TCP requires the recipient of a packet to acknowledge its receipt. If the sender does not receive an acknowledgment within a certain amount of time (ms), TCP assumes that the connection is congested and slows down the rate at which it sends packets. Short data transfers (also called Web "mice") suffer more from reduced TCP performance.
Packet loss: As more requests and responses are processed over the network at the same time, congestion increases, opening up the possibility of packet loss. Since most applications use TCP/IP, packet loss is handled by the protocol, but until the data is received by the application, the UI may freeze.
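Because latency, rather than raw bandwidth, often dominates the perceived responsiveness of this request/response traffic, it helps to measure round-trip times directly from the client side. The following Groovy sketch is illustrative only (the URL is a placeholder, not from this article); run once on Wi-Fi and once on a 2G/3G connection, the spread of the numbers demonstrates the latency and jitter effects described above:

// Minimal latency probe: time a few sequential GET requests
def url = new URL("http://example.com/api/ping")   // hypothetical endpoint
def samples = (1..5).collect {
    long t0 = System.nanoTime()
    def conn = url.openConnection()
    conn.connect()
    conn.inputStream.close()                       // force the full round trip
    (System.nanoTime() - t0) / 1000000             // elapsed milliseconds
}
println "RTT samples (ms): ${samples}; jitter (max-min): ${samples.max() - samples.min()} ms"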

Impact due to network inconsistency
The various impacts of network inconsistency are discussed below.
UI responsiveness: During peaks, when latency is increased, the user interface freezes and becomes less responsive, frustrating users.
Data synchronization: If the network fails, it is difficult to synchronize any transactions initiated from the device over the failed network.
Functionality issues: Network congestion, packet loss, and slow responses may all cause functionality issues in an application.

Mitigating the impact of network variability
The following kinds of testing can be performed to improve application performance:
Recreating the real client/server communication conditions: This involves identifying and recreating the conditions expected on the production servers. The multiple conditions that need to be recreated are listed in Table 1 above.
Recreating the real network conditions: This involves identifying and recreating the conditions on the target network while gauging the application's performance on the target device (see Table 2 above).
After recreating the real-world conditions, the mobile app developer should measure the performance of the application for every delivery mode, along with the other hardware and software components involved. It is important to measure the performance of every component, as that provides an end-to-end view of the application's performance rather than an isolated one-device perspective.

Ever wondered if you could get a free US number so that friends and family can call you easily? Talkatone is an option that lets you pick a number of your choice and lets you text and receive calls freely. This service stands out in the crowd of other services such as Viber, Line, WeChat and Skype for its sound quality and ease of use. They charge a nominal Rs. 1 per minute for calling once your free monthly credit is over. Though Google Voice is an option if you are willing to pay a one-time charge for a US number, it's a pity that Google Hangouts on Android doesn't have the voice integration, whereas its iOS counterpart has full integration of Voice. Google Voice provides free unlimited US calls.


Anup Kumar R
QA Specialist currently working on NetIQ Access Manager. He has been with Novell for 12+ years and has QA experience in various access management products and the TCP/IP stack. He holds a master's degree in Software Systems.

Automation of Web Service Secure Token Communication by Using Advanced Features of SoapUI

Introduction

Something that drove me really crazy over the years of my testing career is testing Web service secure communication. With the number of browser plug-ins, extensions, and stand-alone tools available, Web application testing has become far easier than when I started. Testing secure Web services can be difficult due to the lack of Web service testing tools in the industry, but I would attribute it largely to the lack of articles and solutions on the various challenges in Web service testing. This article is an attempt to make the testing community aware of the different possibilities in testing Web services and their security with the help of SoapUI. SoapUI and SOAPSonar are two of the leading tools available for testing Web services. SoapUI has a free version, which made me select it over SOAPSonar.

SoapUI Project

The first step is to create a SOAP project by importing the WSDL of the Web service.

In the 3rd edition of the IDC Tech Journal, there was an interesting article titled "Web Services Testing Using SoapUI" by Girish Mutt. It was a nice introduction to SoapUI and its various features, which allows me to skip that material here. In this article, we take a closer look at some of the specific advanced features and other facilities available in SoapUI that can help you automate the complex requirements of Web service testing and its secure communication. The secure communication covers WS-Security and an extension to it called WS-Trust. This article will not explain WS-Security and its related standards. For more information about WS-Security and WS-Trust, see http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=wss and http://docs.oasis-open.org/ws-sx/ws-trust/v1.4/ws-trust.html respectively.

SoapUI Test Design
Let us start with some basics. SoapUI can create SOAP-based, REST-based, or generic projects. We will restrict this article to SOAP-based projects.

During the creation of the project, SoapUI can automatically create the SOAP requests defined in the WSDL. For normal Web services, which do not have security requirements, these automatically created SOAP requests are sufficient for testing. When WS-Security has to be applied, they need some additional modification. However, these requests give you the details of the endpoints to which requests need to be sent, and actual test requests with security headers can be built on top of these basic SOAP requests. Typically a test suite is created under the project. A project can have multiple test suites, and a test suite is nothing but a logical grouping of multiple test cases. A test case is made up of several steps, called TestSteps. The following is an example structure of a SoapUI project:

Project
  Interface
  TestSuites
    TestCases
      TestSteps
  LoadTests
  MockServices

In the structure, you can see the interface, load tests, and mock services. Load tests will be touched upon very minimally later in the article; the other two are out of scope for this article. Let us move into the details of designing a test case now.

SoapUI Test Case
SoapUI has many interesting features to help automate a Web service test case. As mentioned earlier, a test case is made up of several test steps, and it is in the design of test steps that the real strengths of SoapUI show. A typical test case has multiple steps. The four basic types of test steps are:
a. Property Steps – Store different properties that can be referred to anywhere in the project.
b. Test Request – Actual requests to the server. The response from the server for each test request can be intercepted and later manipulated by SoapUI. It is the scope for customization available for this manipulation that makes the tool so useful.
c. Property Transfer – Helps move properties between different steps. This creates continuity between test steps by transferring information from one test request/response to a subsequent request.
d. Assertions – Assertions can be made in test requests or used as a separate test step. They are used for validating the response. Built-in checks verify whether the response is a SOAP response, whether it is schema compliant, or whether it is not a SOAP fault. However, the real value-add is using assertions with the XPath match, which validates a particular element in the response. The XPath match can also be used with property transfer, where a certain element from the response is transferred to subsequent requests. A minimal sketch of such an assertion follows below.
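As an illustration of point d, here is a minimal Groovy script assertion of the kind SoapUI supports. This is a sketch, not code from the article: the element name Status and the expected value Success are placeholder assumptions. In a SoapUI script assertion, the built-in messageExchange variable exposes the last request/response pair:

// Script assertion sketch: validate one element of the response
assert messageExchange.responseContent != null, "No response received"
def envelope = new XmlSlurper().parseText(messageExchange.responseContent)
// '**' walks the document depth-first; namespaces are ignored for brevity
def status = envelope.'**'.find { it.name() == 'Status' }   // hypothetical element
assert status?.text() == 'Success'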

With the basics covered, let us move to the advanced capabilities of SoapUI.

WS-Security
SoapUI manages the WS-Security related configurations at the project level, which allows these configurations to be used in any part of the project. There are four different aspects of the WS-Security configuration:


Outgoing WS-Security Configurations: Detail what should be applied to outgoing requests. They can be used for encryption, signing, and adding SAML, timestamp, and username headers.
Incoming WS-Security Configurations: Detail what should be applied to incoming responses. They are typically used for decrypting and verifying the signature of incoming messages.
Keystores: Detail the keystores used for encryption, decryption, and signing.
Truststores: Detail the truststores used for signature verification.

Scripting
The most important capability of SoapUI, in my opinion, is its scripting support and the script library. SoapUI provides extensive options for scripting using either Groovy (http://groovy.codehaus.org/) or JavaScript. This article touches only on scripting with Groovy. Scripting can be used in various places in SoapUI:
A test step itself can be a Groovy script, which allows your tests to perform virtually any desired functionality. Groovy, as a scripting language, is very strong at manipulating the XML documents in SOAP requests and responses.
For initializing or cleaning up different variables before and after your tests.
For creating arbitrary assertions with the script assertion.
Java
Groovy has a rich set of built-in libraries for XML manipulation. However, if you have difficulty finding the right library in Groovy, you can write the functions in Java itself and call them from within the Groovy script. Here is how you do it:
1. Compile the Java program (.java file) and package it in a JAR file.
2. Put the JAR in soapui\bin\ext.
3. Restart SoapUI.
Now a Groovy step in SoapUI can access this library by importing the package as follows:

import ExamplePackage.*
log.info "Package Imported"

You can also load a JAR at runtime with this.getClass().classLoader.rootLoader.addURL.
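A minimal sketch of that runtime approach, assuming a hypothetical helper class ExamplePackage.XmlSigner packaged in signing-helper.jar (the class name, method, and paths are invented for illustration):

// Load a helper JAR at runtime and call its Java class from Groovy
this.getClass().classLoader.rootLoader.addURL(new URL("file:///C:/soapui-ext/signing-helper.jar"))
def signer = Class.forName("ExamplePackage.XmlSigner").newInstance()    // hypothetical class
def signedXml = signer.sign(new File("C:/temp/wsp-request.xml").text)   // assumed method and path
log.info "Signed request is ${signedXml.length()} characters long"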

WS-Trust
Though WS-Trust is not yet supported by SoapUI, using the advanced features mentioned above we could automate 100% of our WS-Trust test cases with SoapUI. In our WS-Trust test cases, SoapUI was used to simulate the Web service client. Web services in a WS-Trust environment are protected, and only Web service requests carrying a valid token are given access. So the Web service client has to first request a token from a Secure Token Service (STS) and then present that token to access the actual Web service. A single test case in this scenario needs communication with the following two SOAP endpoints:
STS endpoint
Web service endpoint

Proof of Concept
In this section, we look in detail at one of the test cases in the SoapUI test automation for WS-Trust. The test case selected for discussion is the SAML 1.1 Issuance Token Request. This test case has 12 steps altogether; the image icon against each test step identifies its type.
1. RST – Sample (1): A property step. The sample request XML file is stored in a property.
2. Transfer RST – Sample to Issue SOAP Request: A property transfer step, where the sample XML stored in the property is transferred to the actual SOAP request.


3. Issue: The actual SOAP request to the STS to obtain a SAML token. This step also uses the Outgoing WS-Security Configuration to add security headers, such as the username/password header and the timestamp header, to the request.
4. RSTR – Write to file: A Groovy step, where the response of the previous step is captured and written to a file.
5. add-timestamp: Another Groovy step that adds the current timestamp to the request, using Groovy's capabilities to create timestamps in the required format (a sketch of such a step follows this list).
6. WSP-Request-Sample(3): Another property step to store the sample request XML file. This property step also stores the starttimestamp and endtimestamp values created in the previous Groovy step.
7. Transfer request xml and timestamp to Request-before-signing: Another property transfer step, where the stored request XML and timestamp values are transferred to the request that needs to be signed thereafter.
8. Request-before-signing(1): Another property step to store the intermediate unsigned XML document.

9. WSP – Request – Write to file: Another Groovy script, to write the intermediate unsigned XML document to a file on the hard disk.
10. Signing the WSP-Request using Java Code: Another Groovy script, which uses functions from the imported Java package. The entire signing of the XML document is done in core Java code used from inside Groovy. The signed XML document is written to the file system again within the same Groovy script.
11. create the wsp request from the signed XML file: Another Groovy script, which prepares the actual request to the Web service.
12. WSP Request: The final SOAP request to the Web service. This step also has three assertions: assertion 1 validates that the response is a SOAP response, assertion 2 validates that the response is not a SOAP fault, and assertion 3 validates the correctness of a particular element in the response.
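To make step 5 concrete, here is a minimal sketch of an add-timestamp style Groovy step. The five-minute validity window and the exact date format are assumptions for illustration; starttimestamp and endtimestamp are the property names referenced in step 6:

// Create Created/Expires style timestamps and store them as test case properties
def fmt = new java.text.SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'")
fmt.timeZone = TimeZone.getTimeZone("UTC")
def now = new Date()
testRunner.testCase.setPropertyValue("starttimestamp", fmt.format(now))
testRunner.testCase.setPropertyValue("endtimestamp",
        fmt.format(new Date(now.time + 5 * 60 * 1000)))   // assumed 5-minute window
log.info "Timestamps set from ${fmt.format(now)}"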

Reporting
The test suite for WS-Trust automation had around 100 test cases. SoapUI reporting gives both a live view of test execution and a test execution summary.

Conclusion
The craziness that drove me at the beginning of testing Web services had changed into a satisfying affair by the end of this automation. With its nice UI and extended capabilities, SoapUI definitely makes life easier in the Web service testing arena. In addition to the free version of SoapUI, there is a Pro version as well, which adds a few more super cool features. The Groovy capabilities, and the option to use Java code within Groovy, take this tool to the next level, where many types of complex testing challenges can be solved with ease.

Bits & Bytes

Sometimes, if not always, it is good to break the paradigm.

1. Being Ad Hoc: In every sprint and test phase, we can probably allocate the first few hours or days (based on the length of the phase), about 10% of the time, to ad hoc testing. Good test cases are those which find bugs. Completion is important, and so, while you log the defects and get them fixed, you have the rest of the time to work on a whole lot of other tests.
2. List Your Defects First: Traditionally, we are all used to writing test cases, executing them, and filing defects against the observations/failures. Why not reverse this process once in a while? How about writing the defects first in a spreadsheet, then executing actions and working towards finding them in the product, and building up the test suite towards achieving that goal? In short, it is defect-driven testing.
3. UI and Special Characters: Treat all special characters the special way at least once. UI validation, search, user login, etc., exercised with all the special characters on the keyboard, may just yield the right use cases.
4. Personified Tests: Imagine what an administrator would like to achieve. An administrator may have the following questions:
a. Wants to deploy an update, look out for failures, debug and fix them. How easy is it to do that?
b. Has triggered a quick task; has it been received by all devices? If not, why not? Do you know who triggered it (audit)?
c. Searches for all events (change/agent) which have a given target or initiator.
d. Didn't understand this feature. Is there correct information in the context-sensitive help, and is it easy to correlate to what is required?
5. Big Net with Small Holes: We need to be agile, adept and creative. We need to know multiple components to understand the big picture. Switch components once in a while. Test what is not assigned. Cover areas which you don't know. Seek information. Question the behavior. You might actually be touching upon the right areas/issues. We all need to have bigger nets with smaller holes to be able to garner a whole bunch of fishes of all sizes :)

Breaking the Paradigm - Strategies to Test — Vijay Kumar Kuchakuri



Reviews

Mobile Apps (Android)

– Karthik

Dormi (Monitors Babies)
Developer: Jay Simon (Sleekbit's co-founder)
Are you a first-time parent facing middle-of-the-night cries for attention from your child? Ever heard of Dormi? It is the app you should have in this mobile age.

Pre-Requisites
Two Android devices (smartphones or tablets; anything running Android version 2.3 and above)
Internet connection

Operations
1. Connect to the Internet (via Wi-Fi, 3G/4G network, etc.).
2. Download and install the app on both Android devices.
3. From the first device, find that the other device is auto-discovered, and simply generate a 5-digit password to pair with the second device.
4. Make one device act as the child device and the other as the parent device.
5. Place the child device in your child's bedroom and keep the parent device with yourself.
6. Monitor the child's sleeping patterns, noises, disturbances in sleep, etc., even when you're away from the child, by sliding the circle button to the play button on your device.

Features
If you want to talk to your child, there's a push-to-talk button on the parent device (walkie-talkie style). The parent's device can be used to listen to noises and sounds detected at the child's location.


Notifications are received on the parent's device when connectivity is lost between the parent and child devices. With this app, you can even enjoy a quick holiday or a stay away from home.

Merits
The look and operational feel are smooth, and the UI design is great: simply slide to begin monitoring. The trial version is absolutely free for 4 hours a month, and more time can be had using credit points; the paid version costs $7. Available from the Google Play store.

Rating based on My Experience: 4.5/5


Harippriya Sivapatham
A specialist in the Access Manager team. She has 12 years of experience developing Java-based UI applications end to end, and has been appreciated for her initiatives to improve usability and for simplifying the Access Manager Administration Console.

Enterprise Usability - Beyond the Button

Enterprise products are complex, powerful tools targeted at skilled users. The usability challenges of enterprise software go beyond building an intuitive GUI. For example, usability can be improved by making the configuration screens intuitive and easy to use; extending this and creating a wizard from these screens to simplify frequently used functionality takes usability to the next level. Similarly, simplified installation, useful alerts, better status feedback, and so on result in improved usability. Enterprise products should focus on user-centered design to improve the user experience as a whole. Apple Inc. proclaimed in its developer conference that they don't build products, they build user experiences. This trend is catching on. This article focuses on usability in the context of enterprise software.

Value of Usability
Let us start with the big picture of why usability matters for enterprise products, where the feature set is king. The short answer is that usability has real economic consequences. Total cost of ownership (TCO) of a product is a critical metric for organizations. Poor usability results in decreased productivity, is error-prone, and requires excessive training. This increases overhead costs and undermines the business benefits of the product.

Usable products translate into real cost savings for organizations. Happy customers are repeat customers. This in turn helps software vendors by increasing the revenue from license renewals. Improved usability is a win-win in terms of revenue for the vendor and consumer.

User Centered Design
User-centered design is a philosophy that places the user at the center of the development process right from the beginning. The three main activities for development that incorporates usability are:

Understand Users
Engineers are not users, and the product manager is not your user! Make no assumptions about users' goals and priorities; gather users' actual needs. Create your own network of people who regularly interact with customers, such as Technical Support, Consultants, and Pre-sales Engineers. They can provide information on the issues users are facing and how features will actually be used to solve those issues. User information helps you add the right flexibility and make the right assumptions during design.

Design
Usability issues are usually discovered when the feature is ready to be tried out, by which time it may be too late to make major changes. Simply adding a vague 'Improve User Experience' item to the requirements list will not help; software architects need specific qualities to design for. This problem is solved by Usability Supporting Architectural Patterns (USAPs). USAPs include a list of usability properties and patterns that are known to improve the user experience. These replace vague requirements with quantifiable ones that the architects can prioritize and support.





Usability Patterns
Usability patterns are solutions for incorporating the usability principles in a design. Unlike software design patterns, they do not provide an implementation mechanism; they are merely concepts to use. Some of the patterns are:
Wizards – for frequently performed tasks and for complex tasks
Natural Mapping – an interface that is obvious for the function it performs
Multitasking and Delegation
Error Correction
  Undo – always provide a way to revert errors
  Cancel – provide ways to reliably cancel incorrect actions or long-running actions
  Human-readable error messages
Error Prevention
  User Confirmation
  Data Validation
Feedback
  Progress and status indication
  Alerts

Usability Evaluation
Any testing is better than no testing. Perform usability testing with anyone available. When fixing problems, try to do the least you can: tweak, but don't redesign. The 80-20 rule applies: fixing the top 20% of errors will most likely help 80% of the users.

Conclusion
Market analysts say that demand for usable enterprise software is snowballing. Moreover, enterprise software companies can no longer sell software on the strength of the feature set alone. Some of the world's leading corporations are using a standard developed by the US National Institute of Standards and Technology (NIST) to make usability a key factor in their choice of software: NIST has developed the Common Industry Format (CIF) standard for usability test reports, which allows buyers to compare product usability. Vendors who adapt fastest to the changing market will gain ground over their competitors.

References 1. Steve Krug. Rocket Surgery Made Easy. Pearson Education Inc, 2010. 2. Eelke Folmer, Jan Bosch. Usability Patterns in Software Architecture. University of Groningen, the Netherlands 3. ComputerWeekly.com. Software usability tests can save you millions.

Cam Scanner — Easily manage small paper documents
CamScanner is an intelligent document scanning and management app with sync features. The app performs as a mobile scanner with very good editing features. It has gained popularity over time, with over a million downloads on the Android platform and about 6 million users around the world. Compared to other apps that simply take pictures of documents with the camera, CamScanner enhances the text and graphics of the scanned documents to produce a clear and sharp result. The app is especially useful for small docs like business cards and payment receipts. It easily identifies the layout of documents on any flat surface and gives you an option to edit them. It is like carrying a small scanner in your pocket. There is also an option for optical character recognition (OCR), but like most OCR systems, this is just about OK. Scanned documents can be exported as PDF or JPEG files or merged with other PDF documents. Most of the time, I use it to back up important documents and save them in Google Drive. This way all my important documents are safe and readily accessible. The best part is that the free version of the app provides all the basic features required for normal usage. The app is available on all the major platforms (Android/iOS/Windows Mobile).


Harish Mohan
Has 9 years of experience in the industry and is currently working on the PlateSpin product suite. Holds a Master's degree from BITS and has extensive experience in the virtualization domain. Apart from work, he pursues his photography interests and is an avid traveler.

A structured way of executing Exploratory Testing

Exploratory Testing (ET) is a creative way of testing. The most important difference between exploratory testing and scripted testing is that scripted testing is an organized, structured way of conducting test execution and tracking, using pre-populated test cases and reporting mechanisms, whereas ET is widely considered an ad hoc way of conducting test execution, with no prior test cases and definitely no tracking mechanisms. The benefits of ET are high because it takes the monotony out of the testing job and gives testers space to be more creative with their testing. It gives them a chance to do things differently, which can expose weak areas of the products. But since these tests are not scripted, this type of testing comes with the caveat of being ad hoc and difficult to quantify. Does it actually have that problem? Well... the need to track any engineering process is to make sure that the effort is spent in the right direction and in the correct amount, so that the output is maximized. By its very nature, tracking exploratory testing is a bit challenging, but it is far from impossible. If followed properly, exploratory testing is a highly manageable and creative way of testing. One way of structuring exploratory testing is by time-boxing it with a proper mission, and making it reviewable by employing Session Based Testing.

What is Session Based Testing?
It's a testing method where exploratory testing is done in small, time-boxed, manageable chunks with quantifiable results.

What is a Session?
A session is a reviewable test effort done in a pre-decided format. Each session is associated with a mission and is followed by a summary, called the Session Summary. This type of testing is all about the session; it's not about test cases or a bug report.
The Mission: The mission of a session can be a generic area of the product to be tested, or specific functionality of the product. It can also be a specific test configuration (for example, testing a product with a remote DB), a specific test strategy such as performance testing, or configuration-related testing. The agenda of the session is purely based on the tester and the mission. There should be no scripted test cases to follow.
Length of a Session: A session can be 90-120 minutes. It should be short enough to stay interesting and long enough to achieve the mission of the session.
The Session Report: At the end of every session, testers need to brief the test managers about their sessions. The session report should consist of the following sections:
1. Mission Statement (areas of product, test configuration, test strategies, system configuration related)
2. Tester Name
3. Date and Time
4. Bugs Filed
5. Issues or Impediments Faced
6. Notes
The report is created to capture the details of the session, and it needs to be preserved for all sessions so that the effort is tracked and duplicate work is avoided. A hypothetical filled-in report might look like the following.
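For illustration only (the product area, dates, and bug IDs below are invented):

Mission Statement: Policy management UI with a remote Postgres DB (test configuration)
Tester Name: A. Tester
Date and Time: 2014-07-15, 14:00-15:30
Bugs Filed: BUG-1021, BUG-1024
Issues or Impediments Faced: Remote DB setup consumed the first 20 minutes of the session
Notes: Policy import with special characters was not reached; candidate mission for a follow-up session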



The report template should be created before the session starts and saved in a central location; the tester enters the details of the session towards its end. The mission of each session, however, should be decided for a longer period, so that testers can pick their areas of interest for each session. After every session, the report is reviewed by the test manager in a brief review session. It is important to preserve these session-based reviews for future reference, so that if another tester picks up the same theme during a later session, duplication is avoided.
The Session Review: The test manager runs this review meeting, and the agenda of the meeting is to understand the following:
1. What was tested during the session?
2. Were there any bugs, or is there a difference in expectation?
3. Were there any impediments that affected productivity?
4. Was everything that was called out in the mission statement completed? Is there something pending? If yes, where and how will it be covered?
5. How do you feel about the area that you tested? What can be done to make you feel better about it?
6. Was any opportunity testing done (testing outside the scope)? What was the result?
The test manager should try to probe and guide the tester working on a particular area of the product, such that almost all possible ways related to that area are covered. If the manager finds that a particular tester is no longer able to churn out defects in a particular area or module of the product, then testing of that area should be halted for some time, or a different tester assigned to it. In the latter case, the manager should brief the new tester and make sure that none of the testing already done is repeated; this is where the mission statement and the review process become very important. Over a period of time, when a particular theme has been tested by most of the testers and the confidence of each one of them is considerably high, it indicates that the particular area is robust, and further sessions on that area can be terminated. The stakeholders can then rely on automation or a regular sanity check of that feature.
Session Testing with Peers: In these sessions, a common area is assigned to two testers, who test the area independently during a session, with a spirit of competition to churn out more issues. However, to make sure the competition stays in the right spirit, only one session report should be created. During the review process, both testers must review the report together with the manager, so that individual performance does not become an obstacle. This approach brings the advantage of having more eyes looking at a problematic area at the same time.


However, this needs to involve people with the right mindset, who understand that the exercise is to benefit the product and uncover issues, not to prove individual competency levels.
Session-based testing in the Sprint Model: In an ideal sprint team, the session report can be discussed by the individual tester within the standup meeting itself. The tester can be questioned or asked for clarifications on some tests in the standup by developers and others, and there is a high probability that those questions or clarifications will trigger more ideas for the testers. If the test lead or manager thinks differently, those details can be shared with the individual testers. Alternatively, the session review can happen with all of the QE team members in the sprint, including the manager, instead of with just the test manager.
For projects that have few or no scripted test cases: For projects that have the flexibility to base test strategies completely on exploratory testing (no scripted test cases), this kind of session-based testing can be followed right from the start, which can bring a lot of clarity to the entire process. In these projects, sessions can be planned per day, perhaps 2-3 sessions per day. In these projects it is also important to have one more section in the session report that describes the time taken for a particular mission, capturing time spent on designing a test plan and investigating a problem as opposed to actually testing. In such cases, the effort has to be captured and reflected appropriately. In other words, with this type of testing, judging a tester by the number of bugs logged would not be right.
For projects that have a mix of scripted and exploratory testing and follow sprints, session-based testing can be planned at the start of every sprint. In this case, sessions can be planned per week, perhaps 2-3 sessions per week. Here, the testers and test managers should make sure that whatever new scenarios are covered in these sessions are captured in their test case database or mind map, so that the next tester doing a session on the same theme can look for different issues and avoid duplication. A tester should be assigned a particular area of the product to carry out multiple sessions in that area. The number of sessions required for a particular area has to be reviewed with the test manager and improved periodically.
For projects that rely more on scripted testing and have very little time for exploratory testing: Some projects rely heavily on scripted testing and must run their test cases very meticulously, so the testers in these projects do not get enough time for much exploratory testing. For such projects, session-based testing can be planned once in the release cycle, perhaps by dedicating 1 or 2 sprints to it. Also, the review process must be followed very meticulously early in the cycle to ensure that the test sessions are successful.


Nirmal Balasubramanian
User eXperience Specialist, working with the Sentinel UI team since April 2013. A Certified Usability Analyst with over 12 years of experience, well-versed in all aspects of Web site and web app UI design and front-end development, with an emphasis on user-centered design. I love typography and photography, and strongly believe in what Steve Jobs said: "Design is not just what it looks like and feels like. Design is how it works."

Responsive Design

Responsive Web design is the approach that suggests design and development should respond to the user's behavior and environment, based on screen size, platform and orientation. The practice consists of a mix of flexible grids and layouts, flexible images, and an intelligent use of CSS media queries. As the user switches from their laptop to an iPad, the Web site should automatically switch to accommodate the resolution, image size and scripting abilities. In other words, the Web site should have the technology to automatically respond to the user's preferences. This eliminates the need for a separate design and development phase for each new gadget on the market. The idea of Responsive Web Design, a term coined by Ethan Marcotte, is that our Web sites should adapt their layout and design to fit any device that chooses to display them.

Why is it so important?
The majority of media consumption is screen-based. The spectrum of screen sizes and resolutions is widening every day, and creating a different version of a Web site to target each individual device is not a practical way forward. This is the problem that responsive web design addresses head-on. The advantages of using responsive Web design are as follows:
Your Web site looks great everywhere (on all device displays).
You need not zoom on smaller devices to read the content.
A consistent, tailored user experience.
All pages and functionality are available on every device.
"Google recommends webmasters follow the industry best practice of using responsive web design, namely serving the same HTML for all devices and using only CSS Media Queries to decide the rendering on each device." Source: Google Developers

Anatomy of a Responsive Web site
A responsive Web site targets the width of each user's web browser to determine how much space is available and how the web content should be displayed.

Fluid/Flexible Grids
Make your layout flexible. Flexible grids use columns to organize content, and relative rather than fixed widths to adapt to the viewport size. A fluid layout is the best way to be ready for any kind of screen size and/or orientation; combined with the right media queries, it can adapt to any possible device. Essentially, it means that your grid, which was traditionally measured in pixels, should now be thought of in terms of percent of the total page width. The actual calculated width of each column in a responsive web site changes every time the browser window changes size, and cannot be guaranteed to be the same across different devices. So, you must use a grid when designing for responsive Web design. It is a necessity, not a nicety: you cannot create a responsive Web site without a grid-based design; it's simply out of the question, it wouldn't work.

Flexible Images
One major problem that needs to be solved with responsive Web design is working with images. There are a number of techniques to resize images proportionately, and many are easily done. The most popular option is to use CSS's max-width property for an easy fix:

img { max-width: 100%; }

As long as no other width-based image styles override this rule, every image will load at its original size, unless the viewing area becomes narrower than the image's original width. The maximum width of the image is set to 100% of the screen or browser width, so when that 100% becomes narrower, so does the image. The idea behind fluid images is that you deliver images at the maximum size they will be used at. You don't declare the height and width in your code, but instead let the browser resize the images as needed, while using CSS to guide their relative size. It is a great and simple technique to resize images beautifully. Note that max-width is not supported in IE, but a good use of width: 100% solves the problem neatly in an IE-specific style sheet.

Media Queries & Viewports
Media queries are an excellent way to deliver different styles to different devices, providing the best experience for each type of user.


A part of the CSS3 specification, media queries expand the role of the media attribute that controls how your styles are applied. For example, it has been common practice for years to use a separate style sheet for printing web pages by specifying media="print". However, media queries take this idea to the next level by allowing designers to target styles based on a number of device properties, such as screen width, orientation, and so on. For responsive design, we need to focus on width conditions: depending on the client's current width, we load an alternative style sheet or apply specific styles. Here are some methods of doing this.

Assign different stylesheets depending on browser window size:

HTML
<link rel="stylesheet" media="screen and (min-device-width: 800px)" href="800.css" />
<link rel='stylesheet' media='screen and (min-width: 701px) and (max-width: 900px)' href='css/medium.css' />

Use media queries within a single stylesheet:

CSS
@media only screen and (min-device-width : 320px) and (max-device-width : 480px) {…}
@media only screen and (min-device-width : 768px) and (max-device-width : 1024px) {…}
@media only screen and (min-width : 1224px) {…}
Viewports and breakpoints
Common resolutions can be sorted into 6 major breakpoints, which you can work with this way.
Major: Target first-generation smartphones in portrait mode with a <480px rule. Use a <768px condition for high-end smartphones and portrait iPads. Everything bigger (large tablets and desktop) goes in a >768px triggered stylesheet.
Nice to have: Add a <320px stylesheet for low resolutions. Target tablets, landscape iPads and large tablets precisely with a >768px and <1024px rule. Use a wide design for desktop with a >1024px stylesheet.

Responsive Frameworks
A framework is a package consisting of a structure of files and folders of standardized code (HTML, CSS, JS documents, etc.) which can be used as a basis to start building a site and to support the development of a Web site. There are plenty of responsive frameworks that come fully packed with everything you need for your next responsive design project. Bootstrap and Foundation are the leaders.

Bootstrap
Bootstrap has to be the most widely used framework. It is built with the most comprehensive list of features and can be quickly customized for each individual project. It is a sleek, intuitive, and powerful front-end framework for faster and easier web development. Bootstrap utilizes LESS CSS, is compiled via Node, and is managed through GitHub, helping you create interesting items on the web.

Foundation


An advanced responsive front-end framework. Foundation is built with Sass, a powerful CSS preprocessor, which allows its developers to develop Foundation itself much more quickly, and gives you new tools to quickly customize and build on top of Foundation.

Evernote
Are you losing track of your notes? Do you find it difficult to access all your notes from different devices? Are you worried about regular backup of your notes? Evernote can help you solve all these problems. It is available on mobile OSs like Android, iOS and Windows Phone, and also on desktop OSs like Windows and Mac OS X. With Evernote you can take notes, organize them the way you want, and access them across all your devices. It is possible to create notebooks with a collection of notes, and the notebooks themselves can be nested into larger notebooks. It is also possible to save favorite web pages using a web clipper plug-in in the browser. Searching for any specific note across the whole collection of notebooks is easy, and you can add tags to specific notes to customize searching. Using different fonts of your liking, adding tables, and other basic word processing operations are also possible.


You can even record voice notes and add external file attachments. Sharing any of your notes with others and collaborating is easy. The Evernote Free account allows a free 60 MB upload every month. Though a Premium account is available for a nominal fee, with more storage and more features, the free account should suffice for the needs of most of us. It will be a one-stop shop for dealing with all your notes.


Rajiv Kumar
Rajiv is a Specialist working in the IDM team.

Historical Data Reporting and Datomic

Enterprise applications have some level of reporting built into them in order to address compliance needs. A simple reporting infrastructure takes into account only the current state of the system while generating reports: typically, the reporting component directly interacts with the underlying database to generate a report. This may sufficiently depict the system's current state, but it gives no indication of any evolution happening in the application. As a result, these reports are not intelligent and cannot be subjected to business analysis, which is a critical need for enterprises. This deficiency can be overcome by using historical data along with the current system data to generate reports. The reports can then be more informative, with the possibility of adding trends, and so forth. Also, reports need not be generated only for the current time period, but for any interval, and can be subjected to further analysis as needed. However, the problem lies in accessing the historical information. How do we retrieve historical information? Should we rely on an auditing system and tap into the audit database? The answer is possibly yes, but audit data alone may not be sufficient, as it may not contain all the information needed.

Historical Database
In order to do historical data reporting, we need to store all the changes that happen in the system over time, and thus the need for a historical database. The question that arises is what design principles should govern this database: should it be the same as or similar to the enterprise database? Why and how should it be any different? Immutability should be one property that forms the basis of designing a database meant to store historical records. In a typical application database, any change to the data updates the record in place. Consider the following example of a database table that stores an employee's details.

Typical application database:

empid | first_name | last_name | Department
1000  | John       | Smith     | OPS

The employee's current department assignment is:

empid | Department
1000  | OPS

Now, if the employee is transferred to a different department, the table is updated and the information will be as given below:

empid | Department
1000  | IAM

In this case, when John Smith is transferred from OPS to IAM, his department information is updated, but the details of John Smith's previous department no longer exist.

Immutable Database for Historical Reporting
The thumb rule is that data is immutable: once added to the database, it cannot be modified. This means that multiple data records will exist for the same user, so we need a way to differentiate the records. One way of doing this is to add time information to the database, so that every change can be tracked in a timed manner. With this inclusion, the updated tables look as follows:

empid | first_name | last_name | time
1000  | John       | Smith     | 2014-01-01 20:19:43.888

empid | Department | time
1000  | OPS        | 2014-01-01 20:19:43.888
1000  | IAM        | 2014-03-05 20:19:43.888

Notice that additional column to store the timestamp for the update. The table data now contains information about John Smith’s timeline that he was in OPS between 2014-01-01 and 2014-03-05 and after that in IAM department. So, by using immutable data tables, historical data can be persisted and used to provide intelligent reporting solution. However, this also means that now, you need to manage the time variance in your database. This does add an additional task of managing this time dependency especially around queries, where this needs to be handled. The above is a very simplistic view of the data and implementation API’s need to take care of persisting/providing the adequate timed data. In a typical implementation, you may need additional views to project the various metric data such as recent changes or changes in a time window, and so forth. There are added development/maintenance needs when it comes to creating a database to store historical data compared to current state reporting, but the business benefits achieved by using this greatly surpasses the complication. There is however a database that greatly eases out this effort and can make historical data storage/reporting a much simplified affair. Datomic Datomic(http://www.datomic.com) is a database of flexible, time-based facts, supporting queries and joins, with elastic scalability, and ACID transactions. The datomic database stores a collection of immutable facts or datoms, as it is commonly called. Every datom has time build into it. Newer facts

There is, however, a database that greatly eases this effort and can make historical data storage and reporting a much simpler affair.

Datomic

Datomic (http://www.datomic.com) is a database of flexible, time-based facts, supporting queries and joins, with elastic scalability and ACID transactions. The Datomic database stores a collection of immutable facts, or datoms, as they are commonly called. Every datom has time built into it; newer facts supersede older facts. A major distinguisher in Datomic is that reads and writes are handled differently. Queries and reads are directed to the storage service, which provides a read-only view of the database. Any write operation is performed through the transactor, which implies that the transactor never waits on a read operation. Datomic ships a Peer library, which includes APIs for reads and writes. The Peers perform extensive caching of datoms, and any query is first checked against the cache; it is sent to the storage service only if it cannot be serviced from the cache. Upon any update in the database, the transactor notifies the active Peers with the new facts so that they can add them to their caches. One of the unique properties of Datomic is that it stores not only the datom and its time information, but also the transaction in which the update occurred. Datomic transactions use data structures rather than strings as in SQL. Transactions are first-class objects in Datomic, which can be queried just like datoms. Datomic queries are based on Datalog, which is a deductive query system. Datomic offers a choice of in-memory or persisted databases. The data access operations are via objects, which makes development and debugging easy.

Fig 1 – John Smith's title is that of an Analyst



Fig 2 – John Smith's title changes to Manager from Analyst

Fig 3 – John Smith's title is shown as that of an Analyst at time 2014-03-11 05:45 (before update time)

Fig 4 – John Smith's title is shown as that of Manager at time 2014-03-11 05:50 (after update time)

The in-memory database is very useful for testing and debugging purposes. A Clojure-based shell (part of the distribution) can be used to test commands; the figures show common commands used to interact with the Datomic database. A detailed description of Datomic commands and their usage is beyond the scope of this article. The fact that Datomic transactions can be queried like database entities provides much-needed flexibility: any changes that happened in the database between two points in time can be discovered easily. The database can be queried for an entity's state at any point in time to construct a time series.
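To make the time-based queries concrete, here is a minimal sketch using Datomic's Peer API from Groovy. The connection URI and the :user/name and :user/title attributes are our assumptions for illustration, not the schema of the product described next.

import datomic.Peer

def conn = Peer.connect('datomic:dev://localhost:4334/hr')   // hypothetical URI

def query = '''[:find ?title :in $ ?name
                :where [?e :user/name ?name]
                       [?e :user/title ?title]]'''

// The latest database value answers with the current title.
def now = conn.db()
println Peer.q(query, now, 'John Smith')

// asOf rewinds the database value to its state at a past instant, so the
// same query answers with the title that was valid back then.
def past = now.asOf(Date.parse('yyyy-MM-dd HH:mm', '2014-03-11 05:45'))
println Peer.q(query, past, 'John Smith')

The same query runs against both database values; only the value passed in changes, which is what removes the timed-computation logic from application queries.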

Datomic for NetIQ Identity Manager's Historical Data Reporting

NetIQ Identity Manager (IDM) added reporting capabilities in the 4.0 release, and the advanced edition of IDM can provide historical data reporting. Currently, an IDM driver (DCS) taps into the IDM event system for any changes and sends them to the data collection service, which versions (time-stamps) the data and stores it in a Postgres database. Audit events also feed into the database for more details. Multiple views are created in the database to facilitate the generation of these reports, and the generation engine uses complicated queries to retrieve the timed facts from the collected entities. The use of Datomic as a fact store for changes in the Identity Vault (IDV) simplifies the overall reporting architecture in IDM. A Datomic driver is created to propagate any changes in the IDV to the Datomic database. The Datomic driver uses the entity id as an association attribute to bind the Datomic entity with the IDM object. This ensures that changes to an object in the IDV propagate to the appropriate database entity in Datomic. Since the database itself takes care of storing multiple datoms depicting the various values of an entity at each time instant, no explicit handling is required in the data collection service; for most practical purposes of storing IDV event data, the data collection service is no longer needed. The queries and views can also be simplified, as there is no need to perform timed computations: the database performs these tasks. To understand this, let us look at a user, John Smith, who exists in the Identity Vault. John Smith has been designated as an analyst since he joined the organization (refer to Fig 1). Now, John Smith is promoted to manager, and the change is updated in the Identity Vault (refer to Fig 2). When this change of title takes place in the Identity Vault, the Datomic driver running in IDM updates the change in Datomic, which is being used as a historical fact store. So, if we now query the Datomic database for the title of John Smith, the database returns the appropriate title value based on the query timestamp. As in Fig 3, when queried at timestamp 2014-03-11 05:45, which is before the update happened in the Identity Vault (and subsequently in the database), the title is reported as "Analyst". However, when queried at timestamp 2014-03-11 05:50 (refer to Fig 4), which is after the title update, the title is reported as "Manager". As can be seen, any change in the entities can be queried based on time: a time interval, a start time, or a time instant can be used to get an entity's details. This ability to construct a time series from the database content, without writing any infrastructure pieces to handle it, is quite powerful and makes Datomic a good option for applications that need to perform historical reporting.



Short Article

Most mobile platforms require an identity certificate on the device to identify itself with a Mobile Device Management (MDM) server. With the proliferation of mobile devices, and with BYOD scenarios becoming commonplace, scalable automatic issuing of such certificates to every mobile device became a challenge. This is where the popular and well-tested Simple Certificate Enrolment Protocol (SCEP) got leveraged; most mobile platforms include an SCEP client. SCEP was originally designed to allow network administrators to easily enrol network devices for certificates in a scalable manner. An administrator contacts the SCEP server for a one-time SCEP secret. Upon receiving the SCEP secret, the administrator configures the network device, such as a router, to issue an SCEP certificate request to the SCEP server using the SCEP secret. The SCEP server authenticates the requester using the SCEP secret and then interacts with the Certification Authority (CA) server to issue the certificate based on the details provided in the certificate request. The major drawback in this workflow is that the content of the certificate request is not validated. Thus, anybody who knows the SCEP secret can send a certificate request with the subject name of an elevated service or user, such as Administrator. This is not an issue in a closed environment like network devices within a corporate setup, because the actors, namely the administrator and the network devices, are trusted and controlled, but using this protocol for MDM exposes a security loophole: a rogue user could use valid corporate user credentials to acquire the SCEP secret, and then request an elevated user's identity certificate. Therefore, the preferred way to use SCEP in the MDM world is to have the MDM server act as an SCEP proxy, so that it can validate the SCEP request and make sure the certificate requested is for the intended user.

Workflow

The user on the mobile device opens the MDM enrolment HTTPS webpage, accepts the MDM server certificate, and enters user credentials. On successful authentication, the MDM server responds with a randomly generated SCEP secret for the exclusive use of this user on this particular device. The mobile device then sends an SCEP certificate request containing the user and device details along with the previously received SCEP secret. The MDM server checks the request and validates that the SCEP secret belongs to the user and device mentioned in the request. On successful validation, it forwards the SCEP certificate request to the SCEP server, which then interacts with the Certification Authority (CA) server to issue the certificate.
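The validation step can be sketched as follows: parse the PKCS#10 request and compare its subject with the identity the one-time secret was issued for. The Groovy snippet below uses the Bouncy Castle PKCS library; the secretStore lookup is a hypothetical stand-in for the MDM server's own bookkeeping, not any product's actual implementation.

import org.bouncycastle.asn1.x500.style.BCStyle
import org.bouncycastle.asn1.x500.style.IETFUtils
import org.bouncycastle.pkcs.PKCS10CertificationRequest

// secretStore: hypothetical map of one-time SCEP secrets to the user each was issued for.
boolean isRequestValid(Map<String, String> secretStore, String scepSecret, byte[] csrBytes) {
    def expectedUser = secretStore[scepSecret]
    if (expectedUser == null) {
        return false                              // unknown or already-consumed secret
    }
    def csr = new PKCS10CertificationRequest(csrBytes)
    def cnRdns = csr.subject.getRDNs(BCStyle.CN)
    if (cnRdns.length == 0) {
        return false                              // no subject CN to validate against
    }
    def cn = IETFUtils.valueToString(cnRdns[0].first.value)
    // Reject any request naming someone other than the authenticated user,
    // e.g. a rogue request for an administrator's identity certificate.
    return expectedUser.equalsIgnoreCase(cn)
}

Only after this check passes would the proxy forward the request to the real SCEP server.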

Ashok Kumar

Architect for the Novell ZENworks Mobile Management product, leading the effort to bring out the first release of this ZENworks-integrated product from the ground up.



Sachin Keskar An engineering graduate from the University of Pune with a Masters from BITS Pilani, Sachin works as a Specialist with the DCM group. His interests include bioinformatics and security.

Multi-Factor User Authentication – Windows Biometric Framework (WBF) Case Study

Introduction

This article explains the importance of multi-factor authentication. It presents the key usability aspects to consider while designing a biometric system for multi-factor authentication, and gives an overview of how a biometric framework is implemented in Windows 7 (and higher versions). You can also see how an application can leverage the Biometric Services Framework to identify users by using a fingerprint scanner. We live in a world where user security is a concern, and the ability to identify users accurately is very important. Multi-factor authentication is one of the critical elements in establishing such an identity, and biometric authentication is one of the approaches to support multi-factor authentication for identifying interactive users. Here is a note from NIST on the importance of identity management in the area of governance and security: "Government and Industry have a common challenge in today's global society to provide robust identity management tools and identify governance principles and how to deploy these tools intelligently to meet national and international standards - NIST Council Subcommittee on Biometrics and Identity Management." Multi-factor authentication typically identifies a user based on at least two of the following criteria:

What do you have? E.g., a phone or an authentication device.
What do you know? E.g., a PIN or password.
Who are you? E.g., a biometric feature such as a fingerprint, voice, or iris scan.

The use of biometrics to confirm personal identity is a key component of this identity management puzzle. Since a user brings in qualities, attributes and knowledge that are important in the design of biometric systems, usability becomes an important factor in making these systems successful.

Usability Design Considerations

The key aspects to consider while designing biometric systems are:

Anthropometrics – Metrics that provide data on the physical dimensions of a larger set of the population. E.g., how does an individual's height affect the quality of a sample? How does age impact the quality of a sample?

Affordance – Properties of a system that allow a user to interact with that system. E.g., does the user understand where to place the finger while recording a fingerprint sample? Does the user understand that a sample has been successfully taken?

Accessibility – The ability of all types of users to access and use a biometric system. E.g., is it possible for a visually impaired user to use a fingerprint reader using cues?

Since these aspects are user-centric, here is a revisit of the definition of usability (source: ISO 13407:1999): "Usability - The extent to which products can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use." While designing a biometric system, it is important to consider the goals mentioned in this usability statement:

Effectiveness – The ability of a user to provide a successful biometric sample.

Efficiency – The most efficient way to provide a sample with the least error rate.

Satisfaction – The metric about how a user feels after using the system.

Repeatability – The metric that tells whether a user can easily repeat the usage, with minimal learning.


A system that scores high on these goals is one that both solution designers and users will look up to, and one that has a chance of achieving success.

Windows Biometric Framework Case Study

Windows 7 contains a Windows Biometric Subsystem, integrated within the Windows Identity and Security Subsystem. This section describes the Windows Biometric Framework as implemented in Windows 7 and higher versions. Windows 7 and higher versions add support for biometric services through the Windows Biometric Framework (WBF). The WBF API enables a Windows application to interact with native biometric services to perform the following tasks:

Enroll, identify and verify users using biometric samples.
Locate biometric devices and query their capabilities.
Manage biometric sessions and monitor biometric events.
Store biometric data and credential mappings securely.

The diagram shows the high-level architecture of the Windows Biometric Framework and its key components (diagram source: MSDN). The WBF contains the following key components:

Windows Biometric Service – A privileged service that manages all biometric devices by using the Windows Biometric Driver Interface.

Windows Biometric Service Provider – Vendor-specific services for biometric systems, such as a fingerprint reader, using a specific biometric factor (e.g., the fingerprint biometric method) and a user-defined biometric sub-factor (e.g., a thumb or index finger used for the fingerprint).

Biometric Unit – A logical unit of components, working in the context of a Biometric Provider, that performs the storage, retrieval and processing of biometric samples.

Sensor Pool – A collection of biometric units exposed by the Biometric Provider under a common management policy. There are two pools: the System Sensor Pool, used by system services to validate Windows principals (e.g., user authentication used by the LSA to identify logged-in users), and the Private Sensor Pool, used by applications and allowing proprietary authentication methods (e.g., an employee management application without Windows security credentials).

Windows 7 and higher versions allow applications to add customizations to the biometric process (by implementing custom sensor pools) or to use the Biometric Framework as part of the standard Windows authentication system.

This section describes how a biometric system is integrated into the Windows authentication system through a System Sensor Pool, and the mechanism for client applications to use it. The Windows authentication system contains the following core components to support interactive user authentication:

Windows Logon Manager (WinLogon.exe) – A process that runs in each session rather than only in session 0.

LogonUI process – Provides a consistent UI for users to log on to Windows.

Windows Credential Providers (replacing GINA, starting with Windows Vista) – Support multi-factor authentication.

These credential providers are used by LogonUI to get the authentication credentials, which can be verified by WinLogon using the LSA. If biometric authentication (e.g., fingerprint scanning) is available on a system, the associated credential provider is registered under HKLM\Software\Microsoft\Windows\CurrentVersion\Authentication\Credential Providers and is used by LogonUI to store and retrieve credentials. The fingerprint credential provider uses its own mechanism to interact with the sensor device to read fingerprints, compare them, and return the credentials for authentication. Fingerprint vendors and OEMs can implement and integrate fingerprint products into the system or make them available externally.

Accessing WBF for the user identification use case: An application interacts with the fingerprint credential provider using the Windows Biometric Framework. Here is the output of a sample program that takes a biometric sample to identify a registered Windows user; once the user provides the sample, it identifies the user and prints the user name. Note: The code has not been included due to space constraints and will be available on request.

Conclusion

In this article, you have seen the usability criteria for designing a biometric system, the architecture of WBF, which provides a biometric framework for biometric system providers, and how it integrates with the Windows interactive authentication process. The output of a sample program that uses the WBF to identify Windows users through fingerprint authentication with a System Pool has also been provided.


GNVS Sudhakar Did his MCA from NITK and has been with Novell for the past 13+ years. IC (Inventions Committee) member.

Innovation Sparks - 1

Introduction

I will start with a question. Usually, the data on servers gets backed up to tapes, which are stored in a secure place. How do you logically delete these backups without touching them? Think for two minutes before reading the rest of the article. Here is a coded answer (to decode, replace each letter with the next character: a->b, b->c, and so on): Dmbqxos adenqd. Cdkdsd jdx. Well, this is the first patent I studied. A nice, simple, but very useful innovation, isn't it? Innovations in the beginning are abstract. In Schrödinger's "Image of Matter" we find these lines: "....the physicist lives in two worlds: He manipulates such tangible objects as coils and vacuum pumps, but his ideas must be appropriate to atomic dimensions. He must enter a world in which mutually contradictory hypotheses, each supported by incontrovertible evidence, must both be accepted..." In a way, being continuously inventive is just about living in abstract, uncommon thoughts, but verifying them with the litmus test of utility. As an example, suppose we say that trees sneezing causes wind, and support the statement by analogy with the wind caused by sneezing: how will it look? But when we say the earth's attractive force causes a fall, because an attractive force (magnetism is a better example) causes a fall, we accept it, since it has been verified. But for the discoverer, the struggle to prove it must have been there. I would like to quote what Einstein said: "Common sense is actually nothing more than a deposit of prejudices laid down in the mind prior to the age of eighteen."
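For the curious, the decoding rule is a one-letter Caesar shift, small enough to script. Here is a throwaway Groovy snippet (ours, not the author's) that applies the rule; punctuation is left untouched and the output is lowercased.

def alpha = 'abcdefghijklmnopqrstuvwxyz'
def decoded = 'Dmbqxos adenqd. Cdkdsd jdx.'.toLowerCase().toList().collect { c ->
    def i = alpha.indexOf(c)
    i >= 0 ? alpha[(i + 1) % 26] : c    // shift each letter forward by one
}.join()
println decoded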

Practical aspects

Let me touch on some practical aspects of innovation in the context of the litmus test mentioned. Whatever the source of inspiration, we need to spend time to think. However, in a corporate setting, where do we get so much time? There can be many reverse answers, like questioning back how we get time for other things, such as watching a film; but the problem is a real one, at least as felt. There is no substitute for spending time, and this, I guess, is where real innovation starts. An idea can be as simple as taking a day's leave and spending it at the office (where nobody can assign you tasks) or at home, entirely towards innovation. The breadth of an idea is another aspect I would like to touch on. When an idea is broad enough, the level of acceptance is higher. A better acid test is: could you start a company of your own with this idea? We need to aim for such broader ideas, even if they result in apparently small innovations. An essential ingredient for broader ideas is security. There may be many people working in software development, but not all are well versed in network security aspects. I would recommend at least implementing an SSL client and server, and subscribing to the OpenSSH mailing list for some time, to fill this requirement. "Cryptography & Network Security" by William Stallings is a nice book that I would recommend. The mechanism of the idea need not be worked out in granular detail; the rule of thumb is that when it can be explained to a person in the same field, that much detail is enough. When we find an idea with sufficient detail, even without an implementation, we can create a white paper or a product presentation, or file a patent. None of these requires you to implement your idea, though implementing it may help.

Problem sources

But where do the problems come from? The work we do is definitely a source. In my case, I have also found text books and sites like Google Scholar helpful. Sometimes a problem may appear to be uncrackable, but it might be fun to solve. An example is factoring a product of primes. Though it remains unsolved in public knowledge, I took it up, and it helped me understand and discover properties of prime numbers, which resulted in two patents.

Reinventing innovation

Somebody said the best way to understand a wheel is to reinvent it. I agree. In doing so, I realized many truths about numbers. They are difficult to explain, but I would still try to share my understanding below. I feel we have not been taught the basics of the number system properly in our schools, nor about innovation. Here is a link about how pi was discovered: http://betterexplained.com/articles/prehistoric-calculus-discovering-pi/. A comment on that site reads, "Why was this never explained like this in high school?" This is the whole point I would like to concentrate on. Mark Twain rightly said: "I've never let my school interfere with my education." We know children are very innovative; growing up should not mean decreased innovation. The approach to mathematics can be made fun and fulfilling. For example, Srinivasa Ramanujan often said, "An equation for me has no meaning, unless it represents a thought of God." Einstein said, "But the creative principle resides in mathematics. In a certain sense, therefore, I hold true that pure thought can grasp reality, as the ancients dreamed." We can find more interesting examples in Vedic mathematics. So we need to agree that it is possible to think differently. When you innovatively solve a problem from a source, try again in a different way; this is another way to generate new ideas. And when you find that your solution to a problem has already been discovered, you can still consider it a sign of success and progress.

Innovation Philosophy

Actually, in nature everything exists as one whole thing; dividing it is accepting the unknown. Consider these examples: which artist has drawn colors like those of a parrot in nature? We find the Fibonacci arrangement in nature, right up to the arrangement of DNA. The aeroplane was designed after observing birds; music was inspired by cuckoos. Science + art + movement + .... exists as 1 in nature. In nature we get everything together as one; it is we who divide it into different things, depending on our limited range of perception. We have seen different things about innovation. But why do we need to innovate in the first place? I would say it is seriously for the fun of it. No other reason really fits.

Inspiration for Ideas

If you think there is no inspiration for innovation, then reading this far proves you are interested, and you have it! Inspiration for ideas comes from mystery. We call this by different names; in science, for example, we call it entropy. From the difficulty of compressing a random bit stream in computer science to the holographic theory in modern physics, it is entropy that is the inspiration for many things. The strength of the secret in the RSA algorithm comes from the fact that nobody (publicly, at least) claims to know how to factor large numbers; the entropy here is, again, not knowing how to factor. Stuff of this kind generally gives inspiration.

Innovation Definition

Innovation is uniting inside. Innovation comes from resolving conflict, which can be identified through increased sensitivity to conflict. This comes from a "not to fear" mindset, the most important quality needed for innovation, and it works at the subtlest of levels.

Areas for Innovation

I have seen patents in all imaginable areas of technology, so we cannot say that an area is not suited for invention. A good example is email. I have observed many people trying to innovate around email; as it is the most-used concept, many people tend to start there. While I do not discourage you, it is both easy and difficult to innovate with email: because it is easy, by the time an idea comes, somebody else has already implemented it, and this is also what makes the email space difficult to innovate in. But it shows that any area can be taken up for invention once it becomes familiar.

Inventions review

When it comes to presenting an invention to the reviewing body, we need to understand that the entire review discussion is typically not conveyed back to you. If the decision is no, it does not mean the idea was not properly understood; the reason conveyed back to the inventor generally does not contain all the details. This is true in most cases. It is also the responsibility of the innovator to convey the idea in a simple, easy-to-understand manner. Prior-art research means searching the Internet for work related to your new invention, to see whether it already exists. Generally, 20 to 40 minutes of concentrated effort is enough for patent idea submissions. Also, when trying to understand a patent, reading the first two claims and the abstract gives a good idea of the patent; this method works for most patents.

Innovation Patterns

Some of the recurring innovation patterns I have seen are intellectual cross-pollination, piercing through abstraction layers, relaxing the original purpose, and so on. We can find many examples of the first: I read somewhere that IBM brings people from diverse backgrounds like physics, biology, chemistry and more to work on a single problem. For the second, though there are many, I would give this interesting example of a patent filed by a fellow Novellite from Bangalore: the raw sockets interface was used in a VPN product to provide a new feature that was otherwise not possible. For the third pattern we can also find many examples, such as rsync, where the integrity of data is relaxed, or diluted, to give faster synchronization, and coating razor blades with a plastic-like material, decreasing sharpness to increase durability.

Effort for Innovation

Another aspect of time is this: do ideas take time, or do they come in a jiffy? We know the case of Archimedes and the state in which he got his famous idea. Deeper contemplation is definitely needed; the flash moment may come at a different time, but the mind must be tuned, and hence spending time on innovation is definitely required. Mozart said he got ideas for music whenever he wanted, but we know the effort required to master classical music. Similarly, Kekulé arrived at the benzene structure in a kind of meditative state; the effort behind it can be appreciated. James Rothman, who won the 2013 Nobel Prize in medicine, said he was "nuts" to attempt to reproduce the cell's complexities.

Summary

In the end, I would like to summarize as follows: "Innovation is uniting inside. It starts with identified conflict. Identifying conflict requires the necessary 'not to fear' mindset. The 'not to fear' mindset comes from harmonious understanding."

Epilogue

This is the first of a series of articles I have planned about innovation. I will try to cover structured innovation and more in the next article. Please reach out to me with anything in general; I will try to cover any shortcomings or requests in the next article.


Sudipta Roy Works as a Specialist in the NetIQ DCM group. He is new to Novell, with over 11 years of experience across a sweep of software development and service-offering technologies. His key interest is in making software that is simple to deploy, easy to use, and capable of riding in the cloud to meet the needs of the masses.

The Importance of Capacity Planning and Monitoring NoSQL (Not Only SQL) Data Warehouse DMS

According to recent IBM research, 90% of the data that exists on this planet was created in the last two years. Every day we create around 2.5 quintillion bytes of data. Data is coming from everywhere, and with everything we do, from making a bank transaction to social networking and social media, every moment we are making this data grow bigger and bigger. To address this uncontrolled mammoth of data, the growth of NoSQL systems over the past few years has prompted more and more companies to work on integrating NoSQL and big data into traditional SQL-centric systems. As an outcome, the fledgling NoSQL marketplace is going through a rapid transition, from predominantly community-driven platform development to a more mature application-driven market.

Why can NoSQL be better than SQL?

For large organizations, the relationships and tables in SQL databases can reach into the millions. When millions of users perform lookups (or searches) in these tables, systems can suffer major performance issues, as Google and Amazon discovered the hard way before switching to non-relational systems. Large-scale programming projects using complex data types and hierarchies, such as XML, are difficult to incorporate into SQL: these data types, which can themselves contain objects, lists, and other data types, do not map well to tables consisting of only rows and columns. NoSQL databases, in comparison, scale horizontally, adding more servers to deal with larger loads. Auto-sharding lets NoSQL systems automatically share data across servers without complex coding maneuvers. This balances the load across several servers, providing a more robust system in the event of a crash of a particular server.

NoSQL database categories

Apart from graph databases, there are mainly two categories of NoSQL databases: databases that store data as JSON documents, such as MongoDB, CouchDB and BaseX; and key/value-pair NoSQL databases, such as DynamoDB, Riak, Redis and Cassandra, which store data as key/value pairs.

Why monitor NoSQL databases?

The downside of most NoSQL databases today is that they traded ACID (atomicity, consistency, isolation, durability) compliance for performance and scalability. Many also lack mature management and monitoring tools. A cross-platform document-oriented database system like MongoDB, which has already been adopted as backend software by a number of major Web sites and services, including Craigslist, eBay, Foursquare, SourceForge and the New York Times, needs a cross-domain monitoring approach that spans server, storage, network, virtualization and applications, automatically cross-correlating metrics in real time and freeing the user from the task of gathering this information from multiple sources. Since NoSQL databases allow for virtually unlimited scaling of applications, they greatly increase application infrastructure complexity. Monitoring is a critical component of database administration in this case, both for diagnosing issues and for planning capacity.

What to Monitor?

Some key questions arise here. What are the key metrics that need to be monitored to ensure the application is meeting its required service levels? How do you know when it is time to add shards? How do you take preventive measures when the working set exceeds available RAM and the system encounters page faults? With an appropriate monitoring capability, users can gain in-depth visibility into the right metrics to optimize their data infrastructures, with statistical data that supports proper capacity planning. Some attributes that can be monitored to address the technical challenges of NoSQL databases and their sharding mechanisms are given below.

High-level overview

NoSQL environments scale horizontally across a multitude of distributed nodes. A high-level overview of the different nodes can provide an integrated view of the links between the nodes in a replica set or sharding server. It can retrieve details on live, leaving, moving, joining and unreachable nodes.

Memory Utilization

NoSQL databases use memory-mapped files to store data. These memory-mapped files make it difficult to determine whether the amount of RAM is sufficient for deploying applications. Application performance can go down, and OOM (Out of Memory) errors can even be generated, if RAM is not sufficient. It is essential to monitor the memory consumption of applications running in NoSQL database environments and display the used, free and total memory of the server.

Connections Statistics

By monitoring and tracking the number of used and available connections between the clients and the database, the chances of application performance irregularities can be reduced in NoSQL environments, as the number of client connections can sometimes engulf the ability of the server to handle requests.

Database Operation Statistics

By monitoring the database operation statistics, along with replication and sharding operation details, it can be ensured that operations are happening in a consistent manner: track the total number of database operations (insert, getmore, delete, update and command) per second since the start of the last instance. This data helps in analyzing and tracking the load on the database.

Lock Statistics

Since some NoSQL databases use locking systems, application performance slows down if certain operations are long-running, as requests and operations wait for the lock. In such scenarios, lock statistics can be monitored, such as the number of operations queued and waiting for the read or write lock, and the number of active client connections currently performing read/write operations against the database.

Journaling Statistics

NoSQL databases like MongoDB use journaling to guarantee operation durability: before applying a change to the data files, MongoDB writes the operation to the journal. Journaling ensures that MongoDB is crash-proof. By monitoring journaling statistics, you can know how much time is taken to write data to disk.

Storage Statistics

With significant amounts of data, disk space usage can vary over time within a NoSQL (e.g., Cassandra) environment. A monitoring tool can track disk utilization and storage statistics over defined time periods to help identify and remedy performance issues.

Thread Pool Statistics

Monitoring can provide statistics on the number of tasks that are active, pending, completed and blocked. Watching these pools for increases in the pending-tasks column can help users plan additional capacity.

Dropped Message Statistics

Monitoring can also help users deal with overload scenarios in a NoSQL environment by keeping a lookout for dropped messages. Users can receive a log summary of dropped messages along with the message type, and can establish thresholds and configure alarms to be notified of dropped messages.
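As a sketch of how several of these metrics can be pulled programmatically, the Groovy snippet below polls MongoDB's serverStatus command through the MongoDB Java driver and picks out the connection, operation and memory counters discussed above. The connection string is a placeholder; a real monitoring tool would sample these values periodically and alert on thresholds.

import com.mongodb.client.MongoClients
import org.bson.Document

def client = MongoClients.create('mongodb://localhost:27017')   // placeholder address
try {
    def status = client.getDatabase('admin').runCommand(new Document('serverStatus', 1))
    println 'connections: ' + status.get('connections')   // current vs. available connections
    println 'opcounters:  ' + status.get('opcounters')    // insert/query/update/delete/getmore/command
    println 'mem:         ' + status.get('mem')           // resident/virtual memory of the process
} finally {
    client.close()
}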


Vikram Kaidabett Has more than 5 years of experience in software testing, test automation, and product R&D engineering. He holds an M.Tech degree from VTU and works as a Software Consultant at Novell. He aspires to become an automation specialist and IT infrastructure management consultant.

Smart Testing Ideas

Just-In-Time Testing

How do you know you have completed testing? How do you know that the application is ready for end users? Did we test the right things? Did we apply the best testing in the best way? How do we address problems with hardening sprints and regression sprints? The Just-In-Time testing approach may be an answer to some of these questions.

What is Just-In-Time (JIT) testing? Just-In-Time testing, termed "Smart Testing", is all about "smart thinking". It is a mindset – a particular way of looking at the problem – and a skill set – a particular set of things that we practice and get better at – mainly focused on how to do effective testing more quickly, less expensively, and with excellent proven results. Just-In-Time testing approaches are successfully applied to many types of software projects, especially in Agile environments where we have continuous integration of new functions, features and technologies. It shows how to cope with testing systems that have minimal specifications, no detailed requirements, and no upfront analysis.

When to use Just-In-Time (JIT) testing? Just-In-Time testing works well when you do not have the benefit of detailed test analysis, a test plan, test cases or test procedures, or when you have very little time to test the features before starting exploratory or automated testing.

Where are Just-In-Time (JIT) testing approaches useful? JIT is best suited to projects that run in Agile environments. The approach has been successfully implemented in different domains such as data center management, storage, security, banking, insurance, and so on.

Why should you go for Just-In-Time testing? To increase the overall confidence level of testing, and to know how to adapt to changes, prioritize tests and find critical bugs.

How to implement the Just-In-Time (JIT) testing approach? In general, JIT testing includes the following workflow:

1. Discover new testing on the fly.
2. Triage testing ideas.
3. Elaborate your testing opportunities.
4. Implement your testing.
5. Track testing status.

Discover: As soon as you get any information or insights about the software to be tested, you can start building a collection of testing ideas. How do you find them? Does the system do what it is supposed to do? Does the system do things it is not supposed to? How can the system break? How does the system react to its environment? What characteristics must the system have? Why have similar systems failed? How have previous projects failed? The ideas are compiled and grouped under different categories; the important groups of test ideas are called charters. The defined charters are represented through a mind map for quick understanding.

Triage: Triage the whole activity by associating business risks and technical risks. Business risks can be evaluated from the product management perspective: a business analyst or a release manager can scale the benefits as High, Medium or Low. Technical risks can be evaluated from the engineering perspective: a QA functional manager or developer can rate the consequences as Significant, Neutral or Minimal. A priority column should be created as a conclusion of these business and technical risks, as shown in the matrix below. Further, associate the details of estimated time effort, actual time effort and the status of defects through triaging, and create a burn-down chart for test effectiveness. During regression and hardening sprints, test cases can be selected based on the priorities defined.


Business benefit \ Technical risk   Significant (1)   Neutral (2)   Minimal (3)
High (1)                            P1                P2            P3
Medium (2)                          P2                P3            P4
Low (3)                             P3                P4            P5

Elaborate: Elaborate the whole concept with different testing approaches, as below. Scripted: create test scripts if required. Exploratory: systematic exploratory testing involves concurrent test planning, test design and test execution; an exploratory test charter is defined, the scope of testing is defined, and templates for capturing testing notes and observations are established.

Implement: A typical workflow is as follows: confirm that the test objective and context are understood; test the software, wrap up, report bugs, and collect notes and data.

Track Status: Summarize the JIT project status with information about test ideas, test results and a bug summary. Walk through the Excel file in the retrospective meetings.

The following is a list of different sources of test ideas that can be drawn on in the JIT approach:

Capabilities: Make sure that the application does what it is supposed to do. Requirement and functional specification documents, VersionOne exit criteria, defect-tracking tool data and customer information can be used as sources of capability-based testing ideas.

Failure Modes: Concentrate on "what if" questions, which are often inspired by how a system is designed. What data can be wrong, missing or incorrectly structured? Does the system have to synchronize with other systems or events? Ask yourself what happens if they break, and think of unanticipated failures. List these ideas against failure modes.

Usage Scenarios: Ideas can be derived from end-user product usage, experience gained through customer defects, and identifying who is using the system, what they are trying to achieve, and in what context. Think about whether a user can achieve their tasks with the software under test. Thinking about how to model the operational workflow, integrations and interfaces drives a lot of user-scenario charters.

Creative Ideas: Use lateral-thinking techniques, imagine what can go wrong and how differently the product can be used from the customer's standpoint, and generate fresh new ideas and alternate possibilities to bring effective solutions. List all such ideas against this charter.

Quality Factors: These are characteristics that must be present in a system. Include usability, availability, scalability and maintainability aspects in this charter while creating a test plan. Quality-factor test ideas often involve experiments to determine whether a quality factor is present; examples include performance, load and stress testing.

Environments: Explore how the application behaves in different operating environments: different operating systems, hardware, software, third-party software, and so on.

Taxonomies: Bug taxonomies give a rich source of test ideas. As these are organized, documented collections of bugs, they give more insight into the product and the issues faced by customers.

Data: Data is a rich source of testing ideas. Data flow paths can be exercised, different data sets can be used, data can be built from combinations of different data types, and stored procedures can be verified.

States: Use state models to identify test ideas such as getting into states, exercising state transitions, and navigating paths through the system.

Summary

In the JIT approach, we create a test plan for each story, containing the "charters": the prime areas of interest, each holding many test cases. The test plan of a story should essentially contain: a brief description of what has been implemented; an acceptance criterion; the epic, project and sprint names; detailed charters generated from different idea sources; a mind-map diagram to demonstrate the test ideas; risk-association columns, time estimations and status; and the priorities defined for the charters. In brief, the JIT approach delivers the right balance between planning, documentation and test execution. It shows different smart ways to test an application with minimal specifications. It is not about urgency or speed; rather, it is about thinking critically about what has to be tested, thus illuminating the necessary work to ensure that the product is tested effectively.


Short Article

“You cannot control what happens to you, but you can control your attitude toward what happens to you, and in that, you will be mastering change rather than allowing it to master you.” — Brian Tracy

You may think: why this reference to "emotions" in the work context? The thought is natural, because we often tend to associate emotions with our kith and kin, and more so with the home environment.

Emotional intelligence increases individual occupational performance, leadership, and organizational productivity.

In this rapidly changing world, our priorities are shifting, with increased importance attached to work, roles, achievement and success. We are constantly gearing ourselves up to face the competitive world in an efficient and hazard-free manner.

An employee with good emotional intelligence (EQ) manages challenges and changes in the workplace with adaptability.

Your reaction to stressful situations in personal life might be to shout, argue, or pity yourself, but it cannot be the same at the workplace. You need to manage emotions well under all circumstances to be productive and professional. Emotional intelligence (EQ) is the ability to identify, use, understand, and manage emotions in positive ways to relieve stress, communicate effectively, empathize with others, and overcome challenges. In this fast-paced environment, emotions occupy a more significant role than ever before, because productivity at the workplace demands a greater understanding of the roles assigned to us and to others, of expectations, priorities, and so on. Successful organizations are made not just of buildings and other material assets, but to a great extent of human resources. Intelligence might help one get a job, but emotional intelligence equips one to face the problems and challenges that are likely to arise while managing the job. A good understanding and awareness of such workplace situations should help us cultivate emotional intelligence consciously, because man is a social being who evolves with time.

Workplace Scenarios - Emotional Quotient

An employee with high emotional intelligence manages his or her own impulses, communicates with others effectively, manages change well, and solves problems. Such employees have empathy and remain optimistic even in the face of adversity. Composure in stressful, chaotic situations and clarity of thinking are what separate top performers from weak performers in the workplace.

EQ - Individual Professional Performance

Veronica, an employee with good EQ, is aware of the tricky situations created by Madeleine, one of her coworkers, who constantly tries to divide the team for her own selfish gains. A minor disagreement crops up between Veronica and her team members regarding a work-related issue. Madeleine adds much more to the issue and goads Veronica with suggestions to complain to the manager about the other team member. Veronica declines the suggestion, thinking such minor frictions in the work environment must be ignored. On the contrary, imagine a situation where Veronica is influenced by Madeleine's goading to complain: relationships would be put at stake for silly reasons, and unnecessary effort and time (of both the manager and the team members) would be spent dealing with minor conflicts, which otherwise resolve in due course through better communication. Sally, an employee, accepts the suggestions made by her manager during a 1-1 meeting for further improvement of certain skills, and decides to implement them. This is mainly because of good self-awareness and self-assessment (of one's strengths and limits); people with good EQ have this flexibility to be receptive. On the contrary, imagine a situation where an employee who lacks conscientiousness and has a bloated perception of his own capabilities gets into an argument with the manager and refuses

Radha Devaraj A technical writer in the EPM documentation team, who longs to be creative, positive, and passionate in whatever she undertakes, big or small. She holds an M.A. in English Literature.



to consider suggestions for improvement. Such people, who lack EQ and are rigid, are a threat to progress and productivity in the workplace. Rebecca is an optimistic employee in the ABC organization who manages to hold on in her current organization despite the demotivating tactics of her team members, such as false accusations, discrediting behavior, and so on. This is because of good self-management, which includes achievement drive, commitment, initiative, self-control and adaptability; there is passion to work for reasons that go beyond money and status. On the contrary, if employees in identical situations were to impulsively decide to quit the organization, or to bother the leaders in the hierarchy with frequent escalations, productivity would be at stake. Peter is an employee who is well aware of the hurdles placed by co-workers in the new project, which are affecting his progress. Owing to his self-confidence and assertiveness, he is able to clear the misconceptions woven around him and continue to be a productive employee. With his EQ, he is able to overcome the vicious problems created by some of his jealous coworkers and keep marching ahead productively. On the contrary, imagine what would happen if Peter were to succumb and slip into depression. Roger is an employee with good EQ who remains calm even under pressure, in both personal and professional situations. He has challenges to meet both on the home front and at the workplace, yet he never lets his personal problems interfere with the quality of his work. With focus and good control of his emotions, he has always proved efficient and accountable at his workplace. On the contrary, imagine an employee with poor EQ giving personal problems as excuses for not delivering well at the workplace, while also not taking measures to organize his time and effort.

EQ - Leadership Productivity

Managers and team leaders, who shoulder the major responsibility of handling varied projects, need good emotional intelligence to get the work done, to keep deadlines intact, and thus to drive growth by making the team work in fine coordination. Daniel, a manager, understands the blame-game antics in his team when one of the team members deliberately accuses another of poor performance and bad behavior. Having his own yardstick for measuring each one's actual performance, he does not entertain the complaining member. He knows very well that such tendencies do not create a healthy atmosphere in the workplace, and that blaming is a way of devaluing others; the blame game takes enthusiasm for the project away from the team. He has the required emotional intelligence to understand the workplace negativity that may lurk within, and the ways of purging it. On the contrary, imagine a situation in which a manager or team lead entertains such blame-game tactics without any discretion: the accused team member might be hurt, feel insecure, and decide to quit the job or deliberately contribute less. Delia, a team leader, has the EQ to identify the loss of group motivation caused by social loafing lurking in her team since the last project. John, a team member, has started taking less responsibility for certain tasks, assuming that one of the other group members will take care of them. A few other members have started finding teamwork frustrating, mainly because they carried the weight of the work during the previous project. Delia has the EQ to realize that all this is due to the sucker effect, in which people feel that others in the team will leave them to do all the work while taking the credit. With this understanding, Delia assigns specific tasks to members of the team and recommends creating a system for measuring individual performance and rewarding those who excel above and beyond the team goal. On the contrary, imagine a situation in which a team leader with poor EQ randomly assigns tasks in a team with no proper tracking and rewarding system in place: some team members might withdraw from contributing well for fear of carrying an unfair share of the workload, while a few others might free-ride, assuming that other team members will complete the task anyway. This makes teams less productive. David, a manager, has the EQ to concentrate on people development in his teams: he is able to sense what his teams need in order to grow, develop, and master their strengths. Harris, a manager with sound EQ, is able to identify workplace negativity factors such as jealousy and workplace mobbing, where co-workers gang up to force someone out of the workplace through humiliation, discrediting and isolation. Harris knows the motives for co-worker backstabbing, such as disregarding others' rights in favor of one's own gain, self-image management, revenge, jealousy and personal reasons. As a result of his good EQ, Harris is able to see through the ill motives of some team members and ensure fair treatment; his EQ is thus conducive to productivity in the workplace. Through fair measures, Harris is able to prevent employee dissatisfaction and withdrawal, thereby preventing a big loss to the organization in terms of human resources and output.

EQ - Organizational Productivity

Real business is based on relationships. A good team, an able leader, and managers and others in the hierarchy with good EQ can make a lot of difference in terms of customer-service orientation. Good EQ helps each of them anticipate, recognize, and meet the needs of customers, and people are willing to do business with those they know, like and trust. Leaders, team members and others in different roles with sound EQ can increase the overall productivity of an organization.

Concluding Message

Be emotionally intelligent and self-aware. Growth is an all-round phenomenon. Remember, when others grow with your help, you grow too; when you let others do their job well, you are doing your job well. The moment you start thinking of others' growth as an impediment to your success, you stop growing, because from then on you start wasting your effort and time on finding ways of snubbing the other person. In the process, even your psychological health suffers, because negative feelings such as jealousy, frustration, fear and anxiety often produce unwanted chemical reactions that are harmful to one's health and well-being.


Sureshkumar Thangavel A tech-savvy, passionate programmer and architect. His interests include security, functional programming and massively parallel, scalable systems. He likes natural food and organic farming. He works on Access Manager at Novell.

Asynchronous Reactive JVM Programming

Introduction

A long-waiting network call or a CPU-intensive computation blocks the current thread's execution. The asynchronous programming model delegates these computations to other threads or to the operating system, and thus makes full use of system resources to schedule other computations. Still, even creating additional threads may degrade performance, as threads are limited resources. The event-based reactive programming model executes code only when a useful event is triggered, such as a network call returning data or a user clicking a button. However, the event-based programming model also introduces nested-callback hell. RxJava Observables provide a way to escape from this hell and write declarative code to deal with asynchronous results. In this article, we will look briefly at the primitives for making long computations asynchronous, and at the advanced, powerful reactive-programming concepts of combining asynchronous computations and scheduling them by using RxJava's Observable sequences.

Groovy Closure

The code examples in this article make use of the Groovy language, mostly for passing blocks of code, called closures, to Java methods. Groovy is a dynamic language; when Groovy code is compiled to Java byte code, it runs on the JVM like normal Java code. Closures are important concepts in programming styles closest to functional programming, and have immense applications in places such as passing callbacks in GUI programming. Closures are computations enclosed with the data they operate on. Groovy closures in particular are objects implementing the Runnable and Callable interfaces, so they can be passed wherever a Runnable or Callable instance is expected.

Thread.start {println("hello")}

Code 1: Creating threads

All examples in this article can also be written with plain Java. RxJava uses Action and Function class variants to represent closures. Refer to reference [1] for closures and [2] for the RxJava API.

java.util.concurrent utilities

We will revisit some important utility classes in the java.util.concurrent package that make computations run in another thread. To create a computation that runs in a separate thread, we use "Thread" objects; Code listing 1 shows how to create a simple thread and run it immediately. A huge number of threads, more than your system can handle, degrades application performance. To avoid that, thread pools should be used. A thread pool creates a fixed number of "worker" threads, and the "work" is distributed to these workers. When there are more requests than workers, the requests are added to a queue and processed later. In this way, application performance is maximized according to the capability of the system rather than as a function of the incoming requests. Thread pools in the Java concurrency package are implemented by using the Executor interface. For example, the code above can be executed under a thread pool as shown in Code listing 2.

Executors.newFixedThreadPool(5).execute {println("hello")}

Code 2: Thread Pool Executor To abstract a task that has some computation and returns nothing, the “Runnable” interface is used in Java. In all the 27 IDC Tech Journal — August 2014


examples above, we used the Runnable interfaces in the form of Closures. The concurrent package also provides another interface “Callable”. This interface represents computations that also return a value as shown in code listing 3. This code runs the Closure in a new thread and waits for result. The new thread sleeps for 1 second and returns “hello”.

def v = {sleep(1000); return "hello"}.call()
println v

Code 3: A computation returning a result

In the Callable example above, we saw how to run a computation and access its result in the calling thread. But the computation does not execute until we call the "call" method on the Callable, and when called, the calling thread waits for the result. This is not asynchronous. To model a computation that runs in a new thread without blocking the current thread, with the result accessed later, Java provides "Futures". Futures are useful when the caller makes a time-consuming network call or computation: instead of waiting for the result, the caller can do other things and access the result when ready. For example, the following code makes a network call involving latency (searching for an object in an LDAP directory) and processes the result when it is ready.

Executor executor = Executors.newFixedThreadPool(5)
Future<String> future = executor.submit({
    println("running...")
    LdapContext ctx = new InitialLdapContext(get_env("ldap://164.99.86.49:389",
            "cn=admin,o=novell", "novell"))
    SearchControls sc = new SearchControls()
    sc.setSearchScope(SearchControls.SUBTREE_SCOPE)
    def sr = ctx.search("o=novell", "(mail=admin@novell.com)", sc)
    if (sr.hasMore()) {
        SearchResult result = sr.next()
        println("result is ready")
        return result.attributes.get("cn")
    }
    return "unavailable"
} as Callable)
// do something else; when ready, access the result of cn
println("doing something else")
sleep(1000)
println("ready to call get...")
println future.get()

Code 4: Asynchronous LDAP search

Future

The Executor.submit method submits a Callable for asynchronous execution and returns a "Future" object. A Future is asynchronous until you call its "get" method. When "get" is called, the current thread receives the result immediately if the Future has already completed; if not, the "get" call blocks. This is useful when you have finished your other tasks and cannot proceed without the result of the asynchronous computation. Futures can also be canceled. For example, if you are using a Future to populate a list box, but the user navigates to another screen and destroys the list box before the Future completes, you can cancel the Future. The Future is canceled only if it has not already completed.
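A minimal sketch of cancellation (illustrative, not from the original listings; it reuses the executor from Code listing 4):

// The user navigated away: stop the work if it is still running.
Future<String> f = executor.submit({ sleep(5000); return "slow result" } as Callable)
boolean canceled = f.cancel(true)   // true = interrupt the worker thread if it already started
println("canceled: " + canceled)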

Futures are good for single-level asynchronous operations: it is easy to make an asynchronous call, do something else, and wait for the result. But whenever you must wait on a "get" call, the code becomes synchronous again. And if you make multiple simultaneous asynchronous calls, it becomes difficult or impossible to keep the subsequent calls asynchronous. The following example creates two asynchronous tasks:

def printtime(x) {
    println("time: " + new Date() + " value: " + x)
}
Future<Integer> participant1 = executor.submit({sleep 3000; return 100} as Callable)
Future<Integer> participant2 = executor.submit({sleep 2000; return 200} as Callable)
printtime("main checkpoint 1")
printtime participant1.get()
printtime participant2.get()

Code 5: Sequencing Futures

Even though the result of the second task is available within 2 seconds, we cannot act on it until the first task completes, which takes 3 seconds. This is because we assumed the first task would complete soon and called participant1.get on it, which makes the program synchronous at that point. To avoid this, we have to poll all asynchronous tasks one by one and decide what to do when each is done, as shown below:

def printtime(x) {
    println("time: " + new Date() + " value: " + x)
}
def participants = [executor.submit({sleep 3000; return 100} as Callable<Integer>),
                    executor.submit({sleep 2000; return 200} as Callable<Integer>)]
while (!participants.isEmpty()) {
    def itr = participants.iterator()
    while (itr.hasNext()) {
        def p = itr.next()
        if (p.isDone()) {
            printtime p.get()   // do something useful with the result of p
            itr.remove()
        }
    }
}

Code 6: Polling Future completions and getting results

To avoid such complexities and to be able to operate on results whenever they become available, a reactive model of programming is recommended. Reactive programming with RxJava provides "Observables" to solve this; they are covered in the next section.

RxJava Observables

RxJava is a library that provides reactive programming patterns and methods to compose asynchronous and event-based computations. RxJava introduces two constructs, Observable and Observer, which together provide the ability to operate on discrete events asynchronously and to compose those asynchronous computations. To understand Observables clearly, let us compare synchronous and asynchronous computations involving a single result and multiple results:

                Single Result    Multiple Results
synchronous     T                Iterable<T>
asynchronous    Future<T>        Observable<T>

Synchronous computations return either a value or a sequence of values of type "T"; the sequence is represented by the "Iterable" interface. This operates on a pull model: the caller pulls the result of the computation when required, and that pull is a blocking operation that makes the current thread wait. Asynchronous computations, in contrast, return immediately without waiting for results; they return references to the computations, which can be checked for results later. For a single result they return "Future<T>", and for multiple results they return the "Observable<T>" object. A single result must still be pulled out of the Future, whereas with Observables an object called an "Observer" subscribes to them, and results are "pushed" to the observers whenever they become available. This is what makes the Observable reactive: it notifies the observer as soon as a result is available.
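To make the pull/push contrast concrete, a minimal sketch (the values are illustrative):

// Pull model: the loop pulls each value, blocking between values.
for (x in [5, 6, 7]) { println(x) }

// Push model: values are pushed into the callback as they become available.
Observable.from([5, 6, 7]).subscribe { x -> println(x) }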


Illustration 1: Marble diagram representing an Observable pushing data to subscribed Observers

Observable is an extension of the "Observer" pattern that also provides mechanisms to report errors and to notify completion of the data, which makes it suitable for asynchronous event-based streams. The two main concepts in RxJava are "Observables" and "Observers": Observables push data, and Observers subscribe to Observables. An Observer registers three callbacks with an Observable: onNext to receive data, onCompleted to be notified when the event stream completes, and onError to be notified of errors. When an error is encountered, the Observable stops sending data to the Observer. The subscribe method on Observable returns a "Subscription" object, with which the Observer can stop receiving data before the stream completes.

Observable

Observables are created by using one of the many factory methods available in the Observable class. The Observable.from family of methods converts synchronous objects into asynchronous events; for example, Observable.from(Iterable) converts a sequence of data into an Observable that can be subscribed to. The Observable.create factory method creates a new Observable from the closure passed to it; this closure is executed synchronously when subscribed. For example, the following creates an observable from a list, subscribes to it, and prints each value:

Observable
    .from([5, 6, 7])
    .subscribe {x -> println(x)}

Code 7: Creating an Observable from a list

The creation functions can also perform blocking computation inside the Observable, which blocks the calling thread. For example, the following wraps a blocking network call:

Executor executor = Executors.newFixedThreadPool(5)
def ldapConnect(env) {
    Observable.from(executor.submit {new InitialLdapContext(env, null)})
}

Code 8: Creating an Observable from a Future

With Observable.create, you can write custom code that pushes arbitrary values asynchronously to the observer through observer.onNext. Observable's create function receives an OnSubscribeFunc object, which holds a reference to the observer; the code inside the OnSubscribeFunc is called every time a subscriber subscribes to the Observable.

Observable.create { observer ->
    SearchResult answers = ldap.search(base, attr, sc)
    while (answers.hasMore()) {
        observer.onNext(answers.next())
    }
    answers.close()
    observer.onCompleted()
    Subscriptions.empty()
}

Code 8a: Observable returning LDAP search results
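All three Observer callbacks can be registered in one subscribe call. A minimal sketch (illustrative values; this assumes the subscribe(onNext, onError, onCompleted) overload, to which Groovy coerces the closures):

Observable.from([1, 2, 3]).subscribe(
    { x -> println("onNext: " + x) },             // data callback
    { Throwable e -> println("onError: " + e) },  // error callback
    { println("onCompleted") }                    // completion callback
)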

Merely creating these Observable objects does not execute them; the code above with the loop is not called until some observer subscribes to it. Such Observables are called "cold" observables: they do not start emitting data until they are subscribed to. Once subscribed, the Observable becomes "hot" and starts its asynchronous computation.
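A minimal sketch demonstrating this laziness (the print statements are illustrative):

def cold = Observable.create { observer ->
    println("producer runs")          // not printed at creation time
    observer.onNext("value")
    observer.onCompleted()
    Subscriptions.empty()
}
println("observable created, nothing executed yet")
cold.subscribe { x -> println("received: " + x) }   // "producer runs" prints only now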

Subscriptions


Subscription objects hold references to asynchronous computations. To cancel an asynchronous computation, the subscription referencing it has to be unsubscribed; unsubscribing disconnects the Observer from the Observable. Subscriptions can also be used to run clean-up code when unsubscribing happens, or to terminate the asynchronous computation itself. For example, in Code listing 8a, when an Observer unsubscribes, continuing the inner loop over all the SearchResults wastes CPU cycles, so the loop should be signaled to terminate early. The Subscription.isUnsubscribed() method enables this decision. The modified example looks like the following:

def observableSearch(LdapContext ldap, String base, String attr, SearchControls sc) {
    Observable.create { observer ->
        Subscription sub1 = new BooleanSubscription()
        def answers = ldap.search(base, attr, sc)
        while (answers.hasMore() && !sub1.isUnsubscribed()) {
            observer.onNext(answers.next())
        }
        answers.close()
        observer.onCompleted()
        sub1
    }
}

Code 9: LDAP search results that can be terminated

There are many variations of Subscriptions. An important one is CompositeSubscription, which combines multiple subscriptions; when the composite is unsubscribed, all of its child subscriptions are unsubscribed as well.
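A minimal sketch of combining subscriptions (the observables are illustrative; CompositeSubscription comes from rx.subscriptions):

def composite = new CompositeSubscription()
composite.add(Observable.from([1, 2, 3]).subscribe { println(it) })
composite.add(Observable.from(["a", "b"]).subscribe { println(it) })
// Later, tear everything down in one call:
composite.unsubscribe()   // unsubscribes both child subscriptions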

Composing Observables

When there are multiple asynchronous computations, their data may arrive at random times. It becomes very difficult to wait for one result and then combine it with the results of the other asynchronous computations; if the caller has to wait, the call becomes blocking and synchronous. For example, in the examples above we used one Observable to create the LdapContext object and another Observable to produce the search results, and the search-results Observable requires the result of the LdapContext Observable. So we need a way to sequence them one after another. Many of Observable's composing operators help here. The simplest and most common form of sequencing is one after another, usually done with the map and flatMap methods on Observable. The map method takes a closure as an argument and passes the result of the first Observable into the closure; the closure can return a result of another type. The following example shows how the search can be sequenced:

ldapConnect("ldap://164.99.86.49:389", "cn=admin,o=novell", "novell").flatMap { ldap ->
    SearchControls sc = new SearchControls()
    sc.setSearchScope(SearchControls.SUBTREE_SCOPE)
    return observableSearch(ldap, "o=novell", "(objectClass=Top)", sc)
}

Code 10: Composing two Observables

If the closure itself returns an Observable, the overall result becomes an Observable of Observables, that is, Observable<Observable<SearchResult>>. flatMap flattens these nested Observables into a single Observable<SearchResult>, which lets the caller consume SearchResult directly instead of subscribing at two nested levels. When subscribed, the first Observable pushes its result into the "ldap" variable of the closure passed to flatMap; the closure consumes this variable, passes it to observableSearch, and returns the result of observableSearch. This combined Observable is still "cold": no code inside it executes until an Observer subscribes, as shown in the following example:

ldapConnect("ldap://164.99.86.49:389", "cn=admin,o=novell", "novell").flatMap { ldap ->
    SearchControls sc = new SearchControls()
    sc.setSearchScope(SearchControls.SUBTREE_SCOPE)
    return observableSearch(ldap, "o=novell", "(objectClass=Top)", sc)
}.subscribe {SearchResult result ->
    println(result.attributes.get("cn")?.get())
}

Code 11: Subscribing to Observable

Because the final Observable is of type Observable<SearchResult>, the Observer subscribes to it by calling the subscribe method and passing a closure again. This time the closure takes an argument of type "SearchResult"; the Observable pushes the SearchResult data into this variable for every result, and the closure prints the "cn" attribute of the result if it is not null. If you look closely, the code above reads almost like synchronous imperative code, yet the closures are executed asynchronously. But what happens if there is a network communication error, or an exception arises inside these Observables? The example above does not print any error; it ignores them. To catch errors, the observer has to subscribe to errors as well: the subscribe method has an overload that takes another argument, an Action closure invoked on error.


ldapConnect("ldap://164.99.86.49:389", "cn=admin,o=novell", "novell").flatMap { ldap -> SearchControls sc = new SearchControls() sc.setSearchScope(SearchControls.SUBTREE_SCOPE) return observableSearch(ldap, "o=novell", "(objectClass=Top)", sc) }.subscribe {SearchResult result -> println(result.attributes.get("cn")?.get()) } { // second closure to capture errors in all the above Observables Throwable e → e.printStackTrace() throw e }

Code 12: Reacting to errors

Schedulers

Well, the examples above are cheating us a little: they are not yet asynchronous! By default, Observables and Observers are synchronous. The sample code in the Observables section runs in the same thread as the calling thread, so those examples are actually blocking, synchronous code. The real magic happens with Schedulers, which make the Observables above run asynchronously in separate threads. Schedulers are abstractions over the Executor interfaces, and RxJava provides default scheduler objects for I/O, computation, and thread pools. Observable has a method called "subscribeOn" that makes the code in the "OnSubscribeFunc" of the create method run asynchronously on the given scheduler's thread. Similarly, the "observeOn" method makes the observer code run on another scheduler. So, to make the example above truly asynchronous, modify the code as follows:

ldapConnect("ldap://164.99.86.49:389", "cn=admin,o=novell", "novell").flatMap { ldap ->
    printThread(ldap)
    SearchControls sc = new SearchControls()
    sc.setSearchScope(SearchControls.SUBTREE_SCOPE)
    return observableSearch(ldap, "o=novell", "(objectClass=Top)", sc)
}.subscribeOn(Schedulers.io())
 .observeOn(Schedulers.newThread())
 .subscribe {SearchResult result ->
    println(result.attributes.get("cn")?.get())
}

Code 13: Scheduling on threads to make observables really asynchronous

This runs the Observable on the I/O scheduler thread and the observer callback on a newly created thread. In this way, the calling code can decide whether to run the Observable synchronously or asynchronously. This separation of computation from scheduling gives great flexibility to the consumer of the computation. It is useful in automated unit testing, where the application logic can run synchronously to avoid nondeterminism, while the same code runs asynchronously in production.
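For example, a test could pin both sides to the current thread. A minimal sketch, assuming RxJava's Schedulers.immediate(), which executes work synchronously on the calling thread:

ldapConnect("ldap://164.99.86.49:389", "cn=admin,o=novell", "novell").flatMap { ldap ->
    SearchControls sc = new SearchControls()
    sc.setSearchScope(SearchControls.SUBTREE_SCOPE)
    return observableSearch(ldap, "o=novell", "(objectClass=Top)", sc)
}.subscribeOn(Schedulers.immediate())
 .observeOn(Schedulers.immediate())
 .subscribe {SearchResult result ->
    assert result != null   // runs deterministically on the calling (test) thread
}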

Conclusion

Observables are not only for asynchronous network calls, as shown in the examples above; they can be used for any "event"-based programming, where code runs in "reaction" to "events". Because Observables can be composed and work with functional closures, this functional programming model provides powerful abstractions for dealing with asynchronous data. Another area where this is widely applied is GUI toolkits. GUI toolkits respond to user-input "events" asynchronously with callbacks, and any GUI-updating code must run in the GUI thread; this too introduces callback hell, global state modification, and cross-cutting concerns across objects. For example, to react to button clicks, the code can be written as follows:


SwingObservable
    .fromButtonAction(btnConnect)
    .observeOn(SwingScheduler.instance)
    .subscribe { txtFilter.setEnabled(true) }

Code 14: Reacting to Swing button clicks

Furthermore, with Observables, GUI events and network events can be combined. For example, the code below enables the "search filter" text box only after the user clicks the "btnConnect" button and a successful connection is made to LDAP by calling the "ldapConnect" closure.

SwingObservable
    .fromButtonAction(btnConnect)
    .flatMap({ btn ->
        ldapConnect("ldap://164.99.86.49:389", "cn=admin,o=novell", "novell")
    })
    .subscribeOn(Schedulers.io())
    .observeOn(SwingScheduler.instance)
    .subscribe(
        { txtFilter.setEnabled(true) },
        { println("error connecting to ldap") }   // onError callback
    )

Code 15: Combining GUI events with network events

With Observables, writing asynchronous code that reacts to events in near real time becomes very simple. However, when writing fully asynchronous code, pay attention to threads and to which thread each piece of code executes on. For simple single-valued asynchronous computations, use Futures; for most more-than-trivial tasks, use Executors; for complex event-based asynchronous tasks, Observables are unbeatable. A complete example of an RxJava Swing application that browses an LDAP store is available in my public git repository; see link [5].

References
1. Groovy Closures: http://groovy.codehaus.org/Closures
2. RxJava wiki: https://github.com/Netflix/RxJava/wiki
3. Intro to Rx, Rx as introduced in .NET: http://www.introtorx.com/Content/v1.0.10621.0/01_WhyRx.html#WhyRx
4. Functional Reactive in the Netflix API with RxJava: http://techblog.netflix.com/2013/02/rxjava-netflix-api.html
5. Sample LDAP browser with Groovy Swing and RxJava: https://github.com/tsureshkumar/pubsamples/blob/master/groovy/async-groovy/src/main/java/browser.groovy


The ability to store files and access them from anywhere has changed the way we store our personal data. A number of providers offer anywhere from 2 GB to 25 GB of free storage. Many of us sign up with a few of these providers and get a cool 60-80 GB of free storage, more than sufficient for a personal collection of documents, photos, and music. Some of the services you can look at are Google Drive, Microsoft OneDrive, Dropbox, Box (personal), Apple iCloud, Amazon Cloud Drive, and Bitcasa. One concern everyone shares is the privacy and security of their data. There are many solutions for encrypting data stored in the cloud, for example Viivo, Boxcryptor, Sookasa, and DataLocker. One solution that stands out in the crowd is Viivo, a free cloud file-encryption service developed by PKWARE, the company that invented ZIP decades back. Viivo uses public-key cryptography to secure files on the device itself before they are synchronized to the cloud storage provider. Each user has a private key that is generated from the user's password using PBKDF2 HMAC SHA-2 and encrypted with AES-256, and independent keys are used for each cloud storage provider. Viivo is very simple to use, secure in the sense that data is never transferred to the cloud service in the clear, and available on most desktop and mobile devices. They claim that defense-in-depth is the approach they take with Viivo.

