
No. 24 • June 2014



Java 8 – collections processing Object modeling of Selenium tests

Visual Studio Online The Risk of Not Having Risks The Product Owner story The StackExchange Network QA professionals – growth and development The Joy and Challenges of Writing Quality Software

Securing Opensource Code via Static Analysis (I) „Cloud” – perspective matters! PostSharp Advantages of using Free software Options to prevent others from taking advantage of your trademark

6 Techsylvania Ovidiu Mățan

8 JS Camp 2014 Tudor Trișcă

10 I T.A.K.E. Unconference Cristi Cordoș

11 How to Web MVP Academy Irina Scarlat

12 Interview with Dan Romescu Ovidiu Măţan

14 Java 8 – collections processing Ovidiu Simionica

16 Microsoft Business Intelligence Tools Cristian Pup

19 Adding value when developing. A simple example Alin Luncan

21 Object modeling of Selenium tests Corina Pip

25 The Risk of Not Having Risks Ramona Muntean

28 Product Inception Alexandru Bolboacă and Adrian Bolboacă

31 The Product Owner story Bogdan Giurgiu

34 Visual Studio Online Marius Cotor

38 The StackExchange Network Radu Murzea

42 QA professionals – growth and development Mihaela Claudia

44 The Joy and Challenges of Writing Quality Software Cătălin Tudor

46 What’s wrong with the Romanian IT Industry? Ovidiu Șuța

48 Why Is It That Our Children Do Not Dream Of Becoming Project Managers? Laurențiu Toma

50 „Cloud” – perspective matters! Florin Asavoaie

51 PostSharp Radu Vunvulea

54 Securing Opensource Code via Static Analysis (I) Raghudeep Kannavara

56 Advantages of using Free software Attila-Mihaly Balazs

58 Why Invest in Professional Management? Dan Schipor


60 Options to prevent others from taking advantage of your trademark Claudia Jelea



Ovidiu Măţan
Editor-in-chief Today Software Magazine

I never thought it would be possible, but today I had lunch with Richard Stallman, the person who wrote Emacs and GCC, created GNU and started the free software movement. A legend, who delivered a presentation in Cluj through the foundation. His speech focused on the difference between open source and free software.

The concept of free software was defined by Richard Stallman when he started the GNU project and, in 1985, founded the Free Software Foundation. Its purpose is to promote the user's freedom to use and modify the source code as he wishes and to redistribute it, provided that he keeps the same GPL 3 license for the resulting product, too. Thus, we have the certainty of creating not just an application, but an entire open ecosystem that is maintained this way. The opposite of this concept is proprietary software, which, in Richard Stallman's opinion, should not be used, as it leaves us at the mercy of the programmer and of big corporations. The example given was representative: in 2009, Amazon deleted George Orwell's book "1984" from all its users' devices, without informing the users or requesting their consent. The issues invoked by Amazon at the time were copyright related. Open source, by contrast, aims to optimize the code as well as possible, without taking into consideration the fact that the code may be integrated in applications that affect the users' freedom. Why use free software? Because that software is validated by the community, and the community does not accept pieces of code which might turn an application into malware. During the discussion with Richard Stallman, I found out that the first Emacs version was written in 4-5 months. We also got the promise of an interview, which will be published in one of the following issues of the magazine.

Another important event that took place in Cluj was Techsylvania, an event dedicated to entrepreneurship and startups. It featured famous entrepreneurs such as HP Jin, CEO of Telenav, Josef Dunne, co-founder of Babelverse, and Rohan Chandran, Head of Product at Telenav. For more details, you can read the article dedicated to this event.

I would also like to mention the magazine release event organized in collaboration with the Cluj IT Innovation Cluster, "IT in Brasov - Collaboration opportunities", hosted by CRIsoft. We welcomed a numerous and very interested audience, to whom we promise to return. There were talks on the idea and motivation behind the Cluj IT Cluster, the Cluj IT ecosystem from the startup perspective, the importance of documentation in project management, best practices in Agile, frameworks for web development in Java, as well as financing opportunities for companies.

This issue offers you a series of technical articles written from the authors' experience: Java 8 – collections processing, Microsoft Business Intelligence Tools, Visual Studio Online, PostSharp and Securing Opensource Code via Static Analysis (I). The Risk of Not Having Risks reminds us of risk management, while The Product Owner story presents the challenges faced by a product owner. In The StackExchange Network you can read about the creation and development of the ecosystem to which the well-known site belongs. Object modeling of Selenium tests and QA professionals – growth and development cover the domain of testing, and the challenges of the local industry are debated in What's wrong with the Romanian IT Industry? We end with an article from the legal area: Options to prevent others from taking advantage of your trademark.

Enjoy your reading!

Ovidiu Măţan

Founder of Today Software Magazine


no. 24/June |

Editorial Staff

Editor-in-chief: Ovidiu Mățan
Editor (startups & interviews): Marius Mornea
Graphic designer: Dan Hădărău
Copyright/Proofreader: Emilia Toma
Translator: Roxana Elena
Reviewer: Tavi Bolog
Reviewer: Adrian Lupei
Accountant: Delia Coman

Made by Today Software Solutions SRL
str. Plopilor, nr. 75/77, Cluj-Napoca, Cluj, Romania
ISSN 2285 – 3502
ISSN-L 2284 – 8207

Authors list

Bogdan Giurgiu, Group Product Owner @ Endava
Claudia Jelea, Lawyer @ IP Boutique
Radu Vunvulea, Senior Software Engineer @ iQuest
Ramona Muntean, Measurements & Best Practices Specialist @ ISDC
Tudor Trișcă, Team Lead & Scrum Master @ msg systems România
Alin Luncan, Software Engineer @ Accesa
Cătălin Tudor, Principal Software Engineer @ Ixia
Cristian Pup, Software Developer @ Yardi Romania
Alexandru Bolboacă, Agile Coach and Trainer, with a focus on technical practices @ Mozaic Works
Laurențiu Toma, PMP, Project Manager @ SOFTVISION
Florin Asavoaie, DevOps @ Yonder
Radu Murzea, PHP Developer @ Pentalog
Ovidiu Șuța, QA & Bid Manager @ ISDC
Ovidiu Simionica, Team Lead @ Fortech
Corina Pip, Senior QA Engineer @ Betfair
Dan Schipor, Management Partner @ Casa de management Daris
Raghudeep Kannavara, Security Researcher, Software and Services Group @ Intel USA
Mihaela Claudia, Senior QA Engineer @ HP România
Attila-Mihaly Balazs, Code Wrangler @ Udacity, Trainer @ Tora Trading
Marius Cotor, Technical Lead @ 3Pillar Global

Copyright Today Software Magazine. Any total or partial reproduction of these trademarks or logos, alone or integrated with other elements, without the express permission of the publisher, is prohibited and engages the responsibility of the user as defined by the Intellectual Property Code.





Techsylvania

Ovidiu Măţan
Editor-in-chief Today Software Magazine

On the 2nd of June I took part in Techsylvania, an event 100% dedicated to entrepreneurship. Such an event, which could be considered to target only a limited part of the audience of the local industry, represented a premiere for Cluj.

The first part of this event was a local hackathon, where the participants had the opportunity to develop applications for devices such as Google Glass, Little Printer, Onyx Beacons, Leap Motion, Sphero, Sony SmartWatch 2, Pebble, Oculus VR and the EyeTribe Tracker. The result of the 24-hour hackathon was spectacular as far as innovation is concerned, beating many of the startups we have seen lately. We ask ourselves: why aren't such devices permanently available in a Cluj IT club? We, at TSM magazine, have been trying for a while to create such a place, but the biggest challenge is finding the actual space where the club could carry out its activity. Of course, fully achieving it is only possible with the involvement of the companies in the field, which have shown their willingness. Coming back to Techsylvania, it took place at Cluj Arena, the same place where

IT Days was also held, so it was a pleasure to see this place again, in a new organizational formula. The event began with the presentation of the executive producers, Vlad Ciurca and Oana Petrus. The keynote was delivered by HP Jin, CEO of Telenav, and consisted in a story of success and of the importance of perseverance in everything we do. He started as a mere student in China, but struggled to obtain a scholarship in the USA, which was the first step towards the chance to create his own software company. He managed to obtain a scholarship at Stanford, given that only one scholarship was allotted for each study domain for the whole of China. There he studied aeronautics and astronautics, and it was also there that he came up with the idea for his company: GPS navigation. Thus, the first GPS navigation application for mobile phones was released.


He raised money for financing, and his way of getting it was a simple presentation; he stated, though, that personal relationships mattered foremost. Bad luck followed Telenav during its period of growth. The day they decided to come to Europe for fundraising, the 9/11 attacks took place. A day before announcing their intention to list on the stock exchange, Google announced its free maps. The series of bad luck does not end here: on the day the IPO roadshow started, Microsoft also announced free maps. The Iceland volcano, which interrupted flights for several days, erupted during a trip to Europe meant to promote the company before the listing. Despite all these setbacks, Telenav became a successful company, which has recently purchased Skobbler. HP Jin's dream is to take Telenav from the Top 100 to the Top 10, and his investment portfolio includes projects such as building a flying machine. The advice he gave the participants was: God favours you because you are persistent.

Marcus Segal, former COO of the Casino Division at Zynga and entrepreneur, made an impression through the multitude of domains he has worked in. He started in the music industry, as a VP of a service selling music albums online at the beginning of the 2000s. Being an utter novelty at the time, they faced the challenge of educating the users, who often did not understand why their purchase would not result in a physical disc of the purchased album. Eventually, Napster appeared, which affected the sales, and the company was purchased by Universal. Marcus also gave some practical advice: belong to a community, and do not search for pegacorns (a pegasus with a unicorn's horn). We also mention his unique approach consisting in a legitimate question that should be addressed to a VC: is there anyone in your network who could help me?

Andy Piper, developer advocate at Twitter, presented an integration of Twitter and the Internet of Things, such as monitoring the humidity of the soil where plants are growing and alerting the owner by a tweet the moment watering is required. Of course, the Twitter API was also presented, with a few practical examples.

Josef Dunne, co-founder of Babelverse, presented the nemawashi concept, the procedure of taking up a tree and re-planting it somewhere else. His journey in the world of startups is spectacular: he started with the idea of Babelverse at a Startup Weekend in Greece, from where he moved to Chile for the Start-Up Chile program. Among other adventures, to a great extent caused by the lack of financing, he reached the stage of LeWeb, TheNextWeb and TechCrunch Disrupt. His dream of having no constraints on the development of Babelverse caused him to turn down a 2 million dollar financing at some point. His disruptive and humanitarian spirit materialized in a 24-hour hackathon after the tsunami in Japan on the 11th of March 2011; the result was a very useful application for communicating with those in distress.

Voicu Oprean, CEO of AROBS, presented the company's success story and ways of stimulating employees to develop their own product ideas. In the end, I would also like to mention Rohan Chandran, Head of Product and Global Services at Telenav, who presented the professional side of product development. The success of a product is not a lottery, but an iterative cause-effect analysis according to which decisions are made. It was a pleasure for us to take part in the first Techsylvania edition, which stood out in the entrepreneurship area of the local IT environment, and we hope it will become a local tradition.



JS Camp România 2014


On the 3rd of June 2014, the international conference JSCamp (a JavaScript conference for Romania and Eastern Europe), the first of its kind in our country, took place in Bucharest. The event was dedicated mostly to people working in Web Design and Front End Development.

Tudor Trișcă Team Lead & Scrum Master @ msg systems România



This first edition was split into four intensive sessions about web development trends, case studies and international experiences, open-web technologies and advanced tools. With a full room, Robert Nyman starts the first session. He is a Technical Evangelist at Mozilla and the Editor of Mozilla Hacks. His presentation is called "The five stages of development", in which he draws a correlation between developing a software project and the Kübler-Ross model, the five stages of grief: denial, anger, bargaining, depression and acceptance. He also presents a new project at Mozilla, Open Web Apps: applications written in HTML5 and JavaScript that work on every platform. Another piece of news is the launch of a feedback channel for Mozilla, where developers can express their opinions and ideas about

Mozilla projects: "Making the web a better place isn't only about trying to figure out what's right – it's about listening to people, gathering their thoughts and ideas and letting that help us achieve better results". Tero Parviainen, an independent software specialist, gives the second presentation, called "How to build your own AngularJS". He talks about three strategies that can be applied when someone wants to use a JavaScript framework in their project: Rejection, Study and Build. He focuses on the third strategy: to gain a deeper understanding of the AngularJS framework and to know when and how it can be used, the recommended option is for the developer to build a similar system from scratch, something like an AngularJS clone, in order to form a mental model. A demo is also presented,

where he implements an application called RocketLauncher, exemplifying the way dependency injection is built in AngularJS. Sebastian Golasch, Specialist Senior Manager Software Developer at Deutsche Telekom, opens the second session with a presentation called "The Glitch in the game". The topic chosen is the testing of web pages, which represents a real challenge nowadays. He presents different tools and techniques for preventing glitches, failures and strange behavior of web pages. He also talks about how to automate image comparison, test CSS code, use monkey testing and test performance so that it stays consistent over time.

Phil Hawksworth, JavaScript developer at R/GA, has specialized in developing web sites since the end of the '90s. He talks about "Static Site Strategies". His presentation covers the characteristics of static sites and a series of services and tools that can be used to create robust, high-performance sites that can become more dynamic than heavier and more costly ones. He also talks about building faster, smarter sites, without the need for complex back-ends, and presents the concept "Bake, don't fry", a "healthier" way that reduces complexity, makes development easier and increases portability.

Martin Kleppe, co-founder and Head of Development at Ubilabs, talks about "Minified JavaScript Craziness". His presentation is about the art called code golfing, that is, how to write complex programs in less than 1K of JavaScript. Some of the examples demonstrated were a spinning globe, the game 2048 and Flappy Bird. He also presented how to bypass security by using only six different characters.

Peter Müller closes the third session with a topic called "The no build system build system". He is Frontend Lead at Podio and an organizer of CopenhagenJS. In his presentation, he focuses on the build and optimization part of the development lifecycle. He explains why a build system is needed and what the general problem of build tools is, and he presents a new project called AssetGraph that helps create workflows and makes the code base more developer friendly.

Patrick H. Lauke, Accessibility Consultant at The Paciello Group, opens the last session with the topic "Getting touchy – An introduction to Touch and Pointer Events". He talks about how touch input is present not only in smartphones and tablets, but also in laptops and even desktop computers. His presentation covers the handling of touch events, from the simplest tap interactions to multitouch, gestures and the pointer events introduced by Microsoft.
The day ends with Vince Allen, Software Engineering Manager at Spotify, who talks about "Braitenberg and the Browser". Valentino Braitenberg was a neuroscientist who wrote a book describing simple imagined machines with analog sensors attached. Allen makes an analogy between the machines described by Braitenberg and the modern man with a mobile phone. He talks about the connection between

humans and devices. He also presents how JavaScript can be used to create such Braitenberg machines and other natural simulations in a web browser. For me, it was a very successful conference and the speakers kept me captivated until the end. I am excited about the next edition, which I hope will have even more great speakers and many more participants!




I T.A.K.E. Unconference

I T.A.K.E. Unconference took place in Bucharest on the 29th-30th of May. The conference was organized by Mozaic Works. I T.A.K.E. Unconference wanted, and successfully managed, to break the classical conference pattern. Besides the classical presentations and keynotes, it contained workshops, coding katas, coding contests and the interesting concept of "open space", where participants could gather spontaneously and approach various discussion topics. It was an event for geeks, the proposed themes being True Software Stories, Architecture and Design Practices, Beautiful Data and Technical Leadership. The first day started with Michael Feathers' keynote. Michael Feathers is the author of "Working Effectively with Legacy Code" and a former member of Object Mentor. His keynote revolved around Conway's law, which states that a company's organizational structure is reflected in its code design.

The first day ended with an "open space" in which people heatedly debated not only technical themes, like continuous integration, CQRS or code review, but also leadership themes, like the agile transition. In the last discussion, we met TargetProcess, a company organized in a democratic fashion, without management layers, similar to Valve's model.

And because people like to hear stories, on the second day we chose the "True Software Stories" track. Aki Salmi walked us through his story of adding two features to a code base with dirty code and no tests whatsoever, refactoring it along the way. Then Thomas Sundberg showed how one can do Behavior Driven Development with Cucumber JVM in a session of live coding. The morning ended with Andreas Leidig, the administrator of the Softwerkskammer community, who told us the technical story behind the community's website.

The last afternoon of the conference started with Felienne Hermans, associate professor at the Delft University, who showed a series of assertions deduced by data-mining spreadsheets, like: people don't choose a programming language based on technical criteria; knowledge of design patterns makes one's code more readable to others; and the most used programming languages are the least expressive.

Next we checked the Technical Leadership track, which included a workshop and a presentation held by Flavius Ștef, agile coach at Mozaic Works. The workshop was interactive: the participants debated various aspects of technical leadership, like the qualities of a technical leader, how to drive change in a technical organization, and the listening and communication skills of a leader. After lunch we stuck with Flavius for his "Scaling Agility: The Technical Angle" presentation, in which he talked about technical best practices that can help agile companies scale to a larger team.

The ending keynote was held by Tom Gilb, a veteran of the software industry, known for his contribution to the field of software metrics and evolutionary processes. Tom talked about the engineering approach to building software versus the empirical approach promoted by Scrum (according to him), and emphasized that the biggest barrier to the engineering approach is the lack of proper metrics. Since I T.A.K.E. Unconference was organized by Mozaic Works, a promoter of software craftsmanship and agile practices, it obviously ended with a retrospective. We are eagerly awaiting the next iteration!



Cristi Cordoș Engineering Manager @ Ixia



MVP Academy proudly presents its finalists


Bucharest, May 26th, 2014 – How to Web MVP Academy, a pre-acceleration program for CEE early-stage tech startups, proudly presents its finalist teams. Between June 2nd and July 22nd, they will work together with mentors and experienced professionals in order to develop their products and learn how to take advantage of the opportunities available on the global market.

Tech startups building products in software, hardware, Internet of Things, mobile, medtech or ecommerce submitted their applications to How to Web MVP Academy. They were evaluated by specialists in the field, taking into account team fit & experience, the size and trends of the market they address, initial market validation & traction, international potential and impact, as well as the overall feasibility of the product (scalability, customer acquisition cost). The 15 finalists were chosen after a careful evaluation of the applicants made by the jury. The judges that analyzed the competing teams were Cosmin Ochişor (Liaison Manager hub:raum), Zoli Herczeg (Partner Business Evangelist Microsoft Romania), Cristi Badea (Co-Founder & Chief Product Officer MavenHut), Bogdan Iordache (Co-Founder & CEO How to Web) and Monica Obogeanu (Program Manager How to Web MVP Academy).

Who are the teams that made it into How to Web MVP Academy? 15 teams from Romania, Bulgaria and Slovenia were selected to attend How to Web MVP Academy. They are building products with disruptive potential in software, hardware, Internet of Things, mobile or medtech. The teams that make up the first batch attending the pre-acceleration program are:

1. Axo Suits – High-powered exoskeletons
2. Bravatar – Network of fan communities centered around brands
3. Complement – Software for educational management, online catalog and personal development platform
4. Doctrina d.o.o. – Platform that facilitates product knowledge transfer to all employees in pharmacies, in a shorter time and at lower costs, through webinars
5. – Social platform that allows users to team up to fulfill dreams for friends
6. Fitter – Mobile app that offers a mobile personal trainer experience and connects each fitness studio with its customers in a personal, intuitive and efficient way
7. Gloria Food – Free online ordering system that allows restaurants to directly interact with their customers
8. Hickery – Web and mobile music player for extremely social people, which allows users to interact and make playlists by looking at their social media activity
9. Inovolt – Product that combines hardware and software components and allows users to visualize, measure, record and analyze energy generation and consumption over multiple circuits in real time
10. Lifebox – Mobile app that allows users to create common photo albums with their friends and thus create shared memories
11. Pillbuzz – Bracelet, set up through a mobile app, that functions independently and helps users respect their treatment plans
12. Pocketo – Mobile app that uses hardware components to help developers prototype faster for Arduino and Raspberry Pi with the help of a smartphone
13. Qalendra – Travel calendar dedicated to adventure travelers that allows them to get inspired by a collection of events, create travel wish lists and get alerts on upcoming events
14. StudyMentors – Online platform which connects prospective international students with current students and alumni from European universities
15. Wallet Buzz – Mobile app that connects retailers and smartphone users while delivering exclusive offers directly on their phones

Between June 2nd and July 22nd, the How to Web MVP Academy finalist teams will learn essential business knowledge that will help them develop their products and go global. Besides the educational component, the startups will have access to the TechHub Bucharest co-working space and the international tech community, and they will be connected with relevant mentors, angel investors, acceleration programs and early-stage investment funds. Moreover, the How to Web MVP Academy team will work closely with the teams to help them monitor their progress and get access to relevant resources that solve their specific needs.

How to Web MVP Academy is a program developed in partnership with Microsoft, Romtelecom and Cosmote, and supported by Bitdefender, Raiffeisen Bank, hub:raum and TechHub Bucharest. On June 2nd the teams will start working from the co-working space at TechHub Bucharest and, one day later, they will get the chance to meet the local mentors and find out everything they need to know about the program during the induction day. During the 7-week program the startups will work intensively to develop their products, and they will attend workshops and 1-to-1 mentoring sessions. The program will end with a Demo Day on July 22nd, when the teams will have the chance to pitch their products and showcase their progress in front of an audience comprising representatives of the most appreciated acceleration programs in Europe, angel investors, early-stage investment funds, journalists, entrepreneurs and technology professionals.

Irina Scarlat
Co-Founder of Akcees @ How To Web



Interview with Dan Romescu


We had the opportunity to interview Dan Romescu, founder of DruidCon and president of the Augmented Reality Foundation. Dan will speak at the DevTalks conference, on the 11th of June, in Bucharest.

What is your opinion on Google Glasses from the perspective of private life? In one of your recent presentations you mentioned an LED that should be included in order to let other people know that we are recording.

[Dan Romescu] Google Glasses are, at the moment, highly intrusive in everyday life. At the beginning, people are curious, but once they realize that their private life can be trespassed upon, they become reluctant. I personally consider that very clear rules and social norms should be defined. For instance, you should lift them up by 30%, just like you would do with your sunglasses, when you want your interlocutor to be able to see your eyes. This gesture makes your interlocutor confide in you and helps you develop open conversations.

From the point of view of integrating Google Glasses in an already existing system, what aspects should be taken into consideration?

In vertical markets there already are solutions in logistics, the health domain, sport management, etc. For the mass market, there should be more devices, people should be educated with respect to their proper usage, and there should be more use cases and content for customers.

What do you think about the evolution of the Internet of Things and what is your perspective on its future evolution?

I would call it the Web of Things; the Internet exists and it is inherent. The explosion in so many domains, from home automation to health and the Quantified Self, shows us that we can no longer talk about the future, but about today's reality.






What is your vision regarding social networks and augmented reality?

The social networks will become vertical; they will diversify after today's heavy centralization under the FB umbrella. I see AR as a tool created for a more accessible interface in the human-machine relation. If we recall, at the beginning there were only switches in the computer interface. Then we moved on to punched cards – my first program, in '82, was on punched cards. Then the command line, the graphical interface with a mouse, the touchpad. Then the generation of touchscreen displays, along with gesture recognition. I think we are not far from controlling the machine with the power of thought – see the Emotiv interface, which I consider the most evolved.

Augmented reality is not really present in day-to-day life, maybe except for the auto domain. How do you see the evolution of this area?

I can argue against that. Augmented reality exists in everyday life; it's just that it is in the audio domain. The Walkman was the first audio AR device. Dolby surround raised the level of audio AR. I see the context part as very necessary in order to make the content relevant. I will give just one example in the end: 4 iBeacons arranged in a pyramid can give the position of the video sensor and can create new experiences. The perspective we get an experience from can be sampled and reshaped. The rest is analytical geometry…

Ovidiu Măţan
Editor-in-chief Today Software Magazine



IT communities


The summertime holidays have begun, and this is also reflected in the number of IT events. However, we still have the monthly events, such as the release of the magazine or Mobile Monday, which will go on. Furthermore, if last month we went to Brasov for one of the magazine's releases, this month we will go to Iasi for Entrepreneurship and Innovation, organized by Gemini Solutions Foundry.


Transylvania Java User Group
Community dedicated to Java technology
Since: 15.05.2008 / Members: 577 / Events: 45

TSM community
Community built around Today Software Magazine
Since: 06.02.2012 / Members: 1545 / Events: 20

Cluj.rb
Community dedicated to Ruby technology
Since: 25.08.2010 / Members: 178 / Events: 40

The Cluj Napoca Agile Software Meetup Group
Community dedicated to the Agile methodology
Since: 04.10.2010 / Members: 428 / Events: 63

Cluj Semantic WEB Meetup
Community dedicated to semantic technology
Since: 08.05.2010 / Members: 183 / Events: 27

Romanian Association for Better Software
Community dedicated to senior IT people
Since: 10.02.2011 / Members: 235 / Events: 14

Testing camp
Project which wants to bring together as many testers and QA people as possible
Since: 15.01.2012 / Members: 314 / Events: 30

• 11 June (Cluj) Launch of issue 24/June of Today Software Magazine
• 12 June (Iași) Entrepreneurship & Innovation
• 12 June (Cluj) Python and Ember.js
• 15 June (București) Workshop Arduino
• 16 June (Brașov) Innovation Monday Brasov
• 17 June (Cluj) Cluj-Napoca connexions - Office 365 for NGOs
• 19 June (Cluj) Monthly BA Community Meetup #8
• 24 June (București) SEETEST
• 30 June (Cluj) Mobile Monday Cluj #9



Java 8 – collections processing


Java 8 is beautiful. Yes, I class it as feminine even before reaching the magic number of 8. In this article I will dare to analyze to what extent this beauty is form versus substance. I admit that Java has constantly left its fans dissatisfied when competing with other programming languages.

Ovidiu Simionica Team Lead @ Fortech

Lambda expressions? Yes, C# has had lambda expressions since version 3.0, launched in 2007; Java needed 7 additional years to offer them. Functional programming came really late. Generics? An illusion of template meta-programming, which failed through type erasure… Am I the only one who expected something else? I feel like a Nokia fan who dreams of high resolution and Android. You can imagine how, like any other Java guy, I kept pressing the refresh button on the Oracle site on release day, waiting to download the latest version. With Java on the surgeon's table for some time now, I ended up sharing impressions on collection processing, an issue that has preoccupied me throughout my career.
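As a small taste of what finally arrived, here is a minimal, self-contained sketch (class and variable names are illustrative, not from the article) of a Java 8 lambda doing what previously required an anonymous Comparator class:

```java
import java.util.Arrays;
import java.util.List;

public class LambdaTaste {
    public static void main(String[] args) {
        List<String> manufacturers = Arrays.asList("VW", "BMW", "Audi");
        // Java 8: a lambda expression where an anonymous
        // Comparator<String> class used to be necessary.
        manufacturers.sort((a, b) -> a.compareTo(b));
        System.out.println(manufacturers); // [Audi, BMW, VW]
    }
}
```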

What is data collection processing (also known as filtering)? For those familiar with SQL, filtering is a basic operation on a data set that looks like:

SELECT * FROM Cars WHERE Cars.manufacturer = 'VW';



What can we do if the collection is already stored in the program's memory and we want to process it according to the criteria from the previous example? Until Java 8, the developer had to write:

ArrayList<Car> filteredCars = new ArrayList<>();
for (Car c : allCars) {
    if ("VW".equals(c.getManufacturer())) {
        filteredCars.add(c);
    }
}

Those at the architect level could also write:

interface Predicate<T> {
    boolean apply(T t);
}

class Filter {
    public static <T> Collection<T> filter(final Collection<T> items, Predicate<T> pred) {
        Collection<T> result = new ArrayList<>();
        for (T item : items) {
            if (pred.apply(item)) {
                result.add(item);
            }
        }
        return result;
    }
}

Then we had to call:

Collection<Car> filtered = Filter.filter(allCars, new Predicate<Car>() {
    @Override
    public boolean apply(Car c) {
        return "VW".equals(c.getManufacturer());
    }
});

In Java 8 we apply:

Collection<Car> filtered = allCars.stream()
    .filter(c -> "VW".equals(c.getManufacturer()))
    .collect(Collectors.toList());

And this is how Java 8 offers "high resolution". Since we live in the era of "Big Data", real-world applications must be scalable and effective, so it is no longer enough to process millions of records sequentially (if this has ever been enough). Parallel processing and optimal use of processor cores is now everyday business.

Java 8 comes to the rescue, enabling us to write:

Collection<Car> filtered = allCars.stream().parallel()
    .filter(c -> "VW".equals(c.getManufacturer()))
    .collect(Collectors.toList());

Just through this simple call of the "parallel()" method, I make sure the stream library will perform its magic and divide the stream into tiny pieces that will get processed in parallel.

Job done. Or not? I like to read the Java docs, a habit that I consider worth cultivating, since it can save us from many disasters. Over time I have also cultivated a sort of "sense of danger" for big words like "parallel"; the magic of the result follows shortly after. What execution order and threads does this method use? Does it use enough of them? How does it decide how many to use, and what happens if the "parallel()" method is itself called in parallel? According to the documentation: "Arranges to asynchronously execute this task in the pool the current task is running in, if applicable, or using the ForkJoinPool.commonPool() if not inForkJoinPool()." Consequently, by default we only use one pool, regardless of how many threads call the "parallel()" method throughout the entire application, since the stream library is built on the Fork/Join library.

Furthermore, the thread that submits the parallel processing job is itself used as a worker. The threads of one pool are thus mixed with a thread that has another purpose. If exactly that thread happens to pick up a chunk of work that lasts longer than expected, we run the risk of blocking the processing in the pool, simply because of the Fork/Join concept's design (the calling thread functions as a worker, and the other threads cannot deliver results while they wait for the parent thread to finish its job). We have a problem! We are dealing here with the "paraquential" phenomenon: "[a portmanteau word derived by combining parallel with sequential] The illusion of parallelization. Processing starts out in parallel mode by engaging multiple threads but quickly reverts to sequential computing by restricting further thread commitment." (Edward Harned, 2014). The solution proposed by Oracle is the explicit use of a ForkJoinPool controlled by the developer, like the following:

ForkJoinPool forkJoinPool = new ForkJoinPool(20);
forkJoinPool.submit(() -> allCars.stream().parallel()
    .filter(c -> "VW".equals(c.getManufacturer()))
    .collect(Collectors.toList())).get();

Thus, all the tasks generated by the parallel processing remain in the specified pool. A positive effect is that we can apply a timeout to the get() method, something that is usually desired in a real-world application. This, in turn, takes us back to the fundamental problem: the management of pools is once again the developer's responsibility! And the difficulties keep piling up when other actors come into play, such as the complex situations caused by the multi-threaded environment and the careful tuning of the configuration to the hardware architecture (e.g. the processor). Once again Java gives us half measures, when we were expecting a self-managed (or at least easily managed) thread container. While, for many of us, the default behavior of the streaming framework is and will be more than enough, I will keep the parallel method on my "dangerous code" list in programming and code review activities. In this brief analysis I have only touched the tip of the iceberg. I invite those of you who are curious to learn about other pitfalls of the Fork/Join library to follow with interest a dude with over 30 years of experience, who delves into these issues quite well in his "A Java Parallel Calamity" post.
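To make the dedicated-pool approach concrete, here is a minimal, runnable sketch (class and method names are mine, not from the article) of a parallel stream submitted to an explicitly created ForkJoinPool rather than the common pool:

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.stream.Collectors;

public class DedicatedPoolDemo {
    // Runs the parallel filter inside a caller-supplied pool, so a slow
    // task cannot starve unrelated work running in the common pool.
    static List<String> filterVw(List<String> manufacturers, ForkJoinPool pool) throws Exception {
        try {
            return pool.submit(() -> manufacturers.stream().parallel()
                    .filter("VW"::equals)
                    .collect(Collectors.toList()))
                    .get(); // a timed get(timeout, unit) would add the timeout discussed above
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(filterVw(Arrays.asList("VW", "BMW", "VW", "Audi"),
                new ForkJoinPool(4))); // [VW, VW]
    }
}
```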





Microsoft Business Intelligence Tools


With the coming of the Information Age, we found solutions to previous problems, but also new problems waiting to be solved. Information abounds in digital environments, but it is not always easy to obtain a view of it that allows us to take important decisions based on it.

Cristian Pup Software Developer @ Yardi Romania



For this purpose, Business Intelligence tools promise to provide solutions for transforming the raw data that we have into information. Business intelligence (BI) represents a set of theories, methodologies, architectures, and technologies that transform raw data into meaningful and useful information for business purposes. The purpose of BI is to handle enormous amounts of unstructured data in order to identify, develop and create new opportunities. It is said that images describe something better than words do: the human mind understands a graphical explanation much more easily than a theoretical one. In order to make decisions, our information needs to be displayed in a proper presentation, in terms of charts, reports, scorecards, etc. Initially, the data warehouse concept was just about keeping historical data. Nowadays, the data warehouse is the foundation of BI. Business Intelligence consists of an increasing number of components, which include the following:
• Multidimensional aggregation and allocation
• Denormalization, tagging and standardization
• Real-time reporting
• Interfaces with unstructured data sources
• Group consolidation, budgeting and rolling forecasts

• Statistical inference and probabilistic simulation
• Performance indicators
• Version control and process management

BI is made up of several related activities, including data mining, online analytical processing, querying and reporting. Examples of companies which use BI tools are restaurant chains. They use BI in order to make strategic decisions, such as: which new products to add to their menus, which dishes to remove, which underperforming stores to close, when to renegotiate contracts with the food suppliers, and how to identify new opportunities to improve inefficient processes.

Microsoft BI Tools

Among all the software vendors across the world, Microsoft seems to be the only one entitled to claim that it delivers "Business Intelligence for everyone". This is how Microsoft advertises its Business Intelligence platform. Microsoft used all the experience it gained delivering operating systems and office suites for end users when designing the Business Intelligence platform. The Microsoft BI platform consists of highly usable tools operated from a truly intuitive interface. The platform is composed of three solutions: Microsoft SQL Server, Microsoft SharePoint, and Microsoft Office. The main strength of the Microsoft Business Intelligence platform is that almost everyone knows how to use it, knows its components, and the interface is not something new for users. The Microsoft Business Intelligence platform supports preparing reports and analyses and sharing them with other users across the organization. What is truly important for today's companies is the time needed for data to spread through the whole organization: the shorter, the better, and the Microsoft solution is very good from this point of view.

The Microsoft BI platform can be used for:
• Preparing dashboards. Dashboards prepared within the Microsoft solution are fully interactive; therefore, one can quickly dive into each area of data to look at it from another point of view and find out something more.
• Stimulating collaboration. Efficient collaboration is very important to all modern organizations, and the Microsoft Business Intelligence platform supports it with well-developed communication capabilities. It allows all users across the organization to work on the same version of the data, boosting their performance.
• Self-sufficient reporting and analysis. The wide reporting and analytical capabilities of the Microsoft Business Intelligence platform allow users to prepare and modify reports and analyses. The working environment is secure and intuitive, so no specialists need to keep an eye on the system.
• Data mining and predictive analysis. The platform provides tools which help to automatically analyze data from different points of view and discover trends.
• Data warehousing. The Microsoft Business Intelligence platform provides ETL (Extract, Transform, Load) processes and supports data segregation and warehousing.

PowerPivot and SQL Server Analysis Services (SSAS)

PowerPivot and SSAS are Microsoft's current tools for the BI market. Microsoft does not currently have a single BI product, but it gives people a reason to upgrade to MS Office 2010 or MS Office 2013 and promotes the idea of self-service BI. The Microsoft BI stack is based on multiple tools, like Excel 2013 and SQL Server Analysis Services, and on components like Microsoft PowerPivot. In order to get MS Excel dashboards, many companies hire BI consultants. The back-end API of PowerPivot is available only packaged with SharePoint and SQL Server. This means that enterprise users will need consulting services to integrate all these moving parts.

Business Intelligence Semantic Model (BISM)

In addition to Excel, SSAS, SSRS and PowerPivot, Microsoft published a new roadmap with a new BISM model in Analysis Services that will power Crescent (the upcoming Microsoft data visualization technology), as well as other Microsoft BI front-end experiences such as Excel dashboards, Reporting Services and SharePoint Insights (diagram: the BI Semantic Model - archive/2011/10/17/the-bi-semantic-model-mdx-dax-and-you.aspx).

When a BI developer creates a PowerPivot application, the model embedded inside the workbook is also a BI Semantic Model. When the workbook is published to SharePoint, the model is hosted inside an SSAS server and served up to other applications and services, such as Excel Services, Reporting Services, etc.

Data Integration or ETL (Extract, Transform and Load) Pipeline

An organization might have one or more different types of applications catering to the needs of the organization's functions. When we discuss designing and developing a data warehouse as part of the Business Intelligence system, we also need to define the strategies for data acquisition from all the


source systems and for integrating the data into the data warehouse.

SQL Server Integration Services (SSIS)

Microsoft SQL Server Integration Services (SSIS) is an ETL platform for enterprise-level data integration and data transformation solutions, and a component of the SQL Server platform. SSIS provides the ability to have a consistent and centralized view of data from different source systems and helps to ensure data security through integration, cleansing, profiling and management. SSIS offers a fast and flexible ETL framework and has in-memory transformation capabilities for extremely fast data integration scenarios. SSIS has several built-in components to connect to standard data sources (RDBMS, FTP, Web Services, XML, CSV, Excel, etc.), along with a rich set of transformation components for data integration.

Analysis

Once the data warehouse is created and the data integration components load the data into it, the next step is to create an OLAP multi-dimensional structure. A Microsoft Business Intelligence system can leverage SSAS (SQL Server Analysis Services) for making data available for analytics and reporting. SSAS, as a leading OLAP tool, delivers online analytical processing (OLAP) and data mining functionality for business intelligence applications. SSAS pre-calculates, summarizes and stores the data in a highly compressed form, which eventually makes reporting and predictive analysis extremely fast, and enables interactive exploration of aggregated data from different perspectives.

Data Mining

Data mining models created in SSAS can help to identify rules, patterns and trends in the data. In this way, users can determine why things happen and predict what will happen in the future. SSAS already includes several data mining algorithms as out-of-the-box capabilities. SSAS also lets you define KPIs (Key Performance Indicators) on your SSAS cube, in order to evaluate business performance over time against the set target, as reflected in the cube data. Normally, the front-end reporting is provided with data through SSAS cubes. These cubes aggregate data and, through cache management features, optimize query results. Predefined queries have a quicker response time than summarizing the underlying data sources every time users query them.

Information Delivery

When the SSAS cubes (the multi-dimensional structure) are done and populated with the data from the data warehouse, different reporting tools can be used to analyze the data from different perspectives or dimensions.

SQL Server Reporting Services (SSRS)

SSRS provides a full range of ready-to-use tools and services to help create, deploy and manage reports for organizations. Reporting Services includes programming features that enable you to extend and customize the reporting functionality. You can create tabular reports, as well as different varieties of chart reports, graph reports, maps or geographical reports. In addition, KPI-based scorecard reports can also be created. PowerPivot, Power View, Excel Services and SSRS provide users with the ability to define and execute ad-hoc reports from a standard data model. By using the SSAS cube, users have the ability to analyze reports in a flexible manner, based on the exposed measures and dimensions.

Collaboration and hosting platform

SharePoint 2010 or 2013 can be used as a collaboration, hosting and sharing platform; the reports and dashboards will all be deployed or hosted on a SharePoint portal. SharePoint is one of the leading products for enterprise content management, providing collaboration, social networking, enterprise search, business intelligence, etc. capabilities out of the box. It is not mandatory to have SharePoint as the reporting user interface, because you can very well have SSRS reports deployed on a report server. The SharePoint hosting platform offers several out-of-the-box features, like PerformancePoint Services for creating nice-looking dashboards, which provide out-of-the-box drill-down, drill paths and decomposition. Further, Excel and PowerPivot services can be used for deploying Excel or PowerPivot workbooks to SharePoint, in order to make them available to other people, turning Personal BI into Organizational BI. SharePoint's built-in security features allow role-based security. SSAS also allows defining role-based security, as well as cell-level security. These security roles or cell-level security govern data access and ensure that the right person has access to the right information at the right time.

Conclusions

The Business Intelligence field is in continuous development. These technologies are at the beginning of a long journey, in a world where the key to success lies in the ability to make better decisions in a shorter time than the competition. Microsoft has taken its step on this path and created a set of products that enable a new vision for this area. I am waiting to see what will happen next; probably the next step will be Artificial Business Intelligence.

References

[1] Microsoft BI within Office 365 - Is-Microsoft-Power-BI-a-Game-Changer
[2] Steps needed to install the AdventureWorks tabular model in SQL Server 2012 - http://www.sqlservercentral.com/blogs/sherry-lis-bi-corner/2014/04/30/dax-2-installing-adventureworks-dw-tabular-model-sql-server-2012/



Adding value when developing. A simple example


Last November, "Fundația Comunitară Oradea" offered a few grants for ideas that can be put into practice within a year, to make the community we live in a better place. I was one of the lucky few whose project was approved, and I wanted to share my way of thinking when developing for the project.

Alin Luncan Software Engineer @ Accesa

The proposed project was all about a process of continuous feedback to and from the local authorities on the state of the city, through a web app specially crafted for this purpose: change through technology. In most fields, continuous feedback is used as an early indicator of progress and as a way to track down and identify mistakes in the process; but in the case of a city, the problem is that once it is big enough, it is nearly impossible to track everyone's opinion in a precise way without major investments in trained individuals. That was exactly the idea behind the project: a way to gather people's opinions and complaints regarding the urban environment they live in, at a small cost.

Adding value to the idea

The first question I needed to answer before starting was: how are things done now? To my surprise, I found out that the city I live in (Oradea) already had a system on the local website where one could address inquiries/requests to the local authorities regarding the city's problems, through a simple web form with about 12-15 input fields to fill in with information such as: name, address, email, description of the issues and/or other personal data. By studying the annual reports of the city, it was clear that, with all that information in place, no one was thinking of the obvious: statistics. Here was the first opportunity to bring value with the proposed project: why not take

Figure 1. Application flow when adding an item.


the input data and create statistical value out of it: a centralized map with all the city's problems, sorted by different social issues, by departments in the city hall or even by neighborhoods. And what better way to map problems than letting users add their issues directly on the map, with a click or a tap? Once the map is created, users should have the possibility to visualize other issues and vote them up or down; that way, problems of importance would be highly voted and could get priority. With the map and voting system in place, the project still needed to create a feedback line between city officials and its users. To achieve this, for each reported issue, the web app will show a star rating system when either party wants to close it (mark it as solved or abandon it), with a small input box, so both the user and the city responder can rate the experience. For the community user base, it would mean a way of ranking the local departments and their responders; for the local administrators, it would offer a way to filter ill-mannered or abusive users; from a statistical point of view, it would mean a way to represent on a chart the satisfaction in dealing with a certain type of problem (environmental, social, law related) as rated


by users; that way, city officials will know what to invest more in.

Wrapping it up

To create an application that is easy to use, with a self-explanatory UI and available on a range of devices without changing the experience from one device to another, the choice was a responsive web app built with ASP.NET MVC, with the view based on Foundation 5. The map provider selected was OpenStreetMap, as the application will also be open sourced after development is complete, so other communities can benefit from it. After discussions with city workers and officials who work with people, we found the following features to be must-haves: e-mail confirmation, CAPTCHA, a profanity filter, a way to report abusive users with a three-strike rule, and support for multiple languages. All this functionality is easily developed using the selected technology, and it can easily scale when deployed. That is it: a simple web app in MVC that brings more statistical value to city officials than they will know what to do with. So, when developing your next project, don't think only of the task at hand, but think in terms of adding value to your application: value that can turn a bunch of input fields into a statistics tool.
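The three-strike rule mentioned above can be captured in a few lines. The sketch below is hypothetical (the project itself uses ASP.NET MVC; this is Java for illustration, and the class and method names are mine): after three confirmed abuse reports, a user is considered blocked.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a "three strike" policy for abusive users.
public class StrikePolicy {
    private static final int MAX_STRIKES = 3;
    private final Map<String, Integer> strikes = new HashMap<>();

    // Records one confirmed abuse report; returns true once the
    // user has reached the strike limit.
    public boolean recordStrike(String userId) {
        int count = strikes.merge(userId, 1, Integer::sum);
        return count >= MAX_STRIKES;
    }

    public boolean isBlocked(String userId) {
        return strikes.getOrDefault(userId, 0) >= MAX_STRIKES;
    }
}
```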




Object modeling of Selenium tests

Usually, a test written in Selenium, Java and TestNG is meant to check the accuracy of the items on a web page, or of a module of a web page. The classical approach to this type of testing relies on a high number of asserts, comparing all the desired properties to their expected values. This approach has plenty of drawbacks, among which we could

Corina Pip Senior QA Engineer @ Betfair

mention the difficult maintenance of the tests, the wasted lines of code and the poor readability. In order to avoid these drawbacks, a different approach is to design the tests around the comparison of objects.

Case study

Suppose the tested page or module is a shopping cart, displayed on a shopping site. When shopping has been done, the cart contains a number of products. Each product, as shown on the page, has: a label with its name, a description, an image, the price per product, the quantity of this product that has been put in the cart, the total price for this product (price per product * quantity) and a button by which the product can be taken out of the cart. Besides the purchased products, the cart also has: its own label, the total price of the products, a link to continue shopping and a button for moving on to the next step, that of payment. The page may also contain other modules, such as a side module with suggestions for further shopping. This would be a minimal module, containing just a few products, for which only a name label, an image and the price would be displayed. The shopping cart would look like the one in the picture. After adding all the desired products, the cart is ready to be tested, in order to find out whether the information it displays is

correct: the products shown are the ones that have been bought, each product is there in the desired quantity, the payment details are correct, etc. Usually, the test created to check all these features of the cart would be a sequence of asserts. Even if someone extracted the assert-based checks into a method separate from the test method (to be reused in several tests afterwards), the respective code lines would still be difficult to maintain and would not be efficiently written. In order to avoid writing these bushy and not too friendly tests, they can be designed with objects, as follows.

The Solution

Analysis

First, one should consider the overall picture. What does the shopping cart page represent? A collection of different types of objects. Following the complex-to-simple


structure, one can notice that the most complex object (the one containing all the rest) is the shopping cart. Its features are: the label, the list of purchased products, a link, a button and a side module. Among these features, from the point of view of the test, the label is represented in Java by a String. The price can also be represented by a String. The list of products is actually a list containing "product" type objects. The displayed link is also an object, and so are the button and the side module. Having identified the highest-level objects, one can describe the "ShoppingCart" object as follows:

public class ShoppingCart {
    private String title;
    private List<Product> productList;
    private String totalPrice;
    private Button paymentButton;
    private Link shopMoreLink;
    private SuggestionsModule suggestionsModule;
}

Continuing the in-depth analysis of the structure of objects, one can notice that a product contains (or has as features): an image (meaning another type of object), a label representing its name (a String), a descriptive text (another String), the price per piece (a String), the quantity (a String), the total price of the product (a String) and a button (an object type that has already been mentioned as a feature of the cart). According to the analysis, the object representation of the product can be done as follows:

public class Product {
    private String productLabel;
    private String productDescription;
    private Image image;
    private String pricePerItem;
    private String quantity;
    private String totalPricePerProduct;
    private Button removeButton;
}

The link mentioned within the cart may include, as a minimal set of properties, a label (a String, the text seen by the user on the display) and the URL which opens by clicking on the link (a String). Its object representation can be done like this:

public class Link {
    private String linkLabel;
    private String linkURL;
}

According to this logic, one can identify all the features of all the objects displayed on the page, and these objects can be built by breaking them down until they are decomposed into features that are plain Java objects or primitives.

Constructing the expected content

After completing the object structuring of the shopping cart, one has to understand how these objects will be used in tests. The first part of the test (or the part done before the test) is represented by adding the products to the cart. The test then only has to check whether the right products are in the cart. In order to build the "shopping cart" object expected by the test (the expected content), one must create a constructor that assigns a value to each of its features. Thus, one passes to the constructor parameters which correspond to the features of the object and have the types of these features (for example, for a String type feature, a String is passed to the constructor; for an int, an int parameter is passed).

Starting from the simplest object, the constructors are created. For the Link:

public Link(String linkLabel, String linkURL) {
    this.linkLabel = linkLabel;
    this.linkURL = linkURL;
}

It can be noticed here that the Link object has a label and a URL, both of the String type, whose values are initialized with the values received as parameters of the constructor.

For the product, the following constructor is generated:

public Product(String productLabel, String productDescription, Image image,
        String pricePerItem, String quantity, String totalPricePerProduct,
        Button removeButton) {
    this.productLabel = productLabel;
    this.productDescription = productDescription;
    this.image = image;
    this.pricePerItem = pricePerItem;
    this.quantity = quantity;
    this.totalPricePerProduct = totalPricePerProduct;
    this.removeButton = removeButton;
}

For the shopping cart, the constructor looks like this:

public ShoppingCart(String title, List<Product> productList, String totalPrice,
        Button paymentButton, Link shopMoreLink, SuggestionsModule suggestionsModule) {
    this.title = title;
    this.productList = productList;
    this.totalPrice = totalPrice;
    this.paymentButton = paymentButton;
    this.shopMoreLink = shopMoreLink;
    this.suggestionsModule = suggestionsModule;
}
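The Button type is used by these constructors but is not declared anywhere in this excerpt. By analogy with Link, a minimal hypothetical sketch (the field names are assumptions, not from the article) could be:

```java
// Hypothetical sketch: a Button object, by analogy with Link, holding
// the label displayed to the user and the URL the button targets.
public class Button {
    private String buttonLabel;
    private String buttonURL;

    public Button(String buttonLabel, String buttonURL) {
        this.buttonLabel = buttonLabel;
        this.buttonURL = buttonURL;
    }

    public String getButtonLabel() {
        return buttonLabel;
    }
}
```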

Based on the constructors, the following objects will be generated (those that will serve as "expected"), by passing to each constructor values having the types of the parameters they will be assigned to:

• the link to continue shopping:

public static final Link CONTINUE_SHOPPING_LINK = new Link("Continue shopping", "");

• the button for moving on to the payment step:

public static final Button GO_TO_PAYMENT_BUTTON = new Button("Proceed to payment", "");

• product 1 of the cart:

public static final Product LATTE_MACHIATTO_2 = new Product("Latte Machiato",
    "Classic latte machiato with a dash of cocoa on top",
    Image.IMAGE_LATTE_MACHIATO, "5 RON", "2", "10 RON", Button.REMOVE_PRODUCT);

(here, the image and button type objects were constructed by using the constructors specific to those objects)

• product 2 of the cart:

public static final Product CHOCO_FRAPPE_3 = new Product("Choco-whip Frappe",
    "Frappe with a twist of whipped cream and chocolate syrup",
    Image.IMAGE_CHOCO_FRAPPE, "5 RON", "3", "15 RON", Button.REMOVE_PRODUCT);

(here too, the image and button type objects were constructed by using the constructors specific to those objects)

• product 3 of the cart:

public static final Product CARAMEL_MOCCACHINO_1 = new Product("Caramel Moccachino",
    "Your favourite moccachino with a refreshing taste of caramel",
    Image.IMAGE_CARAMEL_MOCCACHINO, "5 RON", "1", "5 RON", Button.REMOVE_PRODUCT);

(again, the image and button type objects were constructed by using the constructors specific to those objects)

• the shopping cart:

public static final ShoppingCart SHOPPING_CART = new ShoppingCart("My Shopping Cart",
    ImmutableList.of(Product.LATTE_MACHIATTO_2, Product.CHOCO_FRAPPE_3,
        Product.CARAMEL_MOCCACHINO_1),
    "30 RON", Button.GO_TO_PAYMENT_BUTTON, Link.CONTINUE_SHOPPING_LINK,
    SuggestionsModule.SUGGESTIONS_MODULE);

(here, the "suggestions module" object was constructed by using its specific constructor, and the list of products, the button and the link were defined above the line where the "shopping cart" object is constructed)

Constructing the actual content

To construct the actual object, namely to read the features of the objects directly from the page where they are displayed, a new constructor will be generated in each object, which takes as parameters either a WebElement or a list of WebElements, as many as necessary to generate the features of the object. WebElements represent the description of HTML elements in the Selenium-specific format. As an example, for the link object: the two features, the associated label and URL, can be deduced from one single WebElement. A link type element is represented in HTML as an <a> tag having a 'href' attribute (by the extraction of which the URL is identified). By calling the getText() method of the Selenium library directly on the 'a' element, the value of the label is obtained. Thus, the constructor based on the WebElement is described below; it instantiates the features of the object, extracting them from the corresponding HTML element:

public Link(WebElement element) {
    this.linkLabel = element.getText();
    this.linkURL = element.getAttribute("href");
}

For the construction of the actual object corresponding to a product, depending on the number of WebElements necessary to obtain all the features, we will define a constructor which takes as a parameter either a WebElement or a list of WebElements. Supposing there is just a single element used, the constructor will look like this (for instance):

public Product(WebElement element) {
    this.productLabel = element.findElement(By.cssSelector(someSelectorHere)).getText();
    this.productDescription = element.findElement(By.cssSelector(someOtherSelectorHere)).getText();
    this.image = new Image(element);
    this.pricePerItem = element.findElement(By.cssSelector(anotherSelectorHere)).getText();
    this.quantity = element.findElement(By.cssSelector(yetAnotherSelectorHere)).getText();
    this.totalPricePerProduct = element.findElement(By.cssSelector(aSelectorHere)).getText();
    this.removeButton = new Button(element);
}

It can be noticed that, in the case of the product, in order to generate some features we called the constructors of the corresponding type objects, namely the constructors that also take WebElements as parameters. Basically, any constructor based on WebElements calls only constructors having WebElement type parameters for initializing its features. The remaining features are instantiated according to the parameter given to the constructor. For instance, for the product label, the getText() method of Selenium is called on an element relative to the element passed to the constructor. After the definition of all the constructors based on WebElements, one can also generate the constructor for the most complex of them, the shopping cart:

public ShoppingCart(List<WebElement> webElementList) {
    this.title = webElementList.get(0).getText();
    // the product list is built from its WebElement through a
    // hypothetical helper; each Product uses its own
    // WebElement-based constructor
    this.productList = buildProductList(webElementList.get(1));
    this.totalPrice = webElementList.get(2).getText();
    this.paymentButton = new Button(webElementList.get(3));
    this.shopMoreLink = new Link(webElementList.get(4));
    this.suggestionsModule = new SuggestionsModule(webElementList.get(5));
}

The Test

Following the definition of the objects and constructors, one can move on to the test writing step. The requirement of the test was to compare the shopping cart to an "expected" one, that is, to compare all the features of the shopping cart to those of the expected cart. These features are, in their turn, objects, so their features too should be compared to the features of some expected objects. Since the expected values were constructed by means of the first type of constructor (the one with parameters having the types of the features being instantiated), and there is a constructor for the generation of the actual content (through the interpretation of the features of some WebElements), the test that must be written contains one single assert. It will compare the expected features to the actual ones, by simply comparing the two objects. We should mention that, along with the definition of the objects, one must also implement within each of them the equals() method – the one that verifies whether two objects are equal or not. Thus, the test can be written as follows:

@Test
public void checkMyShoppingCart() {
    assertEquals(new ShoppingCart(theListOfWebElements), SHOPPING_CART, "There was an incorrect value in the shopping cart");
}
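The article does not show the equals() implementation it mentions; a minimal sketch of what it might look like for the Link object (the field names mirror the Link constructors shown earlier, everything else is illustrative) is:

```java
import java.util.Objects;

// Minimal sketch of the value comparison that the single-assert test
// relies on; field names follow the Link constructors shown earlier.
class Link {
    private final String linkLabel;
    private final String linkURL;

    Link(String linkLabel, String linkURL) {
        this.linkLabel = linkLabel;
        this.linkURL = linkURL;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Link)) return false;
        Link other = (Link) o;
        return Objects.equals(linkLabel, other.linkLabel)
                && Objects.equals(linkURL, other.linkURL);
    }

    @Override
    public int hashCode() {
        return Objects.hash(linkLabel, linkURL);
    }
}
```

A composite object such as ShoppingCart would implement equals() the same way, delegating to the equals() of its Product, Button and Link features, so that the single assertEquals compares the whole object tree.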

Of course, this test does not describe the steps necessary for the construction of the shopping cart (surfing on the shopping site and adding the products). These steps can be performed within the test, if necessary, or in '@Before' type methods. If you want, for instance, to check the same content, but in different languages, you can pass a dataProvider to the test, which contains a parameter used in the test to change the language, as well as the expected value of the test in the respective language. In this case, the test will be the following:

@Test(dataProvider = "theDataProvider")
public void checkMyShoppingCart(String theLanguage, ShoppingCart actualShoppingCartValuePerLanguage) {
    changeTheLanguageOnTheSite(theLanguage);
    assertEquals(new ShoppingCart(theListOfWebElements), actualShoppingCartValuePerLanguage, "There was an incorrect value in the shopping cart");
}

The DataProvider used in this test will look like the example below:

@DataProvider
public Object[][] theDataProvider() {
    return new Object[][]{
        { "english", SHOPPING_CART },
        { "german", SHOPPING_CART_GERMAN },
        { "spanish", SHOPPING_CART_SPANISH }
    };
}



Thus, supposing the shopping site is available in 20 languages, the test that checks the accurate display of the translated shopping cart will have a reduced number of code lines and will be written only once, while being run on all the existing languages.


The manner of writing tests by comparing the objects generated from WebElements to those generated from objects and primitives has numerous benefits. Firstly, the test is very short, with a well-defined purpose, doing one thing only: comparing the actual values to the expected ones. The test in itself does not require a lot of maintenance: following changes in the page accessed by users, it is not the test that has to be altered, but the manner in which the compared features are generated. The alteration of the value of a label on the page only requires the alteration of the expected value corresponding to an object. This alteration is made in one single place, but a great number of tests benefit from it. Thus, the testing part is separated from the part of generating the expected values. Another benefit is represented by the compact structure of the test, as it is not necessary to write numerous asserts, nor to pass many parameters to a method that has to check those values. Instead of those numerous parameters, one directly passes the object containing the features to be compared.



The Risk of Not Having Risks


Risk Management is a very widely used term in today's world of software engineering. Almost every project faces threats which can seriously affect its cost, schedule or quality. According to an IBM study, only 40% of projects meet schedule, budget and quality goals. Other analyses conclude that "between 65 and 80% of IT projects fail to meet their objectives, and also run significantly late or cost far more than planned." In order to avoid critical deviations in a project's cost, schedule and quality, it is important for every Project Manager, and not only for them, to acknowledge that threats exist, that things could go wrong, and to be prepared to act, either by reducing the probability of the potential threats or by reducing the loss if they materialize.

What is a risk?

A risk represents a potential problem, an uncertain event that, if it occurs, will have an effect on the project objectives. The effect of the event could be either beneficial or damaging. When approaching Risk Management, which covers both positive and negative effects of an uncertain event, the Project Manager has to differentiate between Threats and Opportunities. In the current article we will focus only on risks that are threatening project objectives (that have a negative impact on the project's success goals). Any risk involves three attributes which need to be quantified:
• Probability for the risk to happen: usually expressed in percentages or by categories: Low, Medium, High
• Impact: the loss that will occur if the risk becomes a reality; can be expressed by categories: Low, Medium, High
• Proximity: expressed in temporal terms (the most probable date the event can occur) or by categories (e.g. imminent, short term, long term), considering the interval from the moment the risk is identified until the moment it is most likely to have an impact on the project's objectives.

Difference between risk and issue
• A risk is an event of the future; if it occurs, it may impact project objectives in a negative manner. The key point is that the risk event has not yet happened and it might not happen.
• An issue is a result of an event that is happening right now or has already happened. An issue has a negative impact on the project. An issue is not a risk, but a risk becomes an issue when we can no longer avoid the impact.

There are 4 types of risks:
• Project risks - the ones that are threatening the project plan; these are mostly linked to potential problems related to budgets, schedules, staffing, resources, requirements and the customer;
• Technical risks - the ones that are threatening the quality and timeliness of the software to be produced; these are mostly linked to potential problems related to design, implementation, interfaces, verification and maintenance. Specification ambiguity, technical uncertainty, technical obsolescence and "leading-edge" technology are risk factors from this category.
• Product risks - the ones that are related to the potential consequences of issues within the product (e.g. security risks). These risks may be related to misuse of the product or errors within the product which affect the outcome through calculation or logic errors.
• Business risks - the ones that are threatening the viability / success of the project; these are mostly linked to potential problems related to the market, the company's strategy, the sales force and management.

If we go further and put all this information together, we dive into the Risk Management practices. Risk Management activities are the Project Manager's accountability and everyone's responsibility, and they should be performed in a proactive way. The objective of Risk Management is to give the Project Manager the necessary knowledge and instruments to be able to face any events that might have an impact on the project's objectives. Many problems that arise in software development were first known as risks by someone on the project staff. Caught in time, risks can be avoided, negated or have their impacts reduced. Proactively managing risks implies determining a strategy that will prevent the risk from becoming a problem or will limit its impact if it does. The strategy to manage risks typically includes reducing the negative effect or probability of the threat, transferring the threat to another party, avoiding the threat or even accepting it.

Risk Management paradigm

During the Project Start-up phase, and then continuously during the entire project lifecycle, risks need to be identified and analysed, and furthermore adequate measures need to be planned, evaluated and implemented where necessary. In carrying out all of these steps the Project Manager is assisted by the appropriate project stakeholders.

Identify

We start identifying risks with the acknowledgement of 3 categories of risks:
• Known risks - can be uncovered after careful evaluation of the project plan, the business and technical environment in which the project is being developed, and other reliable information sources;
• Predictable risks - extrapolated from past project experience (e.g. personnel attrition rate, communication with the customer);
• Unpredictable risks - do occur, but are difficult to identify in advance.

Many of the known and predictable risks may be identified in the start-up phase of the project, some of them even in the bid phase. There are a number of tools and ways to help the Project Manager efficiently perform this activity. A sample risk checklist or a questionnaire whose answers will reveal risks can be used. The Software Engineering Institute (SEI) developed a software risk taxonomy and a taxonomy based questionnaire (TBQ) which provide a basis for identifying, organizing and studying various aspects of software risks.

However, risks exist and emerge throughout the entire lifecycle of a project; that is why risks must be identified continuously, not only in the start-up phase. Risk identification needs to be part of the project's routine. It is every team member's professional and moral responsibility to report risks as soon as they are identified, even if they seem to have a low importance or sound weird. Once identified, the risks should be captured and communicated or shared through a risk log or another tool that allows risk registration.

Analyse

Once a risk has been identified, the Project Manager (or someone acting as risk manager for the project) will assess its probability, impact and proximity. In some cases this step might require an impact analysis which considers all the objectives that might be affected by the possible occurrence of the risk. Ideally, the Project Manager should also determine the Risk value - the cost of the problem if the risk occurs: Risk value = probability x cost to the project if the risk occurs. In the absence of figures, a relative classification is still useful.

The Project Manager has to make sure that responsibility for all important risks is explicitly assigned to a Risk Owner. The risk owner is the person with the best profile to manage the specific risk during its lifecycle in the project; this may be someone who is not directly involved in the project, such as a line manager, sales, infrastructure, etc.
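The "Risk value = probability x cost" rule can be expressed as a tiny helper; the class and field names below are illustrative, not taken from any particular tool:

```java
// Illustrative sketch of the risk value calculation described above:
// risk value = probability of occurrence (0.0 - 1.0) x cost if it occurs.
class ProjectRisk {
    final String name;
    final double probability; // 0.0 .. 1.0
    final double cost;        // loss to the project if the risk materializes

    ProjectRisk(String name, double probability, double cost) {
        this.name = name;
        this.probability = probability;
        this.cost = cost;
    }

    double riskValue() {
        return probability * cost;
    }
}
```

For instance, a risk with a 30% probability of causing a 10,000 EUR loss carries a risk value of 3,000 EUR, which can then be weighed against the cost of mitigating it.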



Plan

Once the probability, impact and proximity for a risk have been assessed, the Project Manager will work with the relevant stakeholders to define and plan suitable measure(s) to manage the risk. The measures a Project Manager can consider when responding to a risk are:
1. Reduce/Mitigate: Reduces the Impact and/or Probability of the Risk occurring. It is the primary strategy, a proactive one that handles the risks in a controlled and effective manner. It is most of the time achieved through a mitigation that also includes a contingency plan.
2. Avoid: Eliminates the risk completely. This means not performing an activity that could carry risk. Avoiding risks also means losing the potential gain that would have been obtained if the risk had been accepted.
3. Transfer: Transfers or shares the impact of the risk with a third party (e.g. an insurance company).
4. Fall-back, also known as contingency: Provides an alternative for the situation when the risk materializes.
5. Accept: No response will be implemented.

When dealing with a set of related risks the Project Manager should consider the consequences of any action: eliminating or reducing a risk may introduce others. The measures defined to respond to a risk bring additional costs to the project. Mitigations are effective when the gain obtained by mitigating the risk exceeds the potential loss that would be recorded if the risk were to materialize. When making this calculation, it is a good principle to consider the remaining risk, after mitigation. Otherwise stated, Risk Management is about deciding how much time, money and effort you want to spend on solving a problem which might never occur. A wise practice in project management is to always consider a risk budget added to the project's original budget.

Track

The risk must be monitored and reviewed regularly to ascertain the validity of the planned response and, if necessary, to plan new measure(s). There are hundreds of risks which can be identified during a project lifecycle. It is impossible to track all of them on a regular basis. That is why the recommendation is that only the top 10 risks (considering the priorities established) are monitored regularly. An Impact/Probability matrix can help the Project Manager decide which risks need his attention.

To successfully implement a project, the Project Manager must identify and focus his attention on moderate and extreme risks. The risks towards the top right corner of the matrix (high impact/high probability) are of critical importance and the ones that the Project Manager must pay close attention to. It can also be the case that at some moment a moderate risk needs more attention because it is likely to occur soon. Low-probability / low-impact risks can often be ignored.
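The "monitor only the top 10" recommendation could be sketched as below; the names are hypothetical and the ranking key simply multiplies probability by impact, in the spirit of the Impact/Probability matrix:

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

// Illustrative sketch: rank risks by probability x impact and keep only
// the top N on the regular monitoring list.
class RiskTracker {
    static class TrackedRisk {
        final String name;
        final double probability;
        final double impact;

        TrackedRisk(String name, double probability, double impact) {
            this.name = name;
            this.probability = probability;
            this.impact = impact;
        }
    }

    static List<String> topN(List<TrackedRisk> risks, int n) {
        return risks.stream()
                .sorted(Comparator.comparingDouble(
                        (TrackedRisk r) -> r.probability * r.impact).reversed())
                .limit(n)
                .map(r -> r.name)
                .collect(Collectors.toList());
    }
}
```

A real risk log would also carry proximity and the risk owner, but the ranking idea stays the same.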

Correct

Because the project context can change, already planned responses for a risk might no longer be suitable; therefore, corrections need to be made to the risk mitigation plans, and sometimes the risk even needs to be re-analysed.

The risk of not having risks

Identifying and managing risks is the very essence of business survival and growth. The practices of risk management are still not formalized in most Romanian IT companies, and this becomes a risk in itself. Although Project Managers identify risks based on their former experience, lessons learnt or just a gut feeling, the risks are rarely properly managed or monitored. And when things can go wrong, they will go wrong (if no action is taken). How many times have you heard the expression "I knew this would happen" after something bad has happened? And this is just because a person in the team knew that something could go wrong but did not act to prevent it. Sharing is caring and sharing is acting. It is our own responsibility to share risks with the person who can do something about them. Each one of us has their own perspective, knowledge and understanding of what is going on, and sees things that others don't or can't. It is said that, on average, each of us foresees about 5 risks per day, but usually forgets them soon after. Considering your company's context and the aspects presented in this article, do you foresee any risk of not having managed risks? Or, otherwise stated: "Given that there is no Risk Management process in place, is there a risk of losing business? Or failing projects? Or losing people?" It is a challenge for each of us to identify and react to these risks. Some can raise these risks; others can do something about them. Yet, effective risk management starts from the top, with clarity about risk strategy and governance, and with proper understanding and accountability at the board and executive levels.

At ISDC, we have understood the importance of Risk Management as a process, as a mind-set and as an attitude towards risks, and we are continuously working on eliminating "the risk of not having risks" by:
• Implementing an effective Risk Management process and framework across the organization. This includes the policies, procedures, tools and responsibilities involved in the Risk Management activities;
• Having a system in place, developed in-house, which encourages and supports risk identification, tracking and analysis;
• Encouraging open, non-judgmental, non-attribution communication when identifying risks;
• Including risk management related trainings in the development program for Project Managers, Team Leaders and QA Officers;
• Analysing projects' risks (number of risks, risk categories and the cumulative value of the risks). Thus, we are able to determine the categories that raise most risks, identify and understand the "risks that matter", and we have the possibility to quickly react and close the gaps.

In order to facilitate ISDC's Risk Management process we developed our own Risk Manager tool - a web application that allows risk identification, tracking and analysis. The Probability / Impact matrix is a board presenting the risks and their proximity in a nice, visual way. Risk Manager can be accessed by any interested stakeholder, based on previously granted permissions. The next step is to embed a Business Intelligence and Analytics layer in order to have efficient reporting, thorough analysis of data and dashboards displaying the risks' key indicators.

All these have been and are successful due to our Continuous Improvement Program and to the valuable contribution of Peter Leeson. Peter has been inspiring us through his trainings, CMMI appraisals, consultancy and support. We can conclude that risk management is an essential element of organizational success. The secret of successful projects is closely linked with the ability to take risks, correctly manage them and communicate results. Also, sharing the experience throughout the organization gives future projects a library of best practices to call on.

Ramona Muntean
Measurements & Best Practices Specialist @ ISDC



Product Inception


How fast could you create a software product starting from an idea? This is the topic of this article. We will explain our concept of starting with an idea, finding the core value of the idea and then putting it into the hands of users as fast as possible.

Alexandru Bolboaca
Agile Coach and Trainer, with a focus on technical practices @Mozaic Works

Your idea

In order to start, one needs to have a clear idea of what the product would be. Create a simple bullet-point list of the concept, the core values of the product and how it could be monetized. But then how could you understand whether it is useful or not?

Feedback, the key ingredient

We all have ideas, but what if we had a system to put them into practice? The main tool for understanding whether your idea can be monetized is getting feedback from others. First of all, your friends and family. Then you can ask around at public gatherings and in communities. Wherever you go, pitch your idea to people and write down their likes, concerns and improvement ideas.

Adrian Bolboaca
Programmer. Organizational and Technical Trainer and Coach @Mozaic Works

Your structured idea

Take each item from each person you interacted with and think well whether it could be used to improve your product. You need to be very open when filtering these ideas. This product may be your idea, but you are not the sole user. After this process you will have a structured idea that you can continue with for the next steps.



The Minimum Viable Product

The concept of the Minimum Viable Product (MVP) comes from the world of Lean Startup. It describes the minimum necessary part of the core business idea that can be put on the market and into the hands of real users. It needs to be useful, functional and, ideally, disruptive.

Personas

Before delving into the intricacies of how the product works, you need to start from understanding the users and their needs. For that we usually recommend creating personas. A persona is a fictional character that has all the characteristics of a real person: name, age, sex, interests, occupation, and so on.

Moreover, the persona will have as many details as needed to understand how it will interact with the new product.

Themes

From your initial idea, spiced with the suggestions from your friends, family and other acquaintances, you can get an idea of how the main functionalities should look. These main functionalities are called themes. There should be just a couple of them. The themes should be independent from one another. A theme is like a main functionality of the application. [ blog/stories-epics-and-themes]

Epics

When you understand what the basic directions of development of your application are, you can start splitting these themes into smaller batches. The main purpose of splitting them is that we want to have fast feedback from real users. The smaller the functionality, the faster we can validate it with real users. In this way we can learn how the users really feel about the application. An epic should be delivered to the real user in 1-2 weeks.

User Stories

These epics are again split into several user stories. A user story should be finished, meaning that it could be added to the production environment, in 1-3 days. Usually a user story and an epic are written in the following format: "As a <persona> I would like to <user's need> so that <some reason/business value>". For example: "As Charlotte, I would like to be able to easily send my meal preferences to my friends so that they could be able to cook dinner for us before I arrive at their place". Basically Charlotte needs a way to send what she wants to eat in that moment, and knows that her friends are happy to cook anything. This would be an extremely useful application for foodies around the world.

Prioritize the business value

After all the themes were split into epics and all the epics were split into user stories, we need to find a way to prioritize these functionalities. So we need to look at each of the user stories and understand how fast and efficiently we could monetize the value represented by the "so that <...>" part of it. For doing that, we need to create a scale for estimating business value. We basically need to find a way to create a hierarchy of user stories, with respect to the business value each can bring to the user. Also, we need to take into account how easily this value is monetized by the business.

User Story Mapping

(Image from wiki/File:User_Story_Map_in_Action.png)

What we would like to do is to map these user stories to the epics and themes. The top part of the picture above is the generic understanding of the product, presented as themes and/or epics. The bottom part is the user stories which add details to each theme and epic. This technique is extremely useful to add context for each user story. Each person involved in producing this software will always understand the whys and hows of each user story.

A good option would be to pick a couple of user stories from each of the epics so that you can have faster feedback on each of your ideas. So basically on the story map you would draw a line, and everything that is above the line would be developed during the next period (1-2 weeks maybe), and the rest would wait a bit more. The user stories you would want to pick are those that have the highest business value. Remember that each of the user stories is user focused, because you want to understand how the users would use this product and you want to improve the user's experience as much as you can1.

Do not estimate effort

You do not need to estimate effort. Just start doing the work and record how much each user story takes you and your team. After a couple of weeks or a month you have data to understand your development speed for this specific product. Use this statistical data to make a forecast of what you can really deliver during the next 3 months. Estimating effort up-front is waste, to say the least, in this stage of the product development2.

Identify the financial effort you are willing to take

You need to balance how much it costs to produce this first increment against your budget. You want to deliver the first increment as fast as you can because you might be able to get revenue very fast, if your idea is good.

Deliver the first increment

As fast as you can, you need to deliver. This means days or weeks. Do not wait to make things nice and perfect. Just deliver! Get feedback! If you developed less, it is easier to modify. You want to have the simplest increment possible delivered.

Get feedback on the first increment

With this increment you can ask your friends, family and acquaintances. You can also go to user groups and investors and show them your idea. You have something to show. This shows people that you really are committed to improving this product, and it helps a lot in getting valuable improvement feedback for your product.

Pivot

You want to stick to the idea "fail fast, fail cheap" from Lean Startup. [http://] If the first increment does not get good feedback, you can either improve it or just stop. Pivoting means changing the development path towards requirements that the users really want, need and would pay for. You might need to pivot several times until you find the good path.

Deliver the MVP

Once you understood where you need to go with the first product increment, deliver it to real users. Put it on the market as fast as you can and get feedback from the world. Think big, think globally. Try to market the MVP everywhere you go. Invest into presenting your MVP and keep your system open to any improvement feedback. This feedback from real users is so valuable that it might define the life of this product: a success or a failure.

The money

Monetize your effort

If your first MVP starts being used, it means you can start earning money. Probably not a lot, but it is enough. It is a sign that your product brings real value to real users. It means you are probably on a good path. Starting to earn money means that you can reinvest it in further development. It also means that the development team gets a boost in morale: the product they are building is useful. On the other hand, if you cannot monetize the effort, you might want to continue marketing it and adding features, or you might want to pivot to some other direction from where you are.

1 David Hussman explaining User Story Mapping: and presentations/story-mapping
2 Vasco Duarte on #NoEstimates: com/watch?v=0SD7Qk4ffPw
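The "do not estimate effort" approach described earlier — record how long finished stories actually took and forecast from that — can be sketched roughly as follows; the method names and the day-based unit are illustrative assumptions:

```java
// Illustrative sketch: derive throughput from recorded story completions
// and forecast how many stories fit in a future period, instead of
// estimating effort up-front.
class ThroughputForecast {
    // average number of stories completed per working day so far
    static double storiesPerDay(int storiesDone, int elapsedDays) {
        return (double) storiesDone / elapsedDays;
    }

    // forecast of how many stories fit in the next `futureDays` working days
    static int forecast(int storiesDone, int elapsedDays, int futureDays) {
        return (int) Math.floor(storiesPerDay(storiesDone, elapsedDays) * futureDays);
    }
}
```

With 10 stories finished in the first 20 working days, roughly 30 stories would fit in the next 60 working days (about 3 months).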

Get feedback on the MVP

Delivering the MVP does not mean you no longer need to improve it. You always need to get feedback from users and to understand whether you cannot bring even more value to your real users.

Pivot

If you fail fast, you can understand that you need to change the direction in which the product is heading. Pivoting is always a challenge, and you get used to it by doing it more and more often. Pivoting does not mean one failed; it means that one learned about the real needs of the users. And this is the best thing you can do: understand the users and their needs as fast as possible. During this process of understanding the users you might get a pretty good idea of the direction in which you need to pivot.

The product

As we described in this article, this is a way of developing an idea into a product that can be monetized. It might seem simple, but it is not. You need to be open to change and to understand failure not as a bad thing, but as a real-life fact from which one can learn a lot. This is how you can start product inception. We would be happy to guide you through this process, so we will be happy to answer any questions or remarks.


The Product Owner story


The purpose of this article is to shed some light on the role and responsibilities of the product owner, as observed and implemented by the author in the Cluj software environment. The topic is broad and cannot be fully covered in a single article, but I would like to provide one of the many points of view from which the subject can be tackled inside each team and company.

Bogdan Giurgiu
Group Product Owner @ Endava

PO Role

First, let's set the stage and clarify one thing from the beginning: the Product Owner role is associated with the Scrum methodology and should exist in a healthy Agile environment. It is true that each company is Agile in its own way, and that is not a bad thing. Diversity is good as long as we stay true to the base principle of "Inspect and Adapt". While I agree that one can call oneself Agile as long as one respects the core principle, with Scrum it is a different story. One either does Scrum, or does ScrumBut. It is important to clarify the direct connection between Scrum and the Product Owner role. In other methodologies the PO responsibilities are usually split between other team members such as (but not limited to): Project Manager, Product Manager, Architect etc. I would like to start by quoting Mike Cohn, one of the founders of the Scrum Alliance:

"The product owner is commonly a lead user of the system or someone from marketing, product management or anyone with a solid understanding of users, the market place, the competition and of future trends for the domain or type of system being developed."

As stated above, there is a wide range of potential candidates for this role, so I would like to take this statement and slice it a bit. "The product owner is commonly a lead user of the system"… let's stop here. I have worked in a similar scenario, where a lead user of the system took ownership of the product. This lead user had the vision and set the direction. But as the user came from a domain unrelated to the IT industry and Scrum, they needed additional help in order to translate the vision into a Product Backlog. This help can be provided in different forms. In one scenario, provided that the lead user has the necessary bandwidth and the desire to take on additional responsibilities, the lead user can become a fully fledged Product Owner (as described in the Scrum methodology), with proper training. I haven't seen this happen too often, as in most cases the lead user doesn't have the bandwidth or the desire to move into this position. Therefore, the second (and most common) scenario is to split the responsibilities that come with this role in two: the lead user owns the vision and the direction, while a person inside the Scrum Team, a Proxy Product Owner, creates and owns the Product Backlog. This setup is portrayed in Figure A below, and it is one of the setups I worked in, playing the role of the Proxy Product Owner. From my experience, I noticed that the lead user who plays the role of the "Product Owner" as illustrated below doesn't have ownership of the budget to execute on the vision, nor is she/he interested in the Return on Investment (ROI) for the product. From the perspective of someone who doesn't directly invest money in the product but is directly impacted by the new functionality, the Return on Investment is always high. In this scenario, another player has to come into the arena and claim the main responsibility of managing the budget. Depending on the company and its culture, this person can be a Project Manager or an Engineering Manager.

Figure A: Customer or the Lead User as Product Owner

The above scenario happens with a "twist" when a Customer takes the role of the Product Owner. Similar to the above situation, in most cases this Product Owner will work with a Proxy Product Owner, who is responsible for the Product Backlog creation. The "twist" lies in the budget ownership, as this Product Owner will be genuinely interested in the ROI, mainly because the money invested in the product comes out of his/her own pockets.

Let's move on to further analyze the statement "The product owner is… someone from marketing, product management…". It is my strong belief that the Marketing Department of a company should have one of the most creative teams. They need to be on the forefront of the business and they need to have that "strong understanding of the marketplace and the competition". In the environment I worked in, Marketing had a close collaboration with the Product team. The ideas generated inside the creative Marketing department were usually digested and transformed inside the Product Team. This is another common scenario that I have encountered, where someone from the Product Team, in most cases a Product Manager, owns the product. In this setup, the person who plays the role of the Product Manager must have the necessary knowledge and the bandwidth to play the Product Owner role. This is indeed the perfect scenario, in which the Product Owner owns the vision and creates the backlog, while also having the budget to execute the vision. This setup is represented in Figure B below, and I have created products from this role as well. It is an engaging role that comes with great power, as you are directly responsible for the ROI of each feature, and I had to balance the functionality delivered with the budget invested. In this setup we can clearly state that the owner is committed on all levels to the product and the team.


Figure B: Product Team representative as Product Owner

So far we have analyzed different situations in which the Product Owner might find himself, as well as a wide range of persons that can fill this role. Now that we have identified some of the responsibilities, we shall dig deeper into this area. I would like to emphasize the ones presented below, although the spectrum of responsibilities can (and usually does) include other activities.

PO Responsibilities

The Product Owner needs to represent and manage the stakeholders' interest. This collaboration is critical for the success of the product. The communication between the stakeholders, the PO and the team is continuous and has the clear objective of keeping everyone committed to the product and providing constant feedback at all levels. Another important responsibility is to create and continuously groom the Product Backlog. In this activity, the Product Owner should seek help from the Scrum Team. Once the vision has been communicated to the Team and some high-level features have been identified, the Team can help crystallize the more detailed Product Backlog. I personally had this happen at the beginning of the journey, during a Sprint Zero, and continuously during the life of the product, during the Release Planning sessions. During these sessions, the fine details are identified in the form of User Stories that go through a prioritization process, conducted again by the Product Owner. I would like to emphasize one important aspect: there has to be a symbiotic relationship between the PO and the Team. One cannot exist without the other, and through this symbiosis both parties should thrive. The environment is a key factor for this symbiosis to exist. What should the environment provide in order to encourage this relationship? I personally believe that a healthy environment needs to nurture creativity and innovation more than anything else. The environment in which people are empowered and encouraged to come up with ideas and solutions is the environment where great things will happen. The Product Owner has the responsibility to create and groom this environment as well, as his product will be directly affected by having a motivated team. When I think of a motivated team, my thoughts go back to the battles fought by our ancestors, where in many cases the smaller army defeated the larger invading army just because of great leaders who knew how to motivate their soldiers. The role of a leader in a team can be played by multiple persons, but it is critical for the success of a product to have a strong leader in the person of the Product Owner. But how can one motivate a Team? I personally believe the best answer is: through passion. A Product Owner has to be passionate about the domain and the line of business he/she is trying to cover. If you do something out of passion, it will feel natural for you to stay on top of the game and stay "plugged" into the marketplace, the competition and all the news that relates to the domain. If one carries out one's tasks passionately, it won't be a 9-to-6 job; actually, it will not be a job to begin with… it will be something that one does because one loves doing it. If the PO has this passion for the domain, it will be contagious within the Team and it will undoubtedly motivate each and every one of its members. Jack Welch, GE's former chairman and CEO, portrays this very well in the following statement:

"Good business leaders create a vision, articulate the vision, passionately own the vision, and relentlessly drive it to completion."

Another essential skill of a good Product Owner is his/her User Experience (UX) expertise. He/she doesn't have to provide UX solutions from the role of Product Owner, but it would be great if he/she were able to provide a little bit more than a vision and a backlog. The Product Backlog is basically a set of functionalities and, don't get me wrong, they are critical for the success of the product, but the way you "wrap" these functionalities and present them to the user plays a critical role as well. Let's take the classic example of the iPhone and the top competitors from the Android environment. The Android devices can easily match the functionality of an iPhone, in hardware and software, but the main difference today is in the User Experience, and that has become a major selling point for Apple. It has become dramatically important for the Product Owner to be able to work with a UX expert in order to better define the experience layer for the user.

In conclusion, I envision the PO to be a passionate leader with great knowledge about the domain/business and enough knowledge about Scrum to make this work, a combination that might not be that easy to find… To find out more about this role and its responsibilities, I encourage you to join the training course on this topic, which will take place in Cluj on May 15th-16th, provided by Ralph Jochan, Effective Agile. More information on careers.endava.com/ProductOwnerTraining.



Visual Studio Online
Monitoring a web application using Application Insights


Visual Studio Online is a platform developed by Microsoft that offers a collection of services for software development. The available services are:

• Source repository (Team Foundation Version Control and Git)
• Tools for planning and tracking projects (work item tracking, planning, management; support for Agile: Scrum, Kanban)
• Test environment (load testing)
• Continuous integration (build server)

In reality, Visual Studio Online is a Team Foundation Server in the cloud, which brings the specific benefits of cloud applications: it requires no installation or configuration, and maintenance is provided by Microsoft. Users only need to log on to the platform and use it. Visual Studio Online also includes a new application, "Application Insights", dedicated to monitoring and collecting data from applications running in a production environment. The applications that can be monitored are of the following types:
• Web services or web applications
• Web pages that use JavaScript
• Windows 8 applications
• Windows Store applications
Next, we will see how to configure Application Insights and how the collected data is presented for analysis.

Application configuration

To enter "Application Insights", we have to log in to Visual Studio Online and, from the Dashboard, click on the link "Try Application Insights" (Understand and optimize the performance of your application). Next, we will configure a web application. A short wizard (Add application) lets us specify which type of application will be monitored. The user must choose whether the application is .NET or Java, whether or not it runs on Azure, and whether data should be collected from a server component. In the next screen, the user specifies a name for the application and receives a configuration file to be copied to the main folder of the web application. To collect information in real time from the website, it is necessary to install a desktop application, "Microsoft Monitoring Agent", on the computer on which the web application is installed. Also, to collect data about users (operating system, browser, location etc.), some changes to the application are required. For example, to find out how many times each page has been accessed, we have to copy a JavaScript snippet into the page header.

Next, we will look at a concrete example and see how the information is displayed to the user.


The default Dashboard contains three widgets: one for Availability, one for Performance and one for Usage. Using this dashboard, we can easily see whether there are problems in the application. And if we want information about user logins and activities, it is necessary to save the user information in the appInsights JavaScript object after login.

Advanced Configuration

All the information collected through this application can be configured later in the configuration file that was copied to the root of the web application. The configuration file contains two sections, Production and Development, and the properties can be set individually for each profile. The properties that can be set are:





• ServerAnalytics (attribute node): specifies whether or not the service's data collection is enabled
• SendToRawStream: sends data to the Raw Event tile and the Diagnostics/Streaming page
• whether to collect the current user's username
• whether to collect the machine name of the current user
• the time interval at which the collected data is sent (in seconds)
• whether to collect the user's data from HTTP
• whether to collect the IP address from HTTP

Initially, only the information from the last 24 hours is displayed, but a different time interval can easily be set. We can switch to other predefined options: "Last hour", "Last 4 hours", "Last 12 hours", "Last 24 hours", "Last 3 days", "Last 7 days", and through the "Custom" option we can select any desired time interval (e.g. January 1 to January 15).

Application availability

The second tab (Availability) displays a more detailed graph of the application's availability. Here we can define multiple locations from which to check whether the application is available.


There is a version of "Application Insights" integrated with the Visual Studio IDE, which can be installed from "Extensions & Updates" in Visual Studio. After installation, right-clicking on the desired project reveals a new option for adding "Application Insights" to that project. After adding it, a new application is automatically created in Visual Studio Online that will collect data about the project.


"Application Insights" has 5 main sections that display the collected data:
• Overview – a dashboard page where you can define which widgets appear on the home page.
• Availability – displays the application's availability.
• Performance – displays information about the performance of the application: request duration, number of requests, machine information etc.
• Usage – collects information about the application's usage: the most frequently accessed pages, the number of users, location etc.
• Diagnostics – contains metrics gathered from the application (the number of exceptions raised in the application, memory and performance issues detected).

Defining the locations for checking the application's availability

In the example above we defined several areas across the world: Australia, South America (Brazil), Europe (London, Moscow), North America (USA) and Asia (Japan), from which to check whether the application is available. The page also displays how long the request from each area took, so it can be detected whether the application has problems only in a certain area. There is also the option to define alerts. Thus, if the application is not available in one or more areas, you can specify that a message (e-mail) be sent to one or more specified addresses, notifying you of the problem. Such an e-mail is presented below.
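The alerting rule described above can be sketched in a few lines. This is an illustrative model, not the service's actual implementation: the region names, the probe results and the message format are our assumptions.

```python
# Sketch of the alerting rule described above: probe the application from
# several locations and build an e-mail notification when one or more fail.
# Region names and the message format are illustrative assumptions.

def failing_regions(results):
    """Return, sorted, the regions where the availability probe failed."""
    return sorted(region for region, ok in results.items() if not ok)

def alert_message(results, url):
    """Build the notification text, or None if all regions are available."""
    failed = failing_regions(results)
    if not failed:
        return None
    return f"{url} is unavailable from: {', '.join(failed)}"

# Example: probes succeed everywhere except Brazil and Japan.
probes = {"Australia": True, "Brazil": False, "London": True,
          "Moscow": True, "USA": True, "Japan": False}
print(alert_message(probes, "http://example.com"))
# → http://example.com is unavailable from: Brazil, Japan
```

A real setup would also debounce transient failures before e-mailing, but the core decision is exactly this aggregation across locations.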




This section contains several widgets that provide information about the performance of the application. These widgets help identify performance problems encountered in the application.

Page views – the number of times each page of the site was viewed


This page presents the various metrics that help diagnose problems that occurred when running the application.

Performance page

The widgets that appear by default are:
• Response Time and Load vs. Dependencies – displays the response time of the application, the number of requests and the request execution time;
• Response time distribution – displays the distribution of requests by response time (how many requests respond in less than 0.5 seconds, how many within one second, how many take more than 5 seconds);
• Exception Rate – displays the number of exceptions per second;
• CPU – what percentage of the processor is used by the application;
• Network – how much network bandwidth is used;
• Memory in use – how much memory is used by the application;
• Average instance count – how many instances of the application are installed;
• Top 10 slowest requests by issue count – displays the 10 slowest requests. We will analyze the information received from this widget later, on the diagnostics page.
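The bucketing behind a "Response time distribution" widget can be sketched as follows. The bucket edges below follow the article's description (under 0.5s, around one second, over 5 seconds); the intermediate 2-second edge and the function itself are illustrative assumptions, not the widget's code.

```python
# Sketch of a response-time distribution: count how many requests fall
# into each latency bucket. Edges are assumptions based on the article.

BUCKETS = [0.5, 1.0, 2.0, 5.0]  # upper edges, in seconds

def distribution(response_times):
    """Count requests per bucket; the last slot is requests slower than 5s."""
    counts = [0] * (len(BUCKETS) + 1)
    for t in response_times:
        for i, edge in enumerate(BUCKETS):
            if t < edge:
                counts[i] += 1
                break
        else:
            counts[-1] += 1  # slower than the largest edge
    return counts

# 6 requests: two fast, one under a second, one under 2s, one under 5s,
# and one very slow outlier.
print(distribution([0.1, 0.3, 0.7, 1.5, 3.0, 12.0]))  # → [2, 1, 1, 1, 1]
```

Seeing the counts this way makes it obvious whether slowness is an outlier problem (a fat last bucket) or a systemic one (mass shifted toward the higher buckets).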

The default page of this section displays a summary of the main events from the application. In the picture above only three metrics appear: exceptions, performance and memory, but other metrics can easily be chosen from a longer list of predefined metrics provided by Microsoft. In the next section we will analyze the information provided by two of the metrics (exceptions and performance). Exception events displays all the exceptions that occurred during the execution of the application. At first, only the number of exceptions is displayed. If we want to see more information, a double click opens a new screen showing a list of exceptions (description, exception type, computer and date).


This page contains useful information about the users accessing the application. We can see here the most accessed pages in the application, how many users have accessed the site, the browsers used, the screen resolutions used etc. At first glance this information seems more useful to the marketing or management departments, but it may also be useful to developers. It can help determine whether it is necessary to implement new functionality (e.g. support for a specific browser, resolution or language).



If we want more details about an exception, double clicking on the desired error opens a popup displaying all the information about that exception: parameters, stack trace, code line. With this information we can easily identify when the error occurred, reproduce it and then fix it.


Another very useful feature of "Application Insights" is the integration with the application's logger. Through a small wizard we can select which logger the application uses (log4net, NLog or a Trace Listener), and Visual Studio Online then generates a configuration snippet to be added to the application's configuration file.

In this way, the information logged by the application will appear in "Application Insights" in Visual Studio Online.


Perf events displays all the performance issues that occurred in the application. As with the exceptions widget, if we want to investigate a performance issue in detail, we can open the details window, where we can find all the information about the issue (parameters, execution time for each method, including the SQL code that was executed).

Gathering information about an application while it is running is not a simple operation. Until now we could use logging, performance counters etc. However, this requires additional time: performance counters need configuration and analysis, and with logging we sometimes have to search through several megabytes of text files to find the useful information about what happened. "Application Insights" has the advantage of collecting this information very easily (little time is lost on configuration) and presenting it in a very intuitive format that is easy to interpret.

Marius Cotor

With this information we can easily identify where the application has performance issues and start fixing them.

Technical Lead @ 3Pillar Global



The StackExchange Network


If you're reading this, you're most likely a programmer. And, like any programmer, you have had to search for programming questions online. I'm sure you noticed something interesting: in the last few years, when we search for a programming question online, a link to StackOverflow will usually be somewhere among the first 3 results from Google.

This is no coincidence: StackOverflow has, slowly but surely, entered the lives of programmers. We use it practically every day, but I have noticed that most programmers don't know much about how this site was born, what principles it works on and why it's so successful. StackOverflow is just one of the 119 sites of the StackExchange network; the two are not the same thing. In this article, we'll discuss the philosophy on which this network is built and take a quick look at how its mechanics allow it to function basically on its own. I hope the details presented here will make using this network more comfortable, and I also hope that we'll see stronger participation from Romanian programmers.

The network's philosophy

History and motivation

Before StackOverflow, it was very hard to find a (correct) solution to a programming problem, except for the relatively common ones. The reasons for this were:
a. The people who wrote documentation for programming languages, frameworks or technologies were incapable of putting it on the web and making it easily searchable.
b. The solution might have been in a programming book. But the reality is that most programmers don't learn from books anymore; this industry is slowly dying.
c. The answers on forums were buried under many pages of discussions and comments.
d. In most places, the answers had a ton of problems: from bad advice and fixes that only worked for some people, up to vulnerable code and solutions that were basically hacks. There was no way to change them, fix them or improve them.
e. A lot of problems ended up being fixed by the platform or the framework, but you didn't know that, because the old solution was still among the top Google search results.
f. If the problem was rare (maybe an API behaving strangely in a certain situation), then the search engine's page rank wasn't very useful: the problem only affected a few people, so no one posted links to its solution; therefore it didn't show up in search results.
g. There are too many ways to formulate a question: you have to use the right words to even have a chance.



To fix these problems and to make solutions much more available online, Joel Spolsky and Jeff Atwood decided in January 2008 to launch a Q&A website called StackOverflow. The site's development started in April 2008 and it was launched in August 2008 as a private beta. After 4 weeks, in September 2008, StackOverflow became public. Joel and Jeff each had their own blog. These blogs were fairly popular and they proved to be part of StackOverflow's success because, through them, Jeff and Joel increased the popularity of their idea. This was important because they wanted new visitors to feel welcome and to actually find useful content when they reached the site. StackOverflow wants to be a combination of a forum, a blog, a wiki and a news aggregator. The basic idea is for people to ask questions and receive answers, not just to add useless comments. It's a place where quality is voted up and promoted and where useless content is pushed down and disappears. StackOverflow wants to collect as much knowledge and as many programming solutions as possible. The community evaluates them through voting. As the votes accumulate, experts and trustworthy people surface and the community trusts them more and more. It was an instant success, and this convinced the founders to launch ServerFault in April 2009, a site for system administrators based on the same philosophy as StackOverflow. SuperUser followed in July 2009, a site for computer enthusiasts and power users. The success of these sites laid the foundation of the StackExchange network, which now includes a variety of sites, all following the same structure and philosophy that StackOverflow was built on. Editing and keeping the content up to date is crucial on StackExchange sites. Content accessibility is also very important; strong SEO techniques are applied to the sites.
StackOverflow's popularity and the fact that most programmers hang out online have an interesting consequence: when a new technology or programming language is launched, support sites or forums are no longer created for it; instead, users are redirected to the relevant tags on StackOverflow.

Variety and professional atmosphere

As mentioned in the introduction, the StackExchange network currently has 119 sites. This number fluctuates (see the "Area51" chapter). Each site works on the same basic principles: they're Q&A sites, they use the same platform and their target audience is people who work professionally in a certain field. The difference is that each site has its own community that drives and administrates it, completely independently of the other sites. In fact, the only collaboration is when questions migrate from one site to another; this is possible since they all rely on the same platform. Each site has its own subject, and the "variety" mentioned in this chapter's title is an understatement. The most popular subjects revolve around programmers and technologies, but there is great diversity: from math, computer games, poker, sports, politics and photography, up to financial management, chess, graphic design, parental advice, history, religion and linguistics. There's a very low chance of someone not finding at least one or two hobbies or interests among all of StackExchange's site subjects. As mentioned above, every site's purpose is to gather as much information as possible from that field's experts. When someone is looking for something, the result must be a professional, objective and complete answer. That is StackExchange's ideal. The philosophy mentioned in the previous chapter, of constantly editing and improving the content, makes this ideal possible. In many cases, it is achieved.

The content's license

The network's philosophy includes the concept of making all the information completely public. Any question or answer posted on the network is automatically subject to the Creative Commons Attribution-ShareAlike license. This means that each author receives the appropriate credit for his contribution, that the content must stay 100% public and that anyone can use and modify it (on the condition that the modified version remains subject to the same license), even for commercial purposes. Making the content available under this license allows the data to be used in many ways. See the chapter "Big Data" for more details.

Working Mechanisms

Q&A, reputation and privileges

The system is very simple: a person asks a question and others post answers. Each question and each answer can be upvoted or downvoted. Depending on these votes, the author receives reputation points. As a person accumulates more points, he or she unlocks privileges and gains more trust from the community. An upvote brings the author +5 reputation if it's a question and +10 reputation if it's an answer. The difference exists because answers are the ones that provide the highest-quality content, so they are more valuable. A downvote reduces the author's reputation by 2 points, no matter if it's a question or an answer. However, when you downvote an answer, your own reputation also goes down by 1 point. This decision was made to encourage improving answers as opposed to just marking them as low quality. Every question has to have tags: at least 1, at most 5. These tags help categorize the questions so they'll be easier to sort and find. For example, a question about how to apply a Look-and-Feel in Java will probably have the "java", "swing" and "look-and-feel" tags. The question's author is encouraged to pick the answer that he considers to be the best. In this case, the answer's author receives +15 reputation and the question's author receives +2 reputation. The privileges obtained as reputation grows are diverse: from creating bounties, raising moderator flags and creating chat rooms, up to voting to close questions, increasingly advanced editing possibilities and content protection.
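The voting arithmetic above is simple enough to write down directly. The point values come straight from the article; the function names and data shapes are our own illustrative choices.

```python
# The reputation rules described above, as a small sketch.
# Point values are from the article; the API here is an assumption.

REP = {
    ("upvote", "question"): 5,     # question upvote: author +5
    ("upvote", "answer"): 10,      # answer upvote: author +10
    ("downvote", "question"): -2,  # any downvote: author -2
    ("downvote", "answer"): -2,
}

def vote(author_rep, voter_rep, kind, post):
    """Apply one vote; downvoting an answer also costs the voter 1 point."""
    author_rep += REP[(kind, post)]
    if kind == "downvote" and post == "answer":
        voter_rep -= 1
    return author_rep, voter_rep

def accept(answerer_rep, asker_rep):
    """Accepting an answer: +15 for the answerer, +2 for the asker."""
    return answerer_rep + 15, asker_rep + 2

author, voter = vote(100, 50, "upvote", "answer")
print(author, voter)  # → 110 50
author, voter = vote(author, voter, "downvote", "answer")
print(author, voter)  # → 108 49
```

The asymmetry is visible in the table itself: an answer upvote is worth twice a question upvote, and only the answer downvote touches the voter's own score.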

Moderation

Since StackExchange is a social network at its core, moderation is done a little differently. It is divided into 3 levels:
a. The regular users do most of the moderation. They can edit the content on the sites, can vote to close or even delete some questions, and can flag for moderator attention if something happens. To do this, members have access to review queues that become available as they accumulate more reputation. Those with very high reputation even have access to some moderator tools. Therefore, the most abundant and common problems are moderated by the users; it is a community that moderates itself.
b. Moderators are the network's police officers; they step in when regular users cannot. Moderators handle flags, identify duplicate accounts, migrate questions between sites, manage tags and much more. Among their most important responsibilities, though, is solving arguments and disputes. The number of moderators constantly fluctuates; there are usually between 700 and 1000 on the network. They are appointed by the Community Managers while a site is in private and public beta. In these stages, they're chosen for their proven activity and diplomacy. Once a site reaches maturity, democratic moderator elections are held, where members nominate themselves for the position and are then voted on by the rest of the community based on the speeches they give. Such elections are not held just once for every site; on StackOverflow, for example, this happens once or twice a year.
c. When exceptional situations arise, the problems are solved by Community Managers. They are StackExchange employees whose role is to make sure everything is working properly, to monitor and guide the activity in Area51, to answer questions on the Meta sites and to offer guidance in using the site tools.

The network as a Wiki

The network is very wiki-like. Users are encouraged to constantly edit and improve the content on the sites. This goes so far that users are encouraged to add their knowledge in the form of questions and answers, basically to answer their own questions. This way, a question and its answers are considered to be like a wiki page on a certain topic. To get an idea: 39% of questions and 19% of answers are modified at least once after they're posted [1]. There are situations when a question is so complex that it requires a long list, a lot of research and a very long answer, or even a great number of authors for it to be answered. For example: "What are the best programming books?". In this case, the question will receive a lot of answers, which will be edited by a great number of people. This, together with the popularity and the huge number of upvotes such answers receive, raises an interesting question: if so many people contribute to that content, is it fair for the original author to receive all that reputation? To fix this problem, such questions are marked as Community Wiki. This means that the reputation generated by the content won't be attributed to anyone. The original authors are no longer listed; instead, the members who contributed the most to the question or answer are displayed. Also, editing such questions becomes much more accessible, since a member won't need 2000 reputation to do it, as they normally do; only 100 reputation is needed.
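The editing thresholds just mentioned can be expressed as a tiny predicate. The numbers (2000 normally, 100 on a Community Wiki post) are the ones quoted above; the function itself is an illustrative assumption, not the site's actual permission check.

```python
# Sketch of the edit-privilege rule described above.
# Thresholds are from the article; the predicate is an assumption.

def can_edit(reputation, community_wiki=False):
    """True if the member may edit the post directly, without review."""
    threshold = 100 if community_wiki else 2000
    return reputation >= threshold

print(can_edit(150))                       # → False
print(can_edit(150, community_wiki=True))  # → True
```

Lowering the bar twenty-fold on Community Wiki posts is what turns a popular question into something the whole community can maintain.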

Comments, chat and the Meta sites

In the network's early days, you could only add questions and answers. Members, though, needed a place to discuss the rules of the sites, the various exceptional situations that occurred and the overall content quality. So a system was introduced that allows commenting on both questions and answers. These comments can receive upvotes from others, but they don't generate any reputation. Another similar system is the chat, where members can talk about anything and everything. The chat is divided into rooms, each with its own discussion. Generally, each site has its own room, but new rooms can be created by members who have sufficient privileges to do so. There are also moderator-exclusive rooms, where moderators can discuss moderation issues without regular members interfering. Every site on the network has an associated Meta site. These are separate, but they work on the same principles. The only thing they have in common with their parent site is their moderators. Here, the discussions are only about the parent site: adding or removing rules, posting announcements and much more. Meta sites are created when their parent sites reach the private beta stage (see "Area51").

Area51
As I mentioned in the previous chapter, the network has a lot of sites, each with its own subject. But how are these sites born? And who decides which sites are launched and what subjects they'll have? The answer is: you, the regular member. The network is social at its core, so the community decides which new sites are launched and what their rules will be. This all takes place in Area51, a special place where new sites are defined and launched by following these steps:
a. A new site is proposed. It needs to

no. 24/June, 2014 |

have a name, a subject (e.g. naval architecture) and a description. Anyone can make such a proposal.
b. Next comes the definition stage, when the community tries to draw in people interested in the new site and to create hypothetical questions that would fit on it. This is done in order to establish the new site's domain name, the acceptable subjects on the site, what a good question should look like, what a good answer should look like and so on. These questions will be edited and discussed until the community agrees on them. Once the magical number of 40 such questions is reached and there are enough people following the site's progression, the site is considered to be "defined".
c. The next stage is commitment. In this stage, the community tries to get as many people as possible to "commit" to participating on the site once it's launched. The members have to "digitally sign" a petition that confirms this. The reason for this commitment is that a site needs a critical mass of members who use it and make it popular in the first cycles of its existence. The site can't go to the next stage without at least 200 such members.
d. Next comes private beta, when the site is actually launched, together with a sister Meta site. In this stage, the site is only available to those who committed to it during the previous stage. The FAQ is now defined, the first moderator positions are assigned and last-minute changes are performed. Also, the existing members try to fill the site with content and make it popular on Facebook, Twitter, blogs and so on.
e. The next stage is public beta, when the site is opened to the public and anyone can use it.
f. If the site reaches the required activity and traffic levels, gets a critical mass of members and the community is confident that it will remain popular, it reaches maturity. In this case, the site gets a brand new and unique graphic design that best reflects its subject. Also, democratic moderator elections are held.

Careers
StackOverflow currently has 3 million members and 6.6 million daily visits; it is the most popular site of the network and generates over 80% of the network's content and traffic [2]. Having such a huge audience of programmers opens the door to a unique opportunity: jobs and careers. This is how Careers.StackOverflow was born, a kind of LinkedIn only for programmers and IT professionals. Every member can create an account, an electronic résumé. On it, you can add all kinds of information: from job history, known technologies and authored articles to projects you were involved in and books you've read. For optimal functionality, the site integrates APIs from very popular 3rd-party sites like LinkedIn, GitHub, BitBucket, Amazon, SourceForge and many more. Various StackExchange profiles can also be included here, together with the best and highest-voted answers. Companies are not neglected on this site either: they can create their own pages, where they can include a company description, a map with their location, currently available jobs, pictures, accounts of key employees, benefits, technologies used in their projects and many more.

Big Data

All accounts, questions, answers, comments etc. added to the StackExchange network are publicly available through a series of special sites and APIs. This is possible because of the license, see the “The content’s license” chapter.

Data.StackExchange
This is a special site that allows access to the network's content. Members can write SQL queries in a big text area, execute those queries and see the results in real time. To help members write these queries, the site allows viewing the complete structure of the database tables that hold the content. Because StackExchange's architecture is built on SQL Server databases, the queries must respect this vendor's syntax. The databases used on this site are not the same as the ones used by the live StackExchange sites; they are just a copy, which means the data is not entirely up-to-date. An update of Data.StackExchange's databases usually happens once a month. Members who log in on this site can save their queries and come back later to change and improve them. Given the public nature of the content, some information is not available on this site, such as people's email addresses.
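As an illustration, the snippet below sketches the kind of question one might answer on Data.StackExchange. The live site runs SQL Server against the real, much larger schema; here a tiny, hypothetical slice of the Posts table is emulated with SQLite just so the example is self-contained and runnable. The table and column names mirror the public schema, but the data is invented.

```python
import sqlite3

# Emulate a minimal slice of the Posts table (the real schema has many more
# columns: CreationDate, OwnerUserId, Tags, ...).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Posts (
        Id INTEGER PRIMARY KEY,
        PostTypeId INTEGER,   -- 1 = question, 2 = answer
        Title TEXT,
        Score INTEGER
    )
""")
conn.executemany(
    "INSERT INTO Posts (Id, PostTypeId, Title, Score) VALUES (?, ?, ?, ?)",
    [
        (1, 1, "How do closures work?", 120),
        (2, 2, None, 95),                      # answers have no title
        (3, 1, "What is a monad?", 80),
        (4, 1, "Why is my loop slow?", 5),
    ],
)

# Top questions by score: a typical Data.StackExchange-style query.
top = conn.execute("""
    SELECT Title, Score
    FROM Posts
    WHERE PostTypeId = 1          -- questions only
    ORDER BY Score DESC
    LIMIT 2
""").fetchall()
print(top)  # [('How do closures work?', 120), ('What is a monad?', 80)]
```

On the real site you would paste only the SELECT statement into the query editor and run it against the monthly snapshot.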

StackExchange API and StackApps

Another way to access the network's content is through the StackExchange API, a REST web service that returns data in JSON or JSONP (padded JSON) format. This API can be used by 3rd-party applications that rely on the StackExchange network. Many such applications have already been published, especially for mobile devices. Some content can be accessed through this API only with authentication. To do this, the application must be registered on StackApps, at which point you'll receive an authentication key. With this key, the application is allowed a much higher traffic limit. StackApps is, as mentioned, the site where applications that use the API are registered. Authors present their applications here, together with installation instructions, and discussions about the API and how to use it also take place here.
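A minimal sketch of how a 3rd-party application might talk to the API. The endpoint and parameter names (`site`, `order`, `sort`, `pagesize`, `key`) follow the public StackExchange API v2.x; the sample JSON response is invented and hard-coded so the snippet runs without network access.

```python
import json
from urllib.parse import urlencode

# Build a request URL for the /questions endpoint of the StackExchange API.
params = {
    "site": "stackoverflow",   # every request must name the target site
    "order": "desc",
    "sort": "votes",
    "pagesize": 3,
    # "key": "YOUR_STACKAPPS_KEY",  # placeholder; registering on StackApps raises the quota
}
url = "https://api.stackexchange.com/2.2/questions?" + urlencode(params)

# The API answers in JSON (or JSONP, if a `callback` parameter is added).
# Hard-coded sample response, so the sketch works offline:
sample = json.loads(
    '{"items": [{"title": "Example question", "score": 42}], "has_more": true}'
)
for question in sample["items"]:
    print(question["score"], question["title"])
```

In a real application you would fetch `url` with an HTTP client and decompress the gzip-encoded response before parsing it.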


The format and philosophy on which the StackExchange network was built is a real success; its popularity cannot be questioned. The sites it includes have made work much easier for millions of people of all professions. Personally, I save many hours by using these sites, hours which I would otherwise spend digging through the Internet's far-away corners trying to find answers to my questions. It's practically impossible to calculate how much money StackExchange saves, but it's pretty clear we're talking about many billions of dollars [3].

References
[1] Christoph Treude, Ohad Barzilay, Margaret-Anne Storey. How do programmers ask and answer questions on the web? In ACM, 21-28 May 2011.
[2]
[3]

Radu Murzea PHP Developer @ Pentalog




QA professionals – growth and development


After working for thirteen years as a software developer, I have dedicated the last seven years to the field of software product quality management. During the last four years, I have also been involved in recruiting specialists. Due to my career path, I have heard various opinions on the tester profession, giving me the feeling that this profession and its career development potential are still misunderstood. Besides, the "tester" and "QA engineer" terms are often interchanged and used improperly. There are still many candidates for the tester position who view it just as an opportunity, a stepping stone toward what they consider to be the triumph of an IT career: software development. They do not give themselves the chance to understand the possibilities of professional development in the field of quality management. Some of those who practice the "tester" profession at a beginner level rush to improperly use the "QA engineer" title, but its meaning usually eludes them. Then I obviously have to ask myself: how confident can you be in the career you might develop, as long as you are ashamed to pronounce its name? On the other hand, in our environment there are still some project managers who minimize the role that testers assume in assuring software product quality. As a consequence, tester recruiting managers omit to require that their potential candidates possess technical skills. As a result, the process of developing professionals in quality assurance and control is severely damaged. Considering that, among the numerous software products available on the market, the difference maker is their quality, the growth and development of professionals in this field is of major importance. When we are looking for a position in this field, or when we are on the other side of the table searching for future employees to fill such a position, the main focus should be beyond the immediate needs of


the project. The requirements we define when hiring a tester need to be focused on supporting career development. Assuming the risk of mentioning well-known facts, let us start by recalling that a software product's Quality Management is accomplished through the correlation and synergy of two large branches that intertwine, yet remain essentially different:
Quality Assurance, whose purpose is to gain the trust that quality requirements are met, being process oriented and using the following as leverage:
• Standards,
• Procedures,
• Auditing,
• Planning.
Quality Control, whose purpose is to check that the product's quality requirements are fulfilled, using:
• Testing techniques,
• Defect reporting,
• Measurement- and control-based evaluation,
• Process guiding,
• Identification of the tools used.

While the software product's Quality Assurance is the responsibility of the stakeholders, Quality Control is the testers' responsibility. Keeping this in mind, let us identify a way to support the development of a career in Quality Management.
The first step in one's career is usually that of a "junior tester". He is often assigned tasks that imply defect validation and test runs (manual or automated). Considering the learning curve, this step should last only a short while. Therefore, the requirements we have when recruiting should outline, besides the need for a junior tester, the professional perspective that we can offer. At this level of experience, we cannot talk about QA engineering, as that implies process optimization and planning.
As testers, we develop our ability to control the quality of a functional requirement according to the general quality requirements of the product by covering it with a necessary and sufficient volume of tests. To achieve this, one must develop the skills needed to understand the technical and functional aspects of a component, as well as testing techniques. Looking at this as a first step in our career's evolution, if it is easy to achieve, then our satisfaction, and implicitly our motivation, is reflected through a positive attitude. The time needed to achieve this progress is usually very tightly correlated to the technical knowledge we possess in various fields, such as:
• Programming,
• Operating systems,
• Computer networks,
• Test techniques,
• Test scripting,
• Development processes.
The deep knowledge of the process and the attention dedicated to respecting it, both personally and by the other members of


the team, as well as a careful estimation and a correct measurement of the quality of the tested functionality, contribute to the product's quality assurance process. Knowing the product from the point of view of its utility, functional and technical architecture, as well as learning the types and techniques of functional and non-functional testing, is an important step we need to go through in the process of professional growth. To achieve this, we must keep a continuous focus on:
• Getting more thorough and extensive technical knowledge,
• Developing the ability to design tests for various testing types,
• Gaining knowledge of tools used to test, report and measure software product quality,
• Getting deeper knowledge of development processes.

Looking at things from the quality assurance perspective, we can now mention a contribution to it by planning test types and by watching over the process, having pertinent opinions about its possible deviations and ideas to improve or adapt it. Further on, we can get involved in developing the test plan, keeping in mind all the functional and non-functional characteristics of the product, as well as the characteristics of the business to which the product is dedicated. Knowing the tools needed to test, measure and report the quality characteristics of the software product in regard to its purpose will help us develop a strategy that assures it a competitive quality. The success of the test strategy is deeply tied to the correct implementation of the software development process. Therefore, our involvement as testers in improving the ongoing process is inevitable.
Finding ourselves at this point, after a patient and dedicated journey, we can see the perspective with the clarity and confidence gained by professionals. Development can continue in the management, functional architecture or business analysis areas, as well as in the programming or consultancy areas. All depends on abilities, dedication and purpose. Developing a career in testing, leading up to a stakeholder position responsible for quality assurance, can be accomplished only using the same known ingredients:
• Personal involvement,
• Proactivity,
• Continuous gain of technical skills,
• Understanding the role in the context of the project,
• Learning and applying the processes,
• Continuously enhancing abilities.
An important advantage that we can have is the chance to be part of a team that sustains and helps us through:
• Training,
• Coaching,
• Integrating personal goals into the team's purpose.
Let us get back to the idea that, amongst the multitude of software products available on the market, quality makes the difference. Experience has proven that, in order to reach a top spot in this competition,

you need a well-implemented process and many professionals in the fields of project management, project architecture, programming and quality control. The product is a whole whose fulfillment is given by all of these combined. Management maturity is starting to show itself in our community as well, and it comes to support the role that Quality Management plays in the success of software products and, consequently, in the development of field professionals, so the story that I have told has more and more real-life examples. In order to grow and develop ourselves as QA professionals, we need to give ourselves the chance to understand the tester profession, with all its needs and potential. Let us stop seeing it as an alternative we choose because we are not good enough to do something else, or as a role of "developer's assistant", because behind these misunderstandings, born out of superficiality and lack of knowledge, there is a beautiful career that we can develop further and further, with effort and dedication.

Mihaela Claudia
Senior QA Engineer @ HP România



The Joy and Challenges of Writing Quality Software


Writing quality software is not an easy task. There are many unexpected situations for which you need to keep an eye out. The range of possible difficulties is really infinite and can vary from misunderstanding real project requirements and wasting good design opportunities to not having a fair and productive interaction with the rest of the team members.

Things become even worse for complex projects, where the simple fact that you don't like the programming language chosen for development can be the straw that breaks the camel's back. The purpose of the present article is to identify the characteristics of software development disciplines that make it easier for development teams to enjoy themselves and to feel engaged in the task of writing quality software. We begin our search by looking at:
• Enjoyment and Engagement
• Software Quality
• Practical Considerations
Let's start with enjoyment and engagement (in that order) and discuss both software quality and practical considerations later, as they will fit well into the context.

Enjoyment and Engagement

Have you noticed how everybody (well, maybe almost everybody) feels so engaged and enjoys popping bubble wrap at every occasion? Why is that? What makes bubble wrap so attractive that people often say it helps them relax and reduces stress? Is it the smooth shape with lots of bubbles that makes it interesting, or is there something else?
The answer is quite surprising, because it actually has nothing to do with the bubble wrap itself, but with the actual discipline required to pop the bubbles! But how can it be a discipline when, as we know, a discipline means there must be some rules and restrictions, and one is not allowed to do whatever he/she feels at any time? If you think like that, you have just described a discipline that is not for humans. A discipline for humans must instill order and harmony between the product being created and the one who creates it. In his fascinating book Creativity: Flow and the Psychology of Discovery and Invention¹, Mihaly Csikszentmihalyi describes the characteristics of such experiences:
• There are clear goals every step of the way,
• There is immediate feedback to one's actions,
• There is a balance between challenges and skills,
• Action and awareness are merged,
• Distractions are excluded from consciousness,
• There is no worry of failure.
When all the above are satisfied, people are already enjoying themselves and engagement will follow (the author calls this state flow):
• Self-consciousness disappears,
• The sense of time becomes distorted,
• The activity becomes autotelic (meaning one enjoys the activity just for the purpose of doing it and not for its final result).
As always, the right pointers and suggestions make us see the facts in a totally different light, and we can further analyze the effects of popping bubble wrap as the effects of a discipline with the above characteristics.
1. Obviously, there are clear goals at every step of the way, because what you have to do is pop one bubble at a time (or two or three, nobody says otherwise); you can only hope the wrap will never run out of bubbles.
2. There is immediate feedback to your actions: the fact that a bubble pops means you are on the right track. This simple feedback gives you a feeling of comfort and relaxation (acting as a stress relief).
3. There is a balance between challenges and skills, because the challenges can be solved by using your thumb to apply mild pressure.
4. Action and awareness are merged. You are paying attention to what is happening with the bubble wrap. If a bubble doesn't pop, you can always try again by applying slightly more pressure.
5. Distractions are excluded from consciousness. Imagine you are using the wrap and someone asks you if he/she can borrow it (these kinds of things can end friendships!).
6. There is no worry of failure. Have you ever been worried about failing just a moment before popping a bubble? I bet not. Worrying about failure is one of the best ways to inhibit your full capabilities, so you should avoid doing it.
Each of the six characteristics (building blocks) is necessary for enjoyment to take place. It suffices to ignore just one of them and things may change dramatically. For example, there are some bubble wraps with communicating bubbles (the bubbles are not isolated from each other), such that applying pressure on one bubble will transfer the air to the next one, making it impossible to pop by hand. This causes a slight imbalance between challenges and skills, transforming the task into a really frustrating one, because bubbles will no longer pop without the help of a proper tool. I believe this simple observation strengthens the idea that enjoyment and engagement are not due to the wrap, but to the discipline behind it.
You might ask yourself what this has to do with the software development discipline. The answer is straightforward. Wouldn't it be nice for your daily job-related activities to act as a stress relief? This is not something reserved for the lucky ones. In order to enjoy and become fully engaged in your project, first of all, the six building blocks have to be targeted by your discipline. Once the building blocks are there, as often happens in the real world, you need to pay attention and make adjustments as you go, so you don't lose any of them as the project goes on. This requires practice, and it is not only a task for managers but for all team members, as we are all responsible for making the time spent on the project worth it.
[A short side note on Agile: one of the reasons Agile development looks so attractive is that its manifesto allows us to value human interaction and working software, while making it clear that changes should be expected (so you should have no worry of failure). Agile promises us a software development discipline for humans to enjoy and be engaged in.]
To wrap things up, enjoyment and engagement bring huge benefits for projects and for the people working to make them a reality (think of increased creativity and innovation, just to name a few). Each of us should strive to make them happen by paying attention to the way we plan and execute our everyday tasks. Remember the six characteristics.
In part two of this article, we'll discuss software quality and how the six principles can be made part of the software development discipline.

¹ http://www.amazon.com/Creativity-Flow-Psychology-Discovery-Invention/dp/0060928204

Cătălin Tudor
Principal Software Engineer @ Ixia



What’s wrong with the Romanian IT Industry?


The development and direction of the Romanian IT industry - Cluj-Napoca in particular - has always been an attractive topic for me. I am the kind of person who always looks for the bigger picture; I try to understand the system or mechanism, how it "operates", and then tweak it a bit.

Just to get an idea: in the 1900s, the average lifespan of a company was 80 years; in the 1950s, the average dropped to 50 years; and by 2011 it had been reduced to merely 8 years. This shows the increasing volatility of the global market and the fact that companies need to make adjustments very fast in order to survive the rapid changes in the environment they live in. This particular case, the software development industry, is a very challenging one. Just to get a feeling of what we are talking about: in the 5 major cities of Romania (Cluj-Napoca, Iasi, Timisoara, Brasov and Bucharest), around 80,000 employees work in the software development industry. I know Indian companies that have more employees than our entire country. And there lies a problem and our challenge. It is becoming increasingly clear that further development of the IT industry cannot be achieved by volume alone. In the beginning, and even these days, most of the IT companies relied on double-digit yearly growth in personnel. Since the type of work was mainly an hour-based approach (the "capacity", "near-shore" or "outsourcing" models), growth simply meant: more people > more hours > more revenue. Simple enough; however, that model has put a significant strain on the labor market, causing a severe shortage of skilled people. We have all but exhausted that "resource", as the HR departments¹ would call it. The over-use (or mis-use) of the capacity model has caused 2 major problems in today's IT world in Romania. The skilled people who work in this industry, or the shortage thereof, is the first problem. Studies show that, in Cluj-Napoca, considering the rate at which
¹ Peter Leeson often says we should rename HR to HA, from "human assets", as employees are the most important assets.


universities are educating people, plus those moving to the city, in 3 years we will be able to source only 52% of the demand coming from the local IT companies in terms of people (and that does not take into account new companies, just existing ones). The situation is comparable in other IT-centric cities as well. Given the high demand for skilled employees, the pressure on salary costs has increased, increased and increased, to a point where it is becoming harmful. In the battle to get good people, companies are constantly increasing wages. For employees, this means there is always somebody willing to pay more. A skilled professional can change companies rather quickly, and with a significant raise each time. We appear to be starting a "bidding war", which has caused, among other things, Facebook contests between employees over who will reach the xxxx EUR milestone first, and so on. Now, do not get me wrong: I have nothing against good wages or employees' rights; having a market where the employees have the power is not bad in itself, but when somebody simply leaves for a higher salary at another company, without having gained additional skills or knowledge, we have a serious problem. People are less and less interested in learning new things or further developing their own skills, simply because with a small amount of effort they can get a better paying job. Not to mention that when an employee expresses his desire to leave, his employer will most likely come up with a "counter-offer". The practice is not restricted to the IT industry (or to Romania), but it gives a confusing image: we are basically telling the employee that "you are worth more, we have just been paying you less until now". Of course this is not a general rule, but the current state of "bidding wars"


encourages mercenary-type behavior. This has an impact on the so-called "productivity" (as well as creativity and efficiency). Having the costs go up at a higher rate than the knowledge and specialization of the employee means that, overall, the Romanian IT industry is becoming less and less attractive compared to other countries around us - and not just around us, as the buzzword of this century is globalization; the competition is everywhere, not just Eastern Europe. Not so long ago, the race for higher wages without increased skills or knowledge led to the downfall of the IT industry in many Western European and North American countries, and led them to outsource their development to countries like ours… The model is the second problem: the fact that it is based on hour selling, or outsourcing. The more hours you sell, the more revenue you make. It involves mainly development/coding and testing efforts, and you hear very little of terms such as engineering, architecture, requirements and user interface design. This is partially because the interest in developing these higher-level services was low. Until now. Again, do not misunderstand: there is nothing wrong with outsourcing. It is a valid business model for certain times and it provides a constant stream of revenue to a company; it is only when companies believe they can rely solely on it that the problem occurs. More and more companies are starting to realize that without developing into areas that allow them to bring a higher added value to the customer, they can never become a real partner for their customers. Companies are beginning to understand that being just a supplier is the quickest road to losing your customer: your customers will always find cheaper elsewhere. Without the ability to advise, to understand the business needs, to

lead and challenge your customer, you are, as a company, opening the door to redundancy. Companies are starting to move towards services, consultancy and products in order to develop themselves. This is the next step. If you look at Intellectual Property rights (of course, there is no official statistic on this), very little remains here in Romania. We trade hours for revenue, and all the Intellectual Property rights of what we develop belong to the customer or other party we are working with. But how does a company change from a capacity model to a service/product model, or a mix of the two? That alone is a large subject and will be a matter for a future article, I suppose, but in a nutshell, one needs to invest the capital obtained through the capacity model into the development of these new services. The key word here is invest, and this investment must be primarily in people. As people are the main assets of a company in our industry, they are the ones who need to grow in order to fit the new philosophy. There is frequently a feeling that, with the current staff turnover, there is no point training people if they are going to leave anyway. A wise man once replied to that with: "What if you do not invest in their training and they stay?" The employees, their skills and abilities are the most important "merchandise" a company has. The development of a company is tied to the development of its employees. Luckily, even if corner-cutting and "one-time-project" companies are out there, they are the exception and not the rule of Romanian IT. I have talked a lot about problems until now and too little about solutions. As in politics, anybody can talk about problems; it is easy. Trying to give a solution is the hard part. And then there was quality. That nice little buzzword that we hear around in most companies, most likely accompanied by statements like "Quality is non-negotiable", "Quality is key", "Quality is the most

important", etc. But when one asks what the level of quality is, how it is measured and defined, and what the trends are, a lot of people are puzzled; they simply think that talking² about it will make it happen (they will, however, keep a tight track on hours and budget). The answer to the problems above lies in quality: understanding, defining, measuring and tracking a company's quality is the first step towards solving the problem. Once you have done that, additional services and specializations are easier to manage and implement. Companies, and individuals, need to understand that their level of quality is their own; it is potentially the most powerful differentiator from everybody else. It is what separates them from the "pack" and brings in new customers; better yet, it makes sure your customers come back to you and advertise their satisfaction. When you talk about quality, paying for functionality and not hours, that is the first step towards getting out of the people > hours > revenue trap. You can always find cheaper, but the quality you deliver is your main asset in the battle with the competition. One way to increase your level of quality as a company is an improvement program, as my colleague Tibor Laszlo pointed out in issue 22 of TSM magazine, in the article "Improving - why bother?". Whatever your choice is, quality focus is one of the most difficult endeavors a company can engage in, simply because it is an organizational mindset change. And although there are some quick wins you can get with low effort and high reward, a mindset change is one of the hardest things to pull off, as it involves people, psychology and making a durable change in the way people think and work. It is a real company transformation. So, in the end, if you are an employee of a software development company, ask yourself these questions:
• When the current cycle (or

bubble) on the labor market finally breaks, in what type of company do you want to be? An hour-type company that sees you as a resource, or a company that invests in its employees and their training and sees them as assets?
• What have you done in your personal development lately to increase the quality of what you deliver?
And if you are a manager, or in a position to influence the future of a company:
• How are you going to ensure your company is still there in 5 years, and what are you doing now to get there?

Ovidiu Șuța QA & Bid Manager @ ISDC

² Of course, most company managers cannot even give a clear definition of what the word "quality" means to them.



Why Is It That Our Children Do Not Dream Of Becoming Project Managers?


This was the question Jon Duschinsky – keynote speaker at the PMI Global Congress EMEA 2014 – asked the audience. I had the privilege to attend the Congress in Dubai, UAE, at the beginning of May. Jon was truly amazing and I will try to convey some of the bright ideas that stayed with me long after his speech.

Duschinsky, recently voted the world's second most influential communicator in social innovation (the first being Bill Clinton), has a strong background in philanthropy and social innovation. In 2012, together with a team of some of the world's leading creative thinkers, Jon co-founded “The Conversation Farm”. Hired by companies and charities alike, The Conversation Farm solved problems by creating ideas that engaged millions of people in conversations that challenged and changed attitudes and behaviours. They believe that the driving force in business today is not what you make, but what you are made of. Their clients include a consortium of the world's largest cosmetic companies (L'Oreal, Unilever, LVMH, Coty etc.), the World Alzheimer's Association, Nascar, the NFL and many more. Jon talks a lot about “conversations”, and he does not refer to the social, face-to-face conversations that our predecessors had during their meetings, parties and other social events of their time. What he refers to is something much bigger: virtual conversations that could not have happened ten years ago, when people were only starting to talk about Web 2.0, when Mark Zuckerberg together with some of his colleagues from Harvard University was only founding Facebook, and Twitter was nowhere in the picture! The breadth of today's conversations is incredible indeed, due to the technical developments that allow the “creation of the Internet” in real time, combined with easy Internet access from mobile devices. Equally important are the huge communities developed by the popular global


and regional social networking sites (Weibo and VKontakte alone reaching close to a billion users). Going back to the question raised at the beginning of this story, Duschinsky believes that part of the solution is to start conversations. “What kind of conversations?” - one would ask. “About what we do?” Not in the least! If you tell your children that your job is to work with your team to deliver on scope, time and budget, monitor risks, write reports and negotiate with stakeholders, will they understand anything? Improbable. They most likely will not even be able to pronounce “Project Manager”! The problem lies in the message that we want to convey. Our work by definition means focusing on the details, on the process, instead of the end result, which is visible and hopefully appreciated. What should we say, how could we touch a child's heart so deeply as to make them want to be just like us? Also, how could we influence our team and an entire organization to make them understand what we truly do and therefore gain their support – as this is critical to our success? The answer is quite straightforward: we need to talk to their heart and soul, to their emotional rather than rational self. The first step to achieve this is to understand what we - ourselves and our organization - believe in. What is it that we do every day? Why is it important to start that project – why is it so important for our organization? When we manage to answer these questions it will be so much easier to convey the message to our team and work together to achieve the same goals! It is not enough to do a project only for


its immediate, pre-determined goals, as it may become irrelevant even before completion. A project should fit in with the genuine values of the organization that is initiating it, and we should identify ourselves with these values. If we truly believe in our project, it is very likely to be successful, while the opposite is also true. As an example of the importance of genuine values: there were oil companies trying to lead people into believing that they were close to the communities they were working with, but in reality there was a large number of lawyers working in these companies' legal departments with the only purpose of bypassing environmental laws. As they were promoting fake values, these campaigns eventually failed. However, if we manage to identify these genuine values and link our project to them, the team will not only work on a project, but they will also contribute to a grand goal while maintaining the relevance of the project in the organizational context. The second step is to understand who the real customer of our product or service is. Let us identify ourselves with the ones using it, let us understand their real needs and motivation in using it, and we will understand how we can change their life with it. If we manage to do this, it will become so natural to deliver what is really needed and we can be sure that the conversation we are starting will be positive. The third step is to give people the tools to get involved, something real that they could do to join in the conversation and influence others. In order to match the theory above with reality, Jon talked about one of the

TODAY SOFTWARE MAGAZINE

important causes that he contributed to: raising the substantial funds needed to start research on treating and curing ALS – a neurodegenerative disease that, although discovered during the 19th century and affecting millions of people globally, did not receive any attention from researchers. Duschinsky believes that the main reason this disease did not receive proper attention is that there were no conversations about it, even though it is similar to many other diseases, like Parkinson's (known since 1817) or malaria (affecting one million people every year). To confirm this theory, Jon mentioned that HIV/AIDS, although discovered recently (in 1981), had a treatment developed only nine years after its discovery. The HIV/AIDS chief researcher confessed that the main reason for having started working on this was that “everyone was talking about it back in the 80's”. Jon and The Conversation Farm were asked by Steve Gleason – a former famous American football player who has ALS – to change the conversation about the disease during the 2013 NFL Super Bowl. Aiming to touch the emotional side of people, Steve created a one-minute film in which ex-teammates and coaches from New Orleans (Steve's home town) speak in plain but powerful words about what ALS is and how it steals people's life away bit by bit. Nothing innovative so far; there have been many similar campaigns that did not produce any significant outcome. The unbelievable part of this story is that the film was not broadcast on any television! The link to the film was only sent by email to four journalists from the New York Times, CNN,

USA Today and ESPN. Through the power of social media in today's world, the film was viewed by tens of millions of people in a matter of hours. This conversation triggered the posting of over 8 million tweets during the NFL Super Bowl alone! The end result was astounding, millions of US Dollars being raised during the first 48 hours after the game. Shortly after the event, 150 researchers gathered in a room to talk about the steps that needed to be taken in order to start the research, and less than a year later some of the world's most important universities were creating business cases to support finding a treatment for ALS.

To conclude, the ones working for this campaign believed in it, identified themselves with the ALS cause, created a simple but powerful message addressed directly to people's hearts, offered them the means of getting involved – forwarding the message to their communities and contributing financially – and in this way changed the conversation about ALS. The breadth of the conversation motivated researchers to start their work on finding a treatment for the disease.

We need to start changing the way we talk about Project Management. We need to change the conversation and make people see our world differently. If we succeed, we will achieve much more than initially planned: our teams will be more motivated because they will have a clear understanding of what they need to do, why and for whom they are doing it; the other stakeholders will get closer to the project and support us better; and we will understand what kind of organization we are working for, why we are doing the project and what our ultimate goal is.

Laurențiu Toma, PMP
Project Manager @ SOFTVISION




„Cloud” – perspective matters!

Cloud is probably one of the most used terms in the IT industry of this decade; but what does the cloud actually mean?

From its very beginnings, the “Internet” network has been represented visually, through diagrams, by a cloud, thus showing the fact that it is offered as a service, without revealing information about its technical implementation, offering only an interface for communicating with other networks. The adjoining image shows an example of such a diagram: there are several elements that we are familiar with (the firewall, the web server, the users – see Figure 1) and then an element that is offered as a service, just as described above.

Figure 1

This element is the Internet, represented by a cloud. Most users are not aware of the details of how the Internet works as a network. They know that an interface for accessing it is available to them (a wireless network, a data cable, etc.) and they can use it in their everyday activities. Obviously, this is the way in which Cloud services are useful for home users.

On the other hand, there are other types of users. One of our partners developed an application for local governments, called “Government as a Service”. The sales of this application are based, naturally, on the magic word “Cloud”. A web application is available to the employees of local administrations (because they are the target of the application), which they use intensively in their daily activity, an application which makes available all the instruments they need for the services they are offering to the citizens (such as recording births, deaths, marriages and the like). The application is hosted by the software's producers – all the local administrations need is a computer for each employee, connected to the network.

Can we say that this is a Cloud type of service, from their point of view? Surely! They do not care what sort of firewall the connections pass through, where and how the data is stored (at least not the front-line employees); they do not have to carry out any sort of maintenance for any sort of systems; they do not have any sort of interaction with anything but what they need: the instruments for processing civil status data and events. In short, they connect to the Cloud and get their job done!

Can we say the same about the company producing the software? What does the Cloud mean for them? The application is, obviously, one of their products; they offer support, carry out maintenance, they know exactly how each little piece works, and the architecture diagram is a complex one. However, if we look at their diagrams, we should nevertheless see a cloud. There should be a specialised service offering platforms as a service (PaaS). One arrow goes, on the diagram, from the company's development environment (and why not, maybe even from the programmers' computers) straight to the “Cloud”. The code written by the developers arrives directly in an already prepared environment. Neither the programmers, nor the testers, nor the business analysts, nor anyone else is dependent on the manner in which this platform works, but only on the interface made available by the service provider offering the platform (usually a set of APIs). In a similar manner, the platform distributor may have an infrastructure provider (IaaS) and so on. Moreover, this entire chain does not have to be an external one; it may be that these services are offered by a different department of the same company. For instance, “the IT guys” offer the infrastructure part, a team of “devops” offers the platform part, and the programmers tend to their own business.

When we talk about “Cloud” services we think, from the supplier's perspective, of attributes such as:
• Agility – the possibility of relocating the resources we have available. If a municipality has a software license for 10 users, it may delete or deactivate the account of an employee that has retired and create a new one.
• The existence of an API – we can programmatically create new platform instances.
• Reducing and controlling the costs – due to the fact that we know exactly what we spend the money on: number of active users, processor used, memory used.
• Independence from a geographical location – we can connect from the office, from home or from the office of a client we are visiting.
• “Multitenancy” – we do not need a separate server for each client, as they are all using the same application.
• Reliability and elasticity – we can allocate resources on our own, in real time, if need be.

An interesting situation arises when we have software with a functionality similar to that of a Cloud, but which runs on an infrastructure that is not “cloud-like”, many of the attributes described above being emulated through various auxiliary solutions, within the application. Can we speak, in this case, of a “Cloud”? Is it enough for an application to be drawn in the service beneficiary's diagram as a cloud to become “cloud-based”? I think it is not! Yes, it is offered “as a service”, but if the scalability part is missing (and it has to be implemented at all levels!), then the application will not be able to support unexpected periods of intense traffic; we won't be able to talk about a reduction in cost if we cannot automatically scale resources to current needs; and so on. The same goes for the other attributes that help define what a “cloud” is. Thus, according to the perspective from which we look at things (beneficiary versus service provider) and according to the specifics of a project, this word may mean several things. So let us not be fooled by false clouds that do not bring the much-needed rain of functionality that our crop needs!

Florin Asavoaie
DevOps @ Yonder







In the last weeks we discovered together the base principles of Aspect Oriented Programming (AOP). Now it is time to see how we can use the AOP characteristics at full power, using PostSharp.


Before going straight to the subject, let's make a short recap. AOP is a programming paradigm with the main goal of increasing the modularity of an application. AOP tries to achieve this goal by allowing the separation of cross-cutting concerns – using the interception of different commands or requests. In the previous articles we discovered how we can use AOP using Unity and .NET 4.5 features (RealProxy). Unity gives us the possibility to register actions that can be executed before and after a specific action. The RealProxy class is the base class behind all these features and is used by frameworks like Unity to offer this functionality. The biggest difference between RealProxy and a stack that offers us AOP is from the features perspective: using RealProxy directly requires us to write all the functionality that we need, which translates into time, money and more code that we have to maintain (in the end, we don't want to reinvent the wheel).


PostSharp is the first real AOP framework presented in this series of articles. Until now we have looked at different ways in which we can use AOP features, but without using a real, dedicated AOP stack. I decided to start with PostSharp because, when you need an AOP framework for a real project that is pretty big and complex, you should look at PostSharp first. It is the kind of framework that offers almost all the AOP features you would need. I usually compare PostSharp with ReSharper from the point of view of product quality: it is the kind of product that has everything you need for a given feature. PostSharp has too many features to cover in a single article. In the near future we will study each of them separately, but for now we will look at the most important features related to AOP.

Features

The main features of PostSharp are:
• Threading Pattern Library – allows us to control the level of abstraction around threading, detect and diagnose deadlocks, and control how actions are executed on different threads and in which position (foreground or background).
• Model Pattern Library – offers us the full features of AOP using the INotifyPropertyChanged interface. All the setup and other plumbing is taken care of by PostSharp. More complicated behavior can be implemented in a very simple way when we start to use Code Contracts.
• Architecture Framework – gives us the power to validate different aspects of code quality and design: design patterns, core relationships, code analysis and many more.

We have seen what the main features of PostSharp are; now let's inspect the technical part and see how we can add AOP to our project by using PostSharp.

How does it work

The biggest difference between PostSharp and other AOP solutions is how the custom behavior is added. In general this behavior is added at runtime, but not in the case of PostSharp: all the hooks are added at compile time. In this way the performance is not affected too much. Of course, just like in any AOP framework, performance is affected a little, but with PostSharp it is almost the same as without it.

Basically, PostSharp is a post-processing step that takes the compiled code and modifies it. All the hooks are added at this level: the output from the compiler is taken by PostSharp and rewritten.

Before and After

The most common use case for AOP is to execute a specific action before and after a method or property is called (for example, for logging). This can be done very simply in PostSharp with a few lines of code. The first step is to define the behavior that we want to execute at that specific moment. This can be accomplished by extending OnMethodBoundaryAspect. We can override OnEntry, OnSuccess and OnException. In each of these methods we have access to the input parameters, to the result and so on. We can even change the result of the call.


[Serializable]
public class FooAOPAttribute : OnMethodBoundaryAspect
{
    public override void OnEntry(MethodExecutionArgs args)
    {
        ...
    }

    public override void OnSuccess(MethodExecutionArgs args)
    {
        ...
    }

    public override void OnException(MethodExecutionArgs args)
    {
        ...
    }
}

From this moment on, we can use this attribute to annotate all the methods for which we want this behavior. This is not the only way to specify the target methods: we can also do it from AssemblyInfo.cs, where we can define custom filters for the methods to which the aspect should be applied.
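The mechanics behind OnMethodBoundaryAspect can be pictured as a hand-written equivalent: the aspect's hooks run around the original method body. The sketch below expresses that idea in plain Java so it is self-contained and runnable without the PostSharp toolchain; the names FooAspect and doWork are illustrative, and in PostSharp the wrapping is generated by the compile-time weaver rather than written by hand.

```java
// Hand-rolled equivalent of what an OnMethodBoundaryAspect amounts to:
// the aspect's hooks run around the target method's body.
class FooAspect {
    void onEntry(String method) {
        System.out.println("enter " + method);
    }
    void onSuccess(String method, Object result) {
        System.out.println("ok " + method + " -> " + result);
    }
    void onException(String method, Throwable t) {
        System.out.println("fail " + method + ": " + t);
    }
}

class Demo {
    static final FooAspect aspect = new FooAspect();

    // After weaving, a method decorated with the aspect behaves like this:
    static int doWork(int x) {
        aspect.onEntry("doWork");
        try {
            int result = 100 / x;              // the original method body
            aspect.onSuccess("doWork", result);
            return result;
        } catch (RuntimeException e) {
            aspect.onException("doWork", e);
            throw e;                           // the exception still propagates
        }
    }

    public static void main(String[] args) {
        doWork(4);
        try {
            doWork(0);
        } catch (ArithmeticException ignored) {
        }
    }
}
```

Calling doWork(4) runs onEntry and onSuccess; doWork(0) runs onEntry and onException before the ArithmeticException propagates to the caller.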

Code Injection

This feature allows us to add code to our classes. Using a custom attribute, or from AssemblyInfo.cs, we can inject specific code into a class: for example, we can specify that a class implements a specific interface, or inject a specific method or property. In the example below we inject a property into the Student class:

[Serializable]
public class FooAOPAspect : InstanceLevelAspect
{
    [IntroduceMember]
    public string FirstName { get; set; }
}

[FooAOPAspect]
public class Student
{
}

The IL code that will be generated for Student will contain the FirstName property.

INotifyPropertyChanged

When we are working on a desktop or native application, we need to use this interface to be able to receive notifications when the value of a property changes (on the UI or in code). Usually this is done by implementing the interface; to avoid over-complicating the code, a base class is created when the notification functionality is added. For a small codebase this is acceptable, but in a complicated application you would add a lot of duplicate code to support this feature. PostSharp resolves this problem for us with the NotifyPropertyChangedAttribute. Once we add this attribute to our class, we no longer need to care about notifications; PostSharp takes care of the rest.

[NotifyPropertyChanged]
public class StudentEdit
{
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public string FullName
    {
        get { return FirstName + LastName; }
    }

    public string Email { get; set; }
}

One thing that I liked was the Transitive Dependencies support. This means that if the FirstName value changes, a notification is also triggered for FullName. If we want to add a specific attribute to all the classes in a namespace, we can do this pretty easily by using attribute multicasting. This is done directly in the AssemblyInfo.cs file and allows us to specify the classes to which a given attribute should be added. We have the possibility to add filters, exclude specific classes and so on. Of course, this multicasting setup can also be done directly from code, using IAspectProvider. The last thing that you should know is that there are also other attributes that can be used to ignore a specific property or to handle notifications in a custom way.

Code Contracts

As the name tells us, Code Contracts give us the possibility to define a contract, at code level, between the caller and the method or property that is called. In this way the input validation no longer needs to be written as a custom IF. It is pretty similar to the custom validation that can be done through ActionFilter and validation attributes in MVC, for example. The advantage is that we can define this contract at any level, for example when we are exposing a library API.

The simplest example is NULL checks. Usually, when we need a NULL check, we add an IF to our method or property and throw an exception when the value is NULL. The same thing can be done using the Required attribute. See the example below:

public class Student
{
    public void SetLastName([Required] string newLastName)
    {
        ...
    }
}

Without PostSharp, we would need to check the value of the input in the body of the method and throw an exception ourselves. Imagine writing the same code 1,000 times. This kind of attribute can also be used at property or field level. One interesting thing happens when we use it at field level: no matter where the value is set from (for example, from another method), the NULL check will still be made.

public class Student
{
    [Required]
    private string _lastName = "Default";

    public void SetLastName(string newLastName)
    {
        _lastName = newLastName;
    }

    public string LastName
    {
        get { return _lastName; }
        set { _lastName = value; }
    }

    public void SetFullName(string newFullName)
    {
        ...
        _lastName = lastName;
    }
}

A part of the default validation actions is already defined. At any time we can define our own custom validation by implementing ILocationValidationAspect; it has a ValidateValue method that we need to implement, where we can do our custom validation.

Of course, all these features could be implemented by ourselves, but PostSharp offers them out of the box. It is a great AOP stack, used worldwide, extremely mature and solid.
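To see concretely what such a contract saves us from, here is the manual guard it replaces, written in plain Java so it stands alone (the Student and setLastName names mirror the article's C# example; Objects.requireNonNull is the standard-library form of the same IF-and-throw pattern):

```java
import java.util.Objects;

class Student {
    private String lastName = "Default";

    // The hand-written contract: every setter must repeat this guard.
    // A [Required]-style aspect generates the equivalent check centrally,
    // so the guard is written once instead of 1,000 times.
    public void setLastName(String newLastName) {
        this.lastName = Objects.requireNonNull(
                newLastName, "lastName must not be null");
    }

    public String getLastName() {
        return lastName;
    }
}
```

Objects.requireNonNull throws a NullPointerException with the supplied message when the argument is null, which is the behavior the attribute-driven check provides without the per-method boilerplate.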

Other features

There are other great features that we have not discussed yet, from the ones that allow us to intercept events, to composite aspects, code injection, exception handling, security, object persistence and many more. Another feature that I like in PostSharp is the ability to specify that an interface can only be used, not implemented, by other assemblies. I invite all of you to visit the PostSharp web site and try it.

Licensing and Costs

There are 3 types of PostSharp licenses. For individual use, you can successfully use the Express version, which is great when you want to learn and understand how PostSharp works. In addition to Express, there are Professional and Ultimate, which come with other features that can be used with success in production. Don't forget that the licensing model is per developer, not per product developed. This means that you can use the same license for 1 or 100 projects.

Radu Vunvulea
Senior Software Engineer @ iQuest



Securing Opensource Code via Static Analysis (I)


Static code analysis (SCA) is the analysis of computer programs that is performed without actually executing the programs, usually by using an automated tool. SCA has become an integral part of the software development life cycle and one of the first steps to detect and eliminate programming errors early in the software development stage. Although SCA tools are routinely used in proprietary software development environments to ensure software quality, applying such tools to the vast expanse of opensource code presents a forbidding albeit interesting challenge, especially when opensource code finds its way into commercial software. Although there have been recent efforts in this direction, in this paper we address this challenge to some extent by applying static analysis to a popular opensource project, the Linux kernel. We discuss the results of our analysis and, based on it, propose an alternate workflow that can be adopted when incorporating opensource software in a commercial software development process. Further, we discuss the benefits and the challenges faced while adopting the proposed alternate workflow.

Keywords: Static Code Analysis; Opensource; Software Testing; Software Development Life Cycle

Although SCA is not a new technology, of late it has been receiving a lot of attention and is rapidly being adopted as an integral activity in the Software Development Lifecycle to improve the quality, reliability and security of software. Most proprietary software development establishments have a dedicated team of software validation professionals whose job responsibilities include running various static analysis tools on the software under development, either in an agile environment or in a traditional waterfall model. SCA tools are capable of analyzing code developed using several programming languages and frameworks, including but not limited to Java, .NET and C/C++.
Whether the goal is to develop application software or mission-critical embedded firmware, static analysis tools have proved to be one of the first steps towards identifying and eliminating software bugs and are critical to the overall success of a software project. Having said that, it is clear that there are typically commercial interests and/or security issues involved in licensing and exhaustively applying expensive static analysis tools in software development efforts with very little or no scope for failure. What then remains as an expansive, unexplored void are the endless realms of opensource code, which is usually assumed to be well reviewed, since opensource software has been used and reused in countless applications by numerous


academic and commercial institutions across the globe. “Expansive” indicates that new source code is continually being added, and “unexplored” indicates the absence of a dedicated, unbiased entity to statically analyze every line of code that is opensourced. Of late, SCA companies have taken the initiative in this direction. But even then, the rate at which newer versions of opensource software get released or updated presents a barrier to such efforts. Although many would attribute such an effort to paranoia, it is evident that the financial meltdown and shrinking IT budgets have led to opensource software entering commercial applications, for example Linux, Apache, MySQL or OpenSSL.


Figure 1. Usual software development process segment incorporating static analysis

The conventional workflow, where the SCA tool's output is included in the formal software review package, is shown in figure 1. In most software development and testing institutions, although the software product as a whole is subjected to dynamic analysis, like fuzzing or black-box testing, and newly developed code is subjected to stringent static analysis and review, opensource code incorporated in the software is usually not subjected to the same stringent static analysis and review as newly developed proprietary code, as depicted in figure 1. Usually, a binary of the necessary opensource software is incorporated in the complete software package. This may be based on the assumption that opensource software is more secure and less buggy than closed source software: because the source is freely available, lots of people will look for security flaws and other software bugs in it, in a way that is not going to happen in the commercial world. Despite the fact that this assumption has withstood the test of time and probably cannot be proven wrong, it would still be an interesting exercise to apply static analysis tools to opensource code and analyze the results. Although there have been previous efforts to statically analyze opensource code using commercial or opensource SCA tools, in this paper we seek to verify whether certain code issues (or bugs) can be detected early on by SCA. Further, we propose an alternate workflow that can be adopted

while incorporating opensource code in a commercial software development process. We discuss the benefits, challenges and possible trade-offs of adopting the proposed alternate workflow. Such an effort will be of interest to both the software engineering community and the opensource community in general. In order to experiment, we pick Klocwork Insight as our tool of choice, since it is one of the industry leaders in SCA. We pick a popular opensource project, the Linux kernel, for our code analysis. We then proceed to run Klocwork Insight against the Linux kernel code and discuss the results of our analysis. In general, Klocwork is a representative SCA tool and Linux is a representative opensource project for our analysis, which can be extended to other SCA tools and other opensource projects.

Paper organization

The rest of the paper is organized as follows. Section 2 provides the background for choosing the Linux kernel code for analysis and for SCA using Klocwork Insight. Section 3 discusses the results of the SCA. In Section 4, we discuss an alternate workflow that can be followed when incorporating opensource software in a commercial software development process, together with the benefits and the challenges of following it. Finally, in Section 5 we conclude by summarizing the important observations.

PRELIMINARIES

A. Linux kernel

The Linux kernel is an operating system kernel used by the Linux family of Unix-like operating systems. It is one of the most prominent examples of free and open source software and hence a natural choice for our analysis. The Linux kernel is released under the GNU General Public License version 2 (GPLv2) and is developed by contributors worldwide. Day-to-day development discussions take place on the Linux kernel mailing list. The Linux kernel has received contributions from thousands of programmers, and many Linux distributions have been released based upon it [8]. We chose the Linux kernel version released in February 2010 for our analysis.

B. Static Code Analysis

The unique benefit of static analysis is its ability to scan complete codebases to identify logic and security bugs. It is much more comprehensive in reach than black-box systems testing. However, some issues are runtime dependent and can only be found by actually executing code, so static analysis cannot stand alone.

A representative effort-benefit curve for using a SCA tool is shown in figure 2. It is observed that as more checks are enforced, the fraction of errors detected increases, along with an increase in the amount of effort required to enforce these checks. Typical compilers such as gcc incorporate static analysis to a certain extent, in the form of warnings or errors reported during the compilation process. These generally require the least amount of effort and likewise report the least number of bugs. On the other hand, formal verification of software, for example using model checking, is a much more complex approach towards the automation of SCA, requiring a greater amount of effort but capable of detecting a greater number of bugs of higher complexity. Although formal verification is not extensively adopted in the software industry, due to the law of diminishing returns, there are several operating systems that are formally verified, such as NICTA's Secure Embedded L4 microkernel [6].

Figure 2. Typical Effort-Benefit curve for using SCA tools [1]

In our analysis, we use the Klocwork Insight tool to perform SCA. Klocwork Insight is a SCA tool that is used to identify quality and security issues for C, C++, Java and C#. The product includes numerous desktop plug-ins for developers, an architecture analysis tool, and metrics and reporting. It is supported on both MS Windows and Linux OS based platforms [3]. Most SCA tool checks typically include unused declarations, type inconsistencies, use before definition, unreachable code, ignored return values, execution paths with no return, likely infinite loops, and fall-through cases. Checking
is configurable and can also be customized to select which classes of errors are reported. Furthermore, custom checks can be created by the user to find specific conditions in a specific codebase. Usually, the SCA tool's usage rules configuration file can be edited to capture specific issues in the source code. For example, the Klocwork usage rules configuration file can be updated with checker rules to flag the usage of banned APIs from the list of Microsoft Security Development Lifecycle (SDL) banned APIs (banned.h). This flexibility to edit the configuration file allows more powerful or project-specific checks to be incorporated in the SCA results. As more effort is put into tuning the SCA tool to the codebase and the native compiler, the checking improves.

In general, most SCA tools are designed to be flexible and allow programmers or quality analysts to select appropriate points on the effort-benefit curve for particular projects. As different checks are turned on, the number of bugs that can be detected increases dramatically, which can also result in more false positives. False positives can be reduced by editing the checker rules in the configuration files, turning on the required checkers and turning off the ones that are not needed for the project. SCA tools can be made to better understand the semantics of a given function or method by configuring them to understand any special keywords used by the native compiler and by adding more information to the tool's knowledge base. Thus, users can define new rules and associated checks to extend the SCA tool's checking or to enforce application-specific properties.

In general, the automated build process incorporating SCA can be split into two stages. The first stage involves generating a 'build specification' file as part of the compile and link process. The second stage involves running the static analysis tool on the link output, i.e., the 'build specification' file, to generate static analysis reports.
Klocwork presents the results of static analysis via a browser window (e.g., Internet Explorer or Mozilla Firefox), whose user interface can be customized to a certain extent.

Raghudeep Kannavara
Security Researcher, Software and Services Group @ Intel USA

no. 24/June, 2014



Advantages of using Free software


Using free/libre software confers many advantages on the people and organizations doing so, especially if these users are technical (like software companies, for example). This article will discuss these advantages and dispel some of the myths floating around free/libre software.

What is Free/Libre software?

Free software is software which respects your (the users') freedoms. It is commonly written as free/libre (or colloquially "free as in freedom, not as in beer") to emphasize the fact that we're not talking about price. Indeed, free software can be sold commercially - more about this shortly. There are four essential software freedoms as defined by the Free Software Foundation, which we'll cover next to see why they are important for us as technologists.

Freedom 0

“The freedom to run the program, for any purpose”. Free software isn't partisan to any idea. Software is just a tool, and how you use it is your decision/responsibility/judgment call. In contrast, proprietary software frequently restricts how/when/where you can run it, and there is no easy way to get a “let me do whatever I want” license. Frequent restrictions in proprietary software include:
• the number of instances of the software you can run at any one time
• the number of machines you can install the software on (even if they aren't running concurrently)
• the number of times you can reinstall the software (after reinstalling your machine, for example)
• the number of CPU cores / amount of memory the program can take advantage of
• running benchmarks / performance tests against the software

Such restrictions are without any good practical reason (other than being a monopolistic practice to extract as much money as possible from the user) and severely distort a market landscape which should encourage competition. For example, how can you judge which database vendor to choose if nobody is allowed to publish any benchmarks? Needless to say, free software contains no such restrictions.

Freedom 1

“The freedom to study how the program works and change it, so it does your computing as you wish”. Access to the source code is a precondition for this. Source code is the ultimate source of truth about any software. Having the source code means that you can easily answer the most common questions which come up during the usage or integration of the software:
• is the software capable of doing X?
• how can I ask the software to do X?
• why aren't my steps achieving the desired results?

Compared to this, proprietary software only offers meek alternatives:
• you can read the documentation - but documentation has various levels of correctness and freshness. Source code is the ultimate documentation!
• you can contact support - besides being costly, there is a turn-around time of at least one day. Source code is right there when you need it
• you can reverse engineer the product - if you have the right tools and expertise. Also, doing so is time consuming and potentially illegal in many jurisdictions

Sidenote: some proprietary vendors offer access to (part of) their source code, usually under very stringent conditions. This only addresses one of the software freedoms (and probably even that one only partially, since vendors rarely give access to the complete source code and tools needed to build the system) and such software is still non-free.

Freedom 2

“The freedom to redistribute copies so you can help your neighbor”. Sharing the software is of benefit to everybody (in the long run):
• more people learn how to use the software (thus making them more valuable as employees)
• the barriers are removed from innovators

This provision guarantees that (free) software doesn't form a monopolistic cartel where you must always pay your dues (and probably the bigger your revenues get, the more money they will demand from you). A clear indication that proprietary software is not sustainable is the fact that the Internet runs on free software.

Freedom 3

“The freedom to distribute copies of your modified versions to others”. By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this. This freedom makes sure that others can step in when there is a market opportunity. If a proprietary product is discontinued, you have no choice but to stop using it (or use it without support and hope that it doesn't break). If the original developer(s) of a free software product can't or won't continue its development, others can step in and provide continued support. Multiple persons/organizations can even compete in offering support and/or development services. Contrast this with proprietary software, where vendors actively try to discourage others from providing support.

Advantages

Having talked about the four freedoms which define “Free Software”, let's turn our attention to the advantages it provides.

Transparency

Free software is transparent. You can check every claim which is being made about it. In contrast, with proprietary software you need to rely on marketing materials and other sources produced or approved by the vendor. Free software is also transparent about its future: because development is usually done in public for free software, you can easily judge the level of activity and the speed with which issues are addressed / new features are implemented. If the project is no longer maintained, you will have ample warning about it getting out-of-date and various options to solve the problem. Compare this with proprietary software, where the vendor can sunset a product on a whim, giving you no option but to migrate off of their solution.

Suitable level of support

With free software there is a sliding scale of support levels available vs. money invested. The usual possibilities for getting support when using free software are:
• searching on the Internet - free software tends to have a much more technical user base which frequently documents the solutions to commonly encountered problems. There is a much bigger chance of finding the solution by searching the Internet when using free software compared to proprietary solutions
• asking for help on the project's mailing list / forum - free software is created by dedicated and enthusiastic people and they can help you much better than the first-level support provided by software vendors. Plus, this is still free
• you can hire one of the developers part-time or full-time - if there are multiple developers, you can take your pick
• you can hire a company which provides support for the product - again, the license allows for competition in this space, so for popular projects there will be multiple options to choose from

Easy to find knowledgeable people

Because development is done in public, you can easily find contributors who might be interested in working for you. And you can judge their competence directly (based on how they contribute to the project / community) rather than having to use uncertain proxies like certifications.

It is a truly capitalist endeavour

Capitalism / markets produce the best results in environments where information is readily available to all the participants. Free software creates such an environment. Coupled with the near-zero cost of transporting information in today's age, we get very quick evolution of projects.

You can offload maintenance

It's possible that you find a (free) software package which does 99% of what you need but the final 1% is missing. This being free software, you can take the source code and add the final 1%. But even more, if you give your contributions back to the original projects, they will incorporate them and maintain them from then on. This means that if there are any major changes (like the API changes in v2 of the library, for example), you don't have to spend your time re-merging the changes - probably somebody else from the community will do it.

Simplified license management

Keeping up to date with software licenses is notoriously hard to do. Failing to do so can result in lost productivity (why can't I use the software? Oh, Bob is running it and we only have one license - but Bob is on a coffee break and I have to wait for him to return!) and even legal liability (are you sure you have a valid license for each and every copy of the software on the 100+ devices of your company?). Free software, in contrast, means simple licensing terms, and you can be confident that you're in compliance.

Being part of a community

Last but not least, when using and contributing to free software you are part of a community. You interact with like-minded people. You collaborate with persons from all over the world. You have the opportunity to learn from the best!

Facts

Having covered the advantages of using / contributing to free software, here are a couple more facts about it:

You don't have to give your source code to everyone

Free software only asks that you provide your users the option to get the source code (in a format which can be used to reproduce the final working product). This means that you don't need to give the source code to any random person asking for it, just to users. Also:
• if you are using code under the LGPL and incorporate it into a larger system (for example, you use an LGPL-licensed library), you only need to provide source code for the modifications done to the library, not the entire system
• if you are using GPL-licensed code but it isn't vital to the functioning of the system (it is a plugin, for example), you only need to provide source for any modifications done to it, not the entire system
• if you are using GPL-licensed code to create your product but don't ship that code with the final binary (for example, you use GCC to compile your source code), you do not need to provide the source code of your product
• if you are using GPL-licensed code internally and you are shipping the result over the network (you have some kind of SaaS offering, for example), you do not need to provide the source code. You only need to give the source code to the person actually running the binary, which in this case is you.

While you don't need to provide source code in any of the above cases, it is still a good idea to do so. Most businesses are not in the “software” business; they only use software to speed things up (for example, the main value of a financial broker isn't the software platform, but rather the negotiated deals it has with many market players). Giving the source code to your customers in such instances only assures them that their risk level is reduced, and they will still come to you because they chose you primarily for your business relations.

Free software can be sold

Free means libre - “free as in freedom”, not free as in price. Nothing in the free software licenses precludes you from selling the final product.

Attila-Mihaly Balazs

Code Wrangler @ Udacity
Trainer @ Tora Trading



Why Invest in Professional Management?


Recently, the macroeconomic forecasts have seemed rather positive for Romania. The latest news along these lines talks about one of the greatest economic boosts in Europe, about the significant enhancement of the country rating by one of the most important global rating agencies and about a new law regarding the non-taxation of profits that are reinvested into technology.

I personally believe that all this positive news and these forecasts will have an intense effect of accelerating investments in Romania. The only factor that might stop or even contradict the positive forecasts would be an escalation of the conflict in Ukraine. In the hope that such a thing will not actually happen, and taking the positive perspectives we have talked about as a hypothesis, I would like to draw your attention to the investment that each company should make in professional management.

Unfortunately, as I have mentioned several times until now, management positions in Romanian companies have rather been a way of rewarding people who had excellent results in other domains (sales, administration, financial, etc.), instead of representing the backbone of the functioning of the organization. Companies would focus all their efforts on sales, purchases or investments in technology and would largely neglect sustained investment in the creation of a performant management structure. The only signs of such a concern were related to sending some people to courses or trainings, actions which were also seen as a reward or a way to align with what the others were doing, and less as a minded and sustained investment in the creation of an organizational structure which could provide consistency to the company's results.

I know that from the financial-accounting perspective, or even from the perspective of managerial finances, the financial resources used for creating a professional management structure are not seen as investments, but rather as costs. Any way we put it, it is compulsory for us to allot


time, effort and financial resources to the creation of such structures within our organizations. Failing that, the benefits from the positive influences of the macro-environment can quickly vanish, the investments we make in technology or other domains will not be adequately turned to profit, and the organizational climate can deteriorate, with negative effects on the long-term perception of the organization.

I know it is a difficult thing to do, when some organizations can hardly find the necessary resources for their current working capital, both financial and human. To allot an important share of resources of time, effort and money to management seems rather difficult, all the more so as professional management is sometimes hard to identify, being seen rather as something abstract which depends a lot on perception and context. Professional managers know things are not like that. For all the others, I would like to point out the things that investment in professional management can bring to the company: more stability in the medium and long term, higher predictability of results, harmonious development and efficiency with no stress or unnecessary risks, much easier acquisition of financial and quality human resources, respect from the competition, the employees and partners, etc.

Therefore, whenever you make development plans, do not forget to allot an important part of your time, effort and money to the creation of a professional management system in your organization. Any investment in professional managers - done through trainings or coaching systems, management structures, tools or systems, courses or management consultancy - will surely pay off. A professional manager will be able to prove this even before the investment is made.

Dan Schipor
Management Partner @ Casa de management Daris




A NEW STATE AID SCHEME FOR THE IT SECTOR
A grant of 50% of wage costs for 2 years, with at least 20 new jobs created


The Ministry of Finance, through the Department of State Aid, will run from the 1st of July a new state aid scheme that will be valid until 31 December 2020 and will have a total budget of about 600 million euros, with the possibility of supplementing it. The estimated total number of firms which will benefit from the state aid is 1,500.

Eligible expenses

Salary costs recorded for a period of 2 consecutive years following the creation of the jobs are considered eligible expenses. State aid for the eligible expenses is granted under the following conditions:
• Jobs are directly created by an investment project; no minimum investment is required, but the project has to start after the grant agreement is issued.
• Jobs are created after the receipt of the funding agreement, but no later than three years from the date of completion of the investment. It is possible to submit a project for multiple locations, provided that at least 20 jobs are created per location.
• Jobs are considered newly created if there has been no employment relationship between the employees and the employer or its associated companies in the 12 months prior to the filing of the financing agreement.

Eligible companies

Both existing and newly created enterprises can apply. Existing enterprises must have had a good financial situation in 2013: positive equity and a profitability of turnover greater than or equal to 1%. New enterprises are eligible only if the shareholders are not carrying out, and have not carried out in the past two years, the activity for which they are applying for funding through other companies.

Eligible activities The following activities are eligible: publishing activities, software production (582), information technology service activities (620), computer service activities (63). Activities such as: agriculture, trade, hotels, restaurants, transportation, gaming, telecommunications, rent, lease etc. are excluded.

Non-repayable aid

The gross intensity of the regional aid is calculated relative to the eligible expenditure. Beneficiaries may request payment of the grant four times a year, strictly in relation to the salary costs incurred and in accordance with the approved project; refunds will be made within 45 working days from the date the refund application is considered complete.

Work procedure

The program will be active from the 1st of July this year. The preliminary documentation (the calendar of the creation of jobs, tax certificates, etc.) shall be submitted during a 20-day submission session, based on which a score is given to the company. The criteria for calculating the score for enterprises requesting state aid are: the number of new jobs created (at least 20), the location of the investment, the profitability of turnover and the value of the issued share capital.

Companies that qualify will submit a complete business plan, and of these, those eligible will receive a grant agreement. Companies will have to wait about 6 months for the issue of the funding agreements; therefore, they will have to plan to start both the project investment and the creation of new jobs thereafter. Small and medium enterprises are required to retain the new employees for 3 years from the date the jobs are created, and large companies for 5 years.

Roxana Mircea
Management Partner @ REI FINANCE ADVISORS S.R.L.



Options to prevent others from taking advantage of your trademark


Any entrepreneur (the IT domain not excepted) knows how difficult it is to build a brand and how important it is in a business – a prestigious trademark on the market requires time, energy and money invested in advertising, promotion, etc. The brand plays an important role in the marketing strategy of any company and it contributes to the consolidation of its image and fame. For some companies, it even represents their most valuable asset. Therefore, you will be prejudiced when your brand is illegally used by other people – mainly because of the risk of confusion that may be generated in the perception of customers (customers should be able to easily differentiate between identical or resembling products or services). Here are a few options at hand for the owner of a trademark, in order to reduce the risk of possible disputes in court caused by the violation of their brand.

Brand Monitoring

Nowadays, each new request to register a brand is electronically published in an official newsletter of OSIM. Within a period of two months from the publication, concerned parties are allowed to take a stand against the respective registration. Thus, during this period, as the owner of a trademark that is protected (also) on the territory of Romania, you have the possibility to oppose it, if you consider that the registration of the new trademark could harm yours. So, if you are not diligent and you do not object within the deadline stipulated by the law, you may be taken by surprise by the fact that one of your competitors has managed to register a brand that is similar or even identical to yours.

Basically, this involves monthly monitoring of the newsletters in which OSIM publishes the new trademark requests and checking, among others, whether there is one which is identical or similar to your brand. If you do not have the necessary resources (time, patience, competence, etc.) to continuously monitor the new trademark requests submitted to OSIM and to establish which of them might conflict with your trademark, you can choose a monitoring service provided by advisers authorized in the domain of trademarks.

Notifications to discontinue

If you found out that another person is using your brand on the market without your consent, you have the classical option of bringing the case to trial, or you may even notify the criminal prosecution body. As a cheaper and quicker alternative, you can send a formal notification requesting the respective person to no longer breach your trademark right (the so-called Cease and Desist Letter from Anglo-Saxon practice). Thus, you can inform them of the possible conflict and ask them not to use the respective brand anymore (under the threat of a lawsuit), or you can suggest that they negotiate a co-existence agreement for the two brands.

The content of such a notification varies from case to case, depending on the actual situation, and it involves invoking some legal texts and pertinent arguments. Therefore, it is advisable to be assisted by a specialist who can help you write the notification in a manner as formal and as official as possible and document it correspondingly.

These notifications do not always have the anticipated effect; they don't always do the trick. In practice, there are plenty of cases when the notifications are ignored and the breaches go on untrammeled. What can be done in this case? There are other solutions, depending on the case: arbitration, mediation, proceedings in the competent court of law, frontier laws (to prevent the import of counterfeit products). However, despite what you may have heard, arbitration in Romania is not necessarily less costly and quicker than the court of law. And mediation has benefits only if the opponent is also of good faith and wants the two of you to find an amiable way to extinguish the conflict together.

Brand protection on the Internet. The domain name

In practice, it is considered that an Internet domain name including a name that is resembling or identical to that of a brand may violate the rights of the owner of that brand – due to the risk of confusion among the clients. Therefore, for an entrepreneur who cares about the uniqueness of his brand, the monitoring of new Internet domain names may prove useful. There are cases, not few, when the entrepreneur has not paid enough attention to the online environment and may find out that his trademark is already registered by someone else as a domain name: either by cybersquatters – namely those who in bad faith register domain names containing brands owned by different companies with the purpose of reselling them later – or by traders who sell the same type of products as the owner of the brand.

Until recently, most of the time, there has been a “war” going on over the .ro and .com extensions – either by court proceedings or by arbitration with the WIPO Arbitration and Mediation Centre. But there are also other reasons for concern in the context of the emergence of the new and varied domain names (gTLDs) which are about to be given into use soon by ICANN (possible examples of new gTLDs: .game, .software, .clothing, .shop, .car, .tourism, .kodak, etc.). Here you can find an infographic done by ICANN regarding the way in which owners can protect their brands in the online environment, following the “revolution” of the new domain names (for instance, by a registration with the Trademark Clearinghouse).

In conclusion, there is a series of possibilities to solve the unpleasant situations related to the violation of a trademark – it is up to you to find the one(s) you consider most appropriate to your specific situation.

Claudia Jelea
Lawyer & Trademark Counsel @ IP Boutique





Issue 24/June 2014 - Today Software Magazine