Issue 30 - December - Today Software Magazine


No. 30 • December 2014 • www.todaysoftmag.ro • www.todaysoftmag.com

TSM

TODAY SOFTWARE MAGAZINE

Machine Learning in Practice

On testing, scientific discoveries and revolutions
Common branching models and branching by abstraction
Performance testing from Waterfall to Agile
Atlassian JIRA REST API
Startup files: MyDog
The pursuit of Engagement in the Software Industry

On Entrepreneurship, at the End of the Year
JDK 9 – A Letter to Santa?!
Human software assessments: Interview with Tudor Gîrba
Using OSEK/VDX-compliant operating systems in embedded projects
To Be Or Not To Be Responsive



6 IT Days - image of Cluj IT • Ovidiu Măţan

8 On Entrepreneurship, at the End of the Year • Andrei Kelemen

10 Startup files: MyDog • Radu Popovici

12 Axosuits wins How to Web Startup Spotlight 2014 • Irina Scarlat

14 Human software assessments: Interview with Tudor Gîrba • Ovidiu Măţan

16 On testing, scientific discoveries and revolutions • Alexandra Casapu

20 Performance testing from Waterfall to Agile • Claudiu Gorcan

22 Common branching models and branching by abstraction • Anghel Conțiu

25 Using OSEK/VDX-compliant operating systems in embedded projects • Mircea Pătraș-Ciceu

28 Save user’s time with a well designed interface • Axente Paul

31 From Design to Development • Vlad Derdeicea

33 To Be Or Not To Be Responsive • Raul Rene Lepsa

35 JDK 9 – A Letter to Santa?! • Olimpiu Pop

37 Machine Learning in Practice • Sergiu Indrie

41 Delivering delight • Sebastian Botiș

41 Atlassian JIRA REST API • Dănuț Chindriș

45 Sun Tzu’s Art of War and Information Era • Liviu Ştefăniţă Baiu

48 The pursuit of Engagement in the Software Industry • Victor Gavronski

50 Network Services of Microsoft Azure • Radu Vunvulea


editorial

Ovidiu Măţan

ovidiu.matan@todaysoftmag.com Editor-in-chief Today Software Magazine

Now, when the year 2014 is almost over, I wish to thank our readers for being with us online, as well as at the magazine release events and the events of our partners. 2014 has been significant for the maturity of our magazine: attendance at our release events has grown steadily, and online, across the Romanian and English versions, we have an average of 7000 readers monthly. During IT Days, the number exceeded 10000 readers. This year has also meant a lot to us through the meetings with the readers of our magazine at the special release events that took place in Bucharest, Timisoara, Brasov, Iasi and Targu Mures. An important project was the development of the new visual identity of our websites, www.todaysoftmag.ro and www.todaysoftmag.com. In 2015, we are going to continue this development by launching a job page for companies and the community, with special support for the start-ups that wish to find collaborators. Romanian society in the IT area has evolved greatly within the last year; we have almost all the ingredients for the ecosystem that will produce the next generation of software products and, why not, hardware. We wish you good luck and we assure you that you will find in Today Software Magazine a trustworthy partner and promoter.

We are beginning this issue with an article on entrepreneurship at the end of the year, after which we continue with MyDog, the second article in the file series dedicated to startups. We are also publishing two interesting interviews: the first with Tudor Gîrba, on metrics and the evaluation of software systems, and the second with consultant Michael Bolton, on testing, but not only. We begin the series of technical articles with Performance Testing from Waterfall to Agile and The Risks of Branching; then we go on with embedded systems.

Web design is well represented in this issue through a series of articles on the subject: Save user’s time with a well-designed interface, From Design to Development and To Be or Not To Be Responsive. Since Christmas is near, we invite you to review the most important JEPs (JDK Enhancement Proposals) in JDK 9 – A Letter to Santa?! The list can go on and we are inviting you to read them all!

Enjoy your reading!

Ovidiu Măţan

Founder of Today Software Magazine

4

no. 30/2014, www.todaysoftmag.com


Editorial Staff

Editor-in-chief: Ovidiu Mățan ovidiu.matan@todaysoftmag.com
Editor (startups & interviews): TBD marius.mornea@todaysoftmag.com
Graphic designer: Dan Hădărău dan.hadarau@todaysoftmag.com
Copyright/Proofreader: Emilia Toma emilia.toma@todaysoftmag.com
Translator: Roxana Elena roxana.elena@todaysoftmag.com
Reviewer: Tavi Bolog tavi.bolog@todaysoftmag.com
Accountant: Delia Coman delia.coman@todaysoftmag.com

Authors list

• Liviu Ştefăniţă Baiu, Senior Business Consultant @ Endava, liviu.baiu@endava.com
• Claudiu Gorcan, Senior Delivery Service Engineer @ Betfair, Claudiu.Gorgan@betfair.com
• Alexandra Casapu, Software Tester @ Altom Consulting, alexandra.casapu@altom.ro
• Irina Scarlat, PR Manager @ How to Web & TechHub Bucharest, irina.scarlat@howtoweb.co
• Anghel Conțiu, Design Lead @ Endava, Anghel.Contiu@endava.com
• Olimpiu Pop, Senior Software Developer @ Ullink, olimpiu.pop@ullink.com
• Vlad Derdeicea, Lead Graphic Designer @ Subsign, office@subsign.co
• Sebastian Botiș, Delivery Manager @ Endava, Sebastian.Botis@endava.com
• Andrei Kelemen, Executive Director @ IT Cluster, andrei.kelemen@clujit.ro
• Axente Paul, Senior UX Engineer @ 3Pillar Global, paul.axente@3pillarglobal.com
• Radu Vunvulea, Senior Software Engineer @ iQuest, Radu.Vunvulea@iquestgroup.com
• Dănuț Chindriș, Java Developer @ Elektrobit Automotive, danut.chindris@elektrobit.com
• Victor Gavronski, Managing Director @ Loopaa, victor.gavronschi@loopaa.ro
• Radu Popovici, Associate @ Gemini Solutions Foundry, radu.popovici@geminisols.com
• Raul Rene Lepsa, UI Developer @ SF AppWorks
• Mircea Pătraș-Ciceu, C++ Developer @ AROBS, mircea.patras@arobs.com
• Sergiu Indrie, Software Engineer @ HP, sergiu-mircea.indrie@hp.com

Made by Today Software Solutions SRL
str. Plopilor, nr. 75/77, Cluj-Napoca, Cluj, Romania
contact@todaysoftmag.com
www.todaysoftmag.com
www.facebook.com/todaysoftmag
twitter.com/todaysoftmag

ISSN 2285 – 3502
ISSN-L 2284 – 8207

Copyright Today Software Magazine. Any total or partial reproduction of these trademarks or logos, alone or integrated with other elements, without the express permission of the publisher, is prohibited and engages the responsibility of the user as defined by the Intellectual Property Code.


event

IT Days - image of Cluj IT

Ovidiu Măţan

ovidiu.matan@todaysoftmag.com Editor-in-chief Today Software Magazine

We have recently got through the second edition of Cluj IT Days, the annual event of Today Software Magazine. We believe we have succeeded in offering the over 200 participants present this year the opportunity to get a clearer overview of the IT scene in Cluj and of the latest trends in the local industry. A novelty of the event was the release of a book entitled How to build a product?, signed by nineteen authors, who also participated as speakers at the event. The book will also be available online on the site of the magazine and that of the event.

The event started with a session of presentations dedicated to courses of evolution and leadership at the local level. The tendencies of local companies such as Arobs, Accesa or Pitech+Plus, which were presented by their very founders/CEOs, share the same principle: the orientation towards the development of their own products, along with maintaining the quality of their outsourcing services. This is welcome and we hope to talk in our future editions about the new IT products from Cluj. Still in this first section, the participants had the opportunity to discover interesting things such as the Firefox OS operating system, the projects going on in the ICT Lab Budapest and the increase of the female presence in the IT world.

The technical session of the second part of the first day slowly made the transition towards more specialized topics. There were discussions on innovation, the impact of cultural differences, going as far as the Java language. Peter Lawrey, the top expert and special guest of this edition, talked in his first presentation about the ways in which we can improve the performance of Java applications, a subject of great interest among the local community. We will try to keep in touch with him in the future, by publishing his articles.

The presentations continued with subjects that varied from architecture and big data to enterprise solutions. There was even a trans-disciplinary approach which established correlations between the evolution of the user experience concept and the history of philosophy. Near the end, we discussed the opportunity of developing cultural applications such as “Statui de Daci” (Dacian Statues). We rounded the evening off with a glass of wine and a piece of cake. Since we are talking about an IT event, the participants were able to choose their favorite cake online, one day before. The cakes were waiting for them, with everybody’s name on them, together with a small present from HipMenu.

The atmosphere at the stands of the sponsors of the event was a relaxed one. The people from Accesa prepared a special coffee for those who would choose one of the personal development values within the company. Those from Yardi offered a surprise prize following a raffle, namely a GoPro. The visitors of the Accenture stand had the opportunity to try the applications developed by the company for Google Glass, to receive small gifts and to discover Industry 4.0. The well-known prize roulette of ISDC was also present at their stand, and those from 3Pillar Global were present in high glee and good spirits.

The second day opened with the keynote presentation of Peter Lawrey on the advantages and disadvantages of being a software consultant. The following subjects were startups, from the perspective of the ecosystem, and what is good to know if one wishes to obtain financing. After the first break, we learnt how an object is created in a 3D printer. We could also see and examine different objects produced by it. Afterwards, we discussed the possibility of involving software companies in startups, as well as the Industry 4.0 development tendencies. Another presentation proved why a mere idea is not sufficient for a successful startup.

After lunch, we were introduced to the research projects developed by the Technical University in Cluj. We saw research projects aiming at a better inter-connection of networks, followed by an impressive demo of voice processing. Basically, after processing approximately three hours of recordings of a person’s voice, one can synthesize very authentic-sounding speech from text. We concluded the series of research projects with a presentation of e-learning platforms and their contribution to the Romanian projects in this domain.

The pizza break of the second day brought a relaxed atmosphere and prepared the audience for the last part of the technical presentations. The first two had security as their main topic, being followed by a discussion on the testing strategies carried out by big companies such as Facebook. At the end of the second day, we talked about the design of a startup from the perspective of Microsoft technologies.

The exceptional host of this event was Dan Suciu, lecturer at Babes-Bolyai University and Director of Technical Training at 3Pillar Global. Though the introduction of each presentation during the two days represented a real challenge, he successfully accomplished his mission. We thank the companies which have supported us: ISDC, Accesa, Gemini Solutions, 3Pillar Global, Yardi, Accenture, Colors in projects, Mozaic Works, Telenav, Endava, Yonder, Betfair, SDL and Fortech. We thank our partners which helped us promote the event: Cluj IT Cluster, Starcelerate, ClujHub, JCI Cluj and Loopaa. Your opinions and feedback are very important to us and we are looking forward to your suggestions, which may turn the 2015 event into an even more interesting one, with valuable guests.



entrepreneurship

On Entrepreneurship, at the End of the Year

By the courtesy of Ovidiu (Matan, of course), I was invited to deliver a presentation at IT Days 2014. As I did not manage to go through everything during the time allotted to my presentation, the writing of this article gives me the opportunity to complete it. This is what I wanted to say by all means!

Entrepreneurship can no longer put technology aside nowadays, no matter the domain. We can imagine any business activity you like and, without much effort, we can attach a technological function to serve that activity. Moreover, technology has in time become a differentiating element, one which can ensure the competitive advantage necessary for success.

Nowadays, we register an unprecedented acceleration of the rhythm of socio-economic progress. Most people believe progress will continue to be linear, but it is no longer like that. Technological progress, or progress favoured by technology, is now exponential. As illustration, we have chosen a single example, but it is conclusive for its impact on everything: the project of deciphering the human genome took 20 years; during the first 15 years of the project, only a small part of the genome was deciphered (less than a third), and the big progress was made during the last five years, only due to the technological advance. What had seemed a failure turned into a great success. The presence of technology does not guarantee this success, but it seems to become an ingredient of “the salt in your food” type (if you know the story).

And since we are talking about food, is there an infallible recipe to guarantee our success among the people at our table? In other words, can we follow a certain strategy which, applied step by step, should lead us to the desired clients’ satisfaction? An obvious answer is NO; otherwise we would all be millionaires and we would release products and/or services in conferences broadcast all over the world. Afterwards, we would set up foundations to help those who haven’t put the “recipe” into practice. And then what?

I will not answer the question directly, but I will ask at least two others. Here they are: What are the other personal ingredients a (successful) entrepreneur needs? What about the societal ones? In the following lines, I will try to answer these questions.
The entrepreneurial spirit is what pushes us to initiate activities on our own, to make profit out of them, to get personal benefits and, possibly, group benefits. Entrepreneurship, namely the immediate effect of the entrepreneurial spirit, is perceived by many as the main factor in the world’s progress, and business as the blood within the global organism, which continued to circulate even during the critical world conflicts. There are plenty of examples where enemies (ideological or even in an armed conflict) continued their business relations. I have chosen but two, as they are the most representative:
• USA and Germany, during the Second World War (a famous example is Ford, with its factory in Germany, which, at a given moment, used French war prisoners as workforce!).
• Germany and USSR (Russia) during the Cold War (the USSR supplied the Federal Republic of Germany with gas, continuously; the FRG built and maintained the major plants for the exploitation of the Russian gas). The FRG has never suffered from cold, whereas the German Democratic Republic, theoretically an ideological ally of the USSR, has!

Assuming risks is an essential personal trait, without which entrepreneurship is not possible. The way in which we are aware of risk determines the course of our actions. It affects our personal life, our professional life and our interaction with society in general. Basically, the way in which we understand to take a risk upon ourselves is a personal feature, which can, of course, also be trained by education. However, its existence does not guarantee our success in business.

What is, in fact, the entrepreneurial spirit? Is it the same thing as the entrepreneurial culture? Let us first begin with the etymology of the word “entrepreneur”. The origin of the word is French and it used to designate a person who was between two workplaces, namely an unemployed person! It is interesting how a word referring to a person who had no job has come to mean, nowadays, a person who offers jobs.
The entrepreneurial spirit is defined in the field of study as the integration, in variable proportions, of certain individual characteristics, including without fail:
• Risk assuming, which I have already mentioned
• Unicity, which is not the same as
• Creativity
• Adaptability
• Knowledge of the business domain
• The wish to exploit the existing potential, and
• Self-destruction, namely that, in most cases, if other people are not put in charge of the initiated business (for management, growth, etc.), its creator, out of the powerful wish to engage in a new creation, will destroy the existing one.

Entrepreneurial culture is an assembly of tangible and intangible societal characteristics which favor the manifestation or, to put it differently, the materialization of the entrepreneurial spirit. In respect to the tangible or physical features, things are rather simple: we are talking here about infrastructure in particular, which can consist of ways of access, utilities, ground, etc. The intangible ones are historically represented by:
• the legal frame
• education / training
• individual mentality
• and society, focusing on two components: failure acceptance and trust

Next, we will make an inventory of these tangible and intangible features. This inventory will begin with those characteristics we do NOT have, to conclude with those we DO have, remaining thus in the positive area of things.
• There is no national policy regarding entrepreneurial education, meaning there are no systematic programs, of a wide scope or on a long term, with results, to encourage private initiative.
• We have no history, nor favorable memories (how many successful Romanian companies do you know to have a long history behind them?)
• We do not accept failure as something natural, as a step in learning; failure is strongly blamed by family members, friends and society in general.
• We do not have the tradition of mentorship, meaning that the people who have succeeded in business are not preoccupied to leave more than some material accumulations behind; this can also be explained by the fact that those who have succeeded haven’t always applied the cleanest methods of doing business and they wish this thing to remain unknown.
• We do not have the custom of association, of partnership.
• We do not have the capacity to introduce innovation / creativity in what we are doing, even though we say we are “inventive”. I will give you an ordinary example, from the same culinary area I started this article with. Let’s take a few ingredients: flour, cheese, basil, tomatoes. All of them were present in the harvest and farms of our ancestors, too. But who invented the most widespread kind of food in the world, pizza? You know the answer. And the list can surely go on.

However, what is it that we DO have? A possible answer would be the following:
• An emergent economy, which means a lot of opportunities.
• A growing body of people who wish to begin a business on their own, but still far from the most efficient economies (42nd place in the world and 25th in Europe, according to the Global Entrepreneurship Index).
• A direct connection with international information media and exposure to traditional entrepreneurial cultures; in this respect, Cluj IT is in a very good position.
• A young population (the average age is lower than in most of the European countries) that assimilates and quickly embraces the latest technologies or innovations. I have heard, in the context of the recent presidential elections, of the “facebook party”!

We must agree that the initiative / entrepreneurial spirit is no longer sufficient. Maybe it was enough 20 years ago. But nowadays, a certain degree of sophistication is necessary, which means:
• market research;
• knowledge of the domain (the higher the degree of business sophistication, the deeper the knowledge should be);
• demand creation / cultivation through marketing strategies suitable to the customer’s / client’s profile. This means knowing that profile, and that is where knowledge of other domains is necessary: besides knowledge of the business, one should also know sociology and psychology;
• work, dedication;
• a competitive environment (competition is good!);
• capital (preferably that of others, not because it is better to spend other people’s money, but because this is how you prove that others, too, trust in what you are doing).

But it is important to have an environment where our business can develop naturally. This environment is, in fact, the entrepreneurial culture. How can we accelerate the formation or consolidation of the entrepreneurial culture? A part of the answer is represented by clusters or poles of competitiveness (“pôle de compétitivité” for the francophone people among you). Clusters are value-chain structures of an industry, having an economic function, and they perform a strategic alignment on areas of common interest. This is possible since, from a certain level up, competition among companies (especially SMEs) no longer exists. Economists have called this phenomenon “coopetition” (cooperative competition).

Clusters, according to the definitions that brought the Nobel prize for economy 20 years ago (see prof. dr. Michael Porter, as well as the very interesting federal policy of the USA of sustaining clusterization – www.clustermapping.us), have a function of promoting the development of the ecosystem. Among the advantages of the development of the ecosystem, I would like to mention:
• the possibility of introducing new, innovating elements that constantly drive economic growth;
• the increase of competition, which leads to the increase of competitiveness;
• the propagation of prosperity through the wide spreading of successful business model(s).

We need innovative ecosystems, meaning those ecosystems where there is entrepreneurship capacity, but also social capacity (the capacity to generate innovation and to assimilate its effects). And Cluj is very well positioned from this point of view for technological business domains, since:
• there is a young population, with a high capacity of absorption / adoption;
• there are highly rated universities on the national, but also international level, which attract talents (e.g. UMF – over 1000 French students, and the trend is growing, including for other universities; even USAMV is beginning to have foreign students);
• there is a nascent critical mass which requires a different kind of development, one that is based on real competition, but, at the same time, with the desire of social growth at scale.

In other words, there are good premises for us and the next generations to create the history of Romanian entrepreneurship. Happy New Year and a good entrepreneurial year to those who wish it!

Andrei Kelemen

andrei.kelemen@clujit.ro Executive Director @ IT Cluster



startups

Startup files: MyDog

In the apartment where I grew up, we had no pets, though my sister and I would ask our parents from time to time to keep a dog or a cat. This made me long even more for the summers spent at my grandma’s. And it was well known: when the summer holiday started, both my sister and I would get off our parents’ back and move to our grandparents’. Over there, our cousins would join us during the day and the fun was guaranteed. Coca (a nickname given to our grandma by my sister) lived in a house with a courtyard where two or three dogs used to run, always playful. Needless to say, they didn’t always have a well-established path and they often treaded on the flowers she took care of daily. Thus, all day long, the courtyard was full of a bunch of kids running to and fro, accompanied all the way by their quadruped friends.

Coming back to our days, we have recently decided to adopt a dog. And if, looking back, our greatest concern back then was a rainy day, because we had to move our camp between four walls, I now look upon this aspect with a different eye and I understand that a dog’s adoption comes with a series of responsibilities we cannot neglect. I am talking here about the medical aspect (vaccinations and recurrent examinations), periodical care (nail trimming, grooming, washing), training, food and many other aspects regarding the welfare of the pet. So, I began to research and read about other people’s experiences with the different features of canine breeds. I would like to mention below some of the things I found relevant before the adoption:
• What breeds are more suitable for an apartment, as compared to those which necessarily need a big space to run.
• What breeds are easier to train, as compared to those rebel dogs who won’t listen to you no matter what.
• What illnesses they are inclined to develop.
• Approximately what recurrent investment I should expect.

The internet is full of such details when you know what you are looking for, but the information is scattered across sites / domains / forums. Moreover, I began discussing with different people who had completed their families with a dog and I found out some very general details; however, my circle of friends and acquaintances could only help me up to a certain point. I would have liked to be able to talk to someone who has the kind of dog I have always liked (Labrador Retriever). And approaching people on the street doesn’t seem such a good idea in this respect. A solution of the MyDog type would have helped me greatly at that particular moment, since I believe that this research step is very important, too. Otherwise, we end up in situations (which, unfortunately, we still see quite often) where the owner realizes he took on more than he is able to bear and prefers to find another owner or, even worse, to abandon the dog.

And, as I’ve mentioned before, this is when responsibilities begin. They vary from health and periodical care to education and well-being. How many of you know the vets in your neighborhood? Or what services they offer? How many forums do you have to access till you find an up-to-date thread comparing the different types of dog food (in case you haven’t already discussed it with your vet to get a recommendation)?

One last aspect I would like to draw your attention to is socialization. Just like us, man’s best friend also needs to socialize with other dogs, and this from a very young age, in order not to become aggressive or fearful when meeting other dogs. In this case, socialization only happens in the real environment, either by getting to know new friends in the park or by meeting dogs which accompany the owner’s friends, whom he/she chooses to walk with in the morning or evening.

We cannot (yet) speak of an IT solution dedicated exclusively to dogs. However, a solution which addresses the owners is an entirely different discussion. Dan Damian, the founder of MyDog, joins us today with such a concept. This is not Dan’s first entrepreneurial project. He started CodeSphere six years ago, together with two other associates. CodeSphere is a provider of software services, which has grown nicely during all this time. But now, Dan wishes to change course a little and focus on the area of intellectual property generation.



[Radu] What does MyDog represent, in fact?
[Dan] MyDog is a socialization network dedicated to dog owners and lovers. You can register with your dog on MyDog in a few seconds; you get a profile page where you can show off pictures of your friend and post the most interesting moments; there is a wall where you can see what your friends have posted and, last but not least, you can search for other dogs according to their breed, sex, age and location. MyDog was launched in Romania on the 15th of November and up till now it has gathered over 1500 members, proving that there was a real need for this.

[Radu] How did this idea start?
[Dan] I’ve had an Amstaff for almost 4 years now and many times I have thought about finding a girlfriend for him. Moreover, I wished I could “check in” to a kennel or see who is already there, so that I could avoid them when they were too crowded. I also wanted to have access to a list of the nearby veterinary clinics, or to be able to find a dog trainer. And, if I recall well, before having Sawyer, I was oscillating between two breeds: Shar-Pei and American Staffordshire Terrier (Amstaff), and it would have been good to find someone to ask, to be able to talk to somebody who had had both breeds for some time and find out directly what the advantages and disadvantages of each were. I searched for a site or a mobile application which could help me in this respect, but I couldn’t find anything satisfactory. After three months of validations and market research, we decided to proceed with the development.

[Radu] You have used a rather unusual extension for the domain. Why .xyz?
[Dan] I started my research with the name of the application well established – mydog – and what was left for me to do was to vary the extension only. Obviously, .com was already bought and so were many other usual extensions. Eventually, I discovered the new .xyz extension. Its novelty, strangeness and non-conformism struck me. I realized it would be easy to remember and would represent a major branding advantage; so, here is mydog.xyz.

[Radu] Before the launch, you presented the project in the third edition of the Foundry Conferences. How did it go?
[Dan] It was a very useful experience. The dialogue with you forced us to brush up strategic aspects concerning the target public, the launching strategy and the manner in which we will capitalize on all this effort. Also, the fact that you introduced us to George Buhnici’s show on Pro TV – I like IT – has helped us a great deal. Of course, everything culminated in the pitch delivered in front of an investor, an opportunity where we could notice what was truly important from their point of view, too.

[Radu] How was the platform received by the community?
[Dan] We can proudly state that the feedback is very positive. We have over 1500 registrations so far and we hope to reach 2000 users in the first month after launching. If this happens, we will be in line with the projections we made before the launch.

[Radu] Many people mention only the number of users when it comes to indicators. What other metrics are important to you in deciding the future trajectory of the product?
[Dan] That is right. The number of users is only one of the indicators, but the percentage of new users versus old users who come back is also very important. Moreover, the average duration of a session and the bounce rate are very important, too. These indicators show how much the user is attracted to the platform after he has created an account, how often he comes back and how much time he spends there. After all, this is what matters most. OK, you get your idea through to somebody, he creates an account, but, afterwards, if you do not truly add value, he will cease to come back.

[Radu] What is next?
[Dan] New pivots (according to the Lean Startup concepts advocated by the already famous Eric Ries). We have already pivoted once, when we decided not to launch all three planned features (Socialization, Kennels and Clinics), but only one, namely the Socialization module. And it is good we did that, as we wouldn’t have had enough time, we would have postponed the launch, and at the same time we wouldn’t have been able to focus on the feedback from our users. A month after the launch of the beta phase, we can say we have fixed a lot of minor bugs, we have added strictly the functionalities that were really needed and we know in which direction to go further. In January 2015, we will launch the “Dog Services” section (not only Clinics), where all the providers of services dedicated to dogs will be able to register: veterinary clinics, canine beauty shops, canine hotels, kennels, trainers and even people who are willing to walk your dog or keep it over the weekend. At the beginning of spring, we will launch the Kennels module, together with – we are hoping – the mobile version of the application.

Radu Popovici

radu.popovici@geminisols.com Associate @ Gemini Solutions Foundry

www.todaysoftmag.com | no. 30/december, 2014



startups

Axosuits wins How to Web Startup Spotlight 2014

Bucharest, November 24th 2014 – Axosuits, a medtech startup that builds advanced exoskeleton suits that help paraplegics walk again, is the winner of How to Web Startup Spotlight, the most important competition and orientation program for CEE startups. The runner-up of the competition was Avandor, the IXIA Innovation Award went to Project Wipe, whereas the Best Pitch award went to Lat.io. The winners were announced on the main stage of How to Web Conference, the most important web and technology event in South Eastern Europe, which took place in Bucharest on November 20th and 21st.

Organized in parallel with How to Web Conference 2014, Startup Spotlight brought together the best 31 teams from 10 countries in the region, all building tech products with disruptive potential on a global level. During the program, the teams had the opportunity to meet experienced mentors and professionals, and to talk to representatives of top accelerators and early-stage investment funds from around the world. The program kicked off with pitches from the startups selected in the competition, after which the jury announced the 8 finalists of Startup Spotlight who would have a chance to win the Best Startup prize. The 8 finalists were: Marketizator (the 3 in 1 conversion rate optimization platform for ecommerce websites), ProjectWipe (electronic glasses that help people with visual disabilities in orientation and obstacle avoidance), Attensee (online eye-tracking insights for website conversion optimization), Fittter (health and fitness mobile and web app that connects frequent business travelers with handpicked health coaches to provide them the tools to incorporate healthy habits on the road),


Avandor (Open Consumer Data Platform available as SaaS to the entire ecosystem made of sites, advertisers, agencies and ecommerce), CloudPress (hosted platform for designers to create responsive WordPress sites visually and publish them with one click when they're done), Axosuits (affordable and easy to use exoskeletons for medical use) and ViewFlux (online collaboration platform for designers which helps them improve their workflow). In the second part of the day, the participating teams had 1-on-1 meetings with exceptional professionals from the global tech scene, and participated in panels and talks which took place at TechHub Bucharest. All the participants in How to Web Startup Spotlight had the opportunity this year to pitch their products on the main stage at How to Web Conference, in front of the entire audience, and benefitted from 1-on-1 mentoring sessions with some of the most highly regarded professionals in the global tech industry. Bogdan Iordache (Investor 3TS Capital Partners and founder How to Web), Robert Knapp (Founder & CEO CyberGhost),


Adrian Gheară (angel investor), Bogdan Ripa (Master Product Owner Adobe Romania), Bogdan Ţenea (IXIA Innovation & Entrepreneurship Appointee), Sitar Teli (Managing Partner Connect Ventures) and Cosmin Ochişor (Business Development Manager hub:raum) were part of the jury which selected the winners of How to Web Startup Spotlight, considering criteria such as team fit and experience, market size & trend, market validation & initial traction, customer acquisition cost, scalability and overall feasibility.

The big winner of this year's Startup Spotlight competition is Axosuits, a Romanian startup that is building advanced exoskeleton suits that help paraplegics walk again. Avandor, the Open Consumer Data Platform available as SaaS to the entire ecosystem made of sites, advertisers, agencies, ecommerce and more, is the runner-up in the contest. The team from Project Wipe, the developers of electronic glasses that help people with visual disabilities in orientation and obstacle avoidance, received the IXIA Innovation Award, and the Best Pitch award went to Lat.io, a customizable software development kit that enables developers & companies to harness the potential of iBeacon technology without getting caught up in the details. The finalists received USD 20,000 cash prizes offered by IXIA Romania, the main partner of How to Web Startup Spotlight. Startup Spotlight officially ended on Saturday, November 22nd, with a social event where participant startups had the chance to establish valuable connections with key leaders in the global tech industry. Startup Spotlight was organized during How to Web Conference 2014, the most important event focusing on innovation, technology and entrepreneurship in South-East Europe, which brought together on November 20th & 21st over 1000 members of the regional tech community and over 70 special guests from 4 continents.
Organized with the support of main partners Telekom Romania, Bitdefender, IXIA and Grapefruit, and partners Avangate, SoftLayer, Canada’s Embassy in Romania, Hub:raum, CyberGhost, Domain.me, TechHub Bucharest, Mobaba and Reea, How to Web Conference 2014 marked a new beginning for the event, and proposed to the audience a more complex format, approaching in-depth specific subjects, relevant to certain categories in the audience. The main stage of the event hosted the How to Web – Future Trends & Entrepreneurship Track and focused the audience’s attention on subjects such as the Internet of Things and the business opportunities it brings about, the evolution and implications of crypto-currencies, future trends in cybersecurity, mobile platforms and the transportation industry, or raising investment through different financing methods. During the How to Web – Product Track, organized with the support of co-curators from Mozaic Works, the talks centered on developing disruptive products at a global level and involving users in this process. The How to Web – Angel Investment Track, organized in collaboration with Angelsbootcamp, with the support of Trento Rise and Vibe Project, featured talks about best practices in angel investment, working with accelerators and making lucrative investments.

The local game developers' community got together on Friday, November 21, at the How to Web – Game Development Track, an event organized in collaboration with the Romanian Game Developers Association, with the support of King, Mobility Games and Ubisoft. The participants in this track discussed the mechanics behind top grossing games which attract millions of users globally, and best practices learnt by the professionals who contributed to building them. Irina Scarlat

irina.scarlat@howtoweb.co PR Manager @ How to Web & TechHub Bucharest



interview

Human software assessments
Interview with Tudor Gîrba

Tudor is a researcher and software consultant. He has recently become known for winning the Junior category of the annual award offered by AITO (Association Internationale pour les Technologies Objets), an organization which supports research in the area of object-oriented technologies. He was present in Cluj on December 13th, at the Be Fast and Curious event organized by 3Pillar Global.

[Ovidiu] Please describe for the readers of our magazine the capabilities of the Mondrian engine, which received the 2nd prize in the ESUG 2006 Innovation Awards and also contributed to the award offered by AITO.

[Tudor] Mondrian is a visualization engine. It was one of the first visualization engines that provided a compact scripting API. For example, visualizing a class hierarchy can be done like:

    view nodes: classes.
    view edgesToAll: #directSubclasses.
    view treeLayout

If we want to see each class as a rectangle whose dimensions are given by code metrics, we can extend that visualization like:

    view shape rectangle
        height: #numberOfMethods;
        width: #numberOfAttributes.
    view nodes: classes.
    view edgesToAll: #directSubclasses.
    view treeLayout

All in all, Mondrian opened a new direction by showing how it is possible to express visualizations succinctly.

The software and data analysis platform Moose has made, since 2003, under your guidance, the transition from an academic platform to one that can be easily used in the business environment, too. As described in The Moose Book, there is a process well defined by data acquisition modules, model descriptions, and data processing engines and tools. Can you give us some examples of using it in software applications or data analysis?

[Tudor] Moose is an extensive open-source project that was started 17 years ago at the University of Berne, Switzerland. I have been leading the project for 12 years, and it is currently supported by several companies and research groups around the world. The goal of the platform is to make the crafting of custom analyses easy. This characteristic enables developers to combine generic services and contextualize them to the specifics of the system at hand.

A significant class of use cases comes from "testing" software architecture. For example, in a client-server JEE system, it can be desired not to use @Stateful beans. A check for this rule would look like:

    allTypes noneSatisfy: [ :type |
        type isAnnotatedWith: 'Stateful' ]

This was a rather simple example of a possible rule. A more complicated rule could specify that all calls from the client to the server happen only through interfaces that should be implemented by classes annotated with @Remote. A check in this direction would be:

    (((allTypes select: #isUIType)
        flatCollect: #clientTypes)
        select: #isServerType)
            allSatisfy: [ :type |
                type isInterface and: [
                    type directSubclasses anySatisfy: [ :class |
                        class isAnnotatedWith: 'Remote' ]]]

Moose is not at all limited to these kinds of checks, and can be used for building complete data analysis tools with sophisticated interactions. For example, the attached picture shows an analysis session formed of four steps: in the first pane we scripted a visualization for the classes in the system, which is shown live in the second pane; selecting a class from the visualization led to opening the details of that class in the third pane; and in this third pane we built another visualization that shows, to the right, how methods use the attributes defined in that class. This is but an example that shows how developers can easily combine and customize analyses in a visual and interactive environment. More details about Moose can be found at: http://moosetechnology.org

The topic you will approach in the Be Fast and Curious event is Humane




Assessment, which represents a new software engineering method whose purpose is to help in decision making. Can you briefly describe to us what it is all about?

[Tudor] Software engineers spend more than half of their time assessing the state of the system to figure out what to do next. In other words, software assessment is the most costly software engineering activity. Yet, software assessment is rarely a subject of conversation. As a consequence, engineers end up spending most of this time reading code. However, reading is the least scalable way to approach large systems. For example, if a fast reader requires 2 seconds to read one line, it will take that person about one man-month to read 250,000 lines of code (the size of a medium system). Given that engineers need to understand the system on a daily basis, it follows that decisions are being made on partial and often incorrect information.

But it does not have to be like that. Humane Assessment is a method that offers a systematic approach to making software engineering decisions. It is based on the core idea that gathering information from a large data set is best done through tools. At the same time, these tools have to be customized to the specifics of the system. This is why it is necessary for engineers to be able to build these tools during development, as they are the only ones who know what their system needs. To make this practical we need a new breed of platforms that enables developers to craft tools fast and inexpensively. This is where Moose comes in. Through Moose we show that this proposition can be realized in practice, and I argue that the kinds of features offered by the platform should become common in IDEs.

Tools are necessary, but they are not sufficient. That is why Humane Assessment offers a guide both to identify the needed engineering skills and to find a way to affect the development process. For example, given that the structure of the system shifts continuously with every commit, it is necessary for the development team to observe and steer the overall architecture on a daily basis. To this end, Humane Assessment recommends a daily assessment standup through which technical problems that have been previously made explicit by means of custom analyses get discussed and corrected. More information about Humane Assessment can be found at: http://humane-assessment.com
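The back-of-the-envelope arithmetic in that answer is easy to verify (the 8-hour working day below is our own assumption, not part of the interview):

```python
# Sanity check of the reading-time estimate: 250,000 lines at 2 seconds per line.
lines_of_code = 250_000
seconds_per_line = 2                     # a fast reader, per the example above

total_seconds = lines_of_code * seconds_per_line
total_hours = total_seconds / 3600       # hours of non-stop reading
working_days = total_hours / 8           # assumed 8-hour working days

print(round(total_hours, 1), round(working_days, 1))
```

That is roughly 139 hours, or a bit over 17 full working days of doing nothing but reading — allowing for everything else an engineer does in a day, on the order of the one man-month the interview mentions.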

If you were to write tomorrow an article on software, what would its title be?

[Tudor] Software environmentalism.

What is your perspective on the evolution of technology over the next 10 years and its impact on people's everyday lives?

[Tudor] Technology will have an ever-increasing role in our daily life. This is an obvious direction. Less obvious are the consequences. We produce software systems at an ever-increasing rate. On the one hand, this is a good thing. On the other hand, our ability to get rid of older systems does not keep up with that pace. Let's take an example: a recent study showed that there are some 10,000 mainframe systems still in use. These systems are probably older than most of the developers. This shows us that software is not as soft as we might think. Once created, a system produces consequences, including at a social and economic level. The less flexible it is, the more its position strengthens with time, which makes it almost impossible to throw away. The only options that remain are reconditioning and recycling. Because of the spread and impact of the software industry, we need to look at software development as a problem of environmental proportions. Systems must be built in a way that allows us to easily disassemble them in the future. As builders of the future world, we have to take this responsibility seriously.

Ovidiu Măţan

ovidiu.matan@todaysoftmag.com Editor-in-chief Today Software Magazine



interview

On testing, scientific discoveries and revolutions
An interview with Michael Bolton

Michael Bolton was in Cluj between November 17th and 20th as an instructor for the RST (Rapid Software Testing) and Critical Thinking courses. Before he continued his journey, the Altom team stopped him for an interview, to find out some of his perspectives on software testing and education in software testing, the link between testing and the scientific discovery process, as well as his opinion about testing communities and their role in potential revolutions in testing.

What are the main themes covered by the RST course?

[Michael] Wow! There are a lot of them... The most important thing is, I think, finding what the center of testing is. We look at the skillset and the mindset of the individual tester. That's the big deal. We talk a lot about heuristics, that is, fallible ways of solving a problem. We talk a lot about oracles, how to recognize a problem. We talk about coverage, how to look at a program from a bunch of different perspectives in order to give a good, thorough search from a bunch of different angles for a bunch of different problems that could matter to people. Those are the most important things that come to mind, developing the skillset and the mindset of the tester.

How is the content of the course evolving? Is it still growing?

[Michael] It's terrible, actually! We now have about 7 days of material for the 3-day course, and in fact, if we really wanted to do it up proud and we went through all of the exercises we've developed over the years on all the different topics we would love to cover, I really have no idea how long it would take... a couple of weeks? 3 weeks? 4 weeks? The other thing that's been happening over the years is that we started to teach the Rapid Testing for Managers class, and just this year James debuted what I think is a 2-day Rapid Software Testing for Programmers class, for people who are coders. It's really cool! So a lot of the exercises in that class are oriented towards being tractable for using tools and using scripting, using machinery. That's hard to squeeze into the regular class. So meanwhile, there's a

constant evolution to keep finding stuff in the world that we import into the course. The most pressing issue for the moment is to try to figure out how to make this tacit and explicit knowledge stuff more central to the material. It's in there implicitly, tacitly if you will. I'd like to have the time to talk more about that, and maybe to do some exercises on how that stuff fits into testing. I'd like to do more reporting exercises as well, to give people a chance to try something and blow it, then to take a couple of cracks at it, to get good at it. So, the material is always evolving; we're always discovering new stuff, visualization stuff. James started doing some work with R lately - a data munging kind of language which allows you to show stuff, and to chop up data in really interesting ways. I'm behind him on that, but hey, testing changes, so the class changes, and that's a good thing.




How would you describe a person who would benefit from the RST or the Critical Thinking course? Why are these courses valuable, and for whom?

[Michael] We have long said that Rapid Testing fits anybody at any level, and that is because we've designed the exercises kind of inspired by Jerry Weinberg's work. Both James and I were students of Jerry's in this sort of informal way in which people are students of Jerry. We went to a lot of his workshops, a lot of his classes, and learned a ton from him about the design of classwork. Jerry has long been a proponent of experiential workshops, and one time he gave away one of his secrets, which of course he published in books and so on. The secret is that on a reasonable exercise, everybody will learn what they need to learn, regardless of what level they are at. So some people in our exercises will learn very basic things, because they are at a very basic level. Other people, who are more advanced, will catch onto something or notice something, or maybe they'll recognize something in the more junior people, and will understand something about how to guide more junior people. Sometimes people have insights on how to observe. The cool thing about Jerry's approach is that people will learn what they will. That actually is kind of relaxing for us, in that we have a set of things that we'd like people to learn and to tell people about, but at the same time we don't have to commit to that. As a consequence, people learn different stuff.
We observe them doing that and that loads us up with other things that we could talk about, and gives us other insights into what people need to learn, what people pick up from exercises, what they get from the experience. So, both of those classes, both the RST and the Critical Thinking class can work for anybody, at any level. We have a dual kind of thing happening: there are 24 people in the room sometimes, so we have to sit back and let people learn what they will because we can’t teach 24 people all in the same way. We give people an opportunity

to learn, rather than trying to teach them too much when things are going well.

What advice would you give to someone who is doing testing and would like to find the passion in their job, but they're not quite there yet?

[Michael] This whole business of how you instill passion in people, I have a stock answer for that, which is: I don't think you have to. Testing after all is about curiosity, and people are naturally curious. People often ask me: "How do you motivate testers?" My answer to that is you don't have to. Just stop demotivating them! That's a huge problem in our business. As soon as you take responsibility away from people, and essentially make them a medium for the work of other people, then they disengage, they lose interest. If you give people a mission to go out and discover something, people will go out and discover something on their own.

What's one of the latest books you have read that you would recommend to a tester?

[Michael] I'm in the middle of a book called "The Organized Mind" by Daniel Levitin, but I keep putting it down and I keep losing my place in it. I've been reading a bunch of books on probability and risk, and statistics lately, but that's just a stop on my own personal agenda. I haven't had one exactly like that, to make me go WOW!, except for the year I spent reading Harry Collins' books. Every one of his books is money in the bank, and he's amazingly prolific as a writer, with a wonderful diversity of stuff along his central theme, which is the sociology of science.

I think a great thing for testers to read is a pair of books called "The Golem" – the golem is a figure in Jewish mythology, a big, lumbering, clumsy giant, and that's a metaphor that Harry Collins and Trevor Pinch and several of their colleagues contributed essays to in these 2 books. One is called "The Golem: What Everyone Needs to Know about Science" and the other one is "The Golem at Large: What Everyone Needs to Know about Technology". Both of those are really interesting books for testers, I believe, because they talk about the lumpy nature of discovery, the fact that discoveries don't happen the way we're told they happen. They do little case studies and investigations of what really happened in scientific studies. For example, the Michelson-Morley experiment, one of the most famous in the history of science, was never actually completed. But we don't hear that story from the traditional sources; it's just set up as this done deal. What this illustrates is the fact that science is messy, that it's confusing. Also that it takes time to make sense of whether something is good science or bad science, or if somebody is a good scientist or a bad scientist. Especially because when there is a revolutionary discovery, the natural reaction from the community is "That can't be right! He's got to be incompetent!". Galileo was obviously a genius. Everybody thought he was crazy! The only exception where people were blown away, in the long history of this stuff, was Einstein. He was exceptional in that way, except a lot of his stuff wasn't settled until the 1960s. It was only then, almost 60 years after he had his miracle year, that the last serious objection was put to bed.



interview love books, we’re not gonna get it all from books. We’re gonna get it from talking to each other, from learning from each other, from interacting with each other, from helping each other to solve each other’s problems. On a bigger scale, conferences are good in a way, because you meet a lot of people there. I like the small scale, when it starts with a bunch of people meeting and drinking coffee or beer. The most exciting of those maybe is the Weekend Testers in India, initiated by Ajay Balamurugadas. I saw Ajay give a talk about it. It was so exciting! He described the growth of this community. Over a 15 week period, every weekend they got together, first live and in person and then online, they developed a community of skilled testers in India. It started with 4 friends, and their mentor and then it spread to 4 more friends, and pretty soon people from Mumbay and Chenai were asking “Hey! could we get involved with this?!”, because they were hearing about it through the grapevine.

And the epiphany per page rate is really But the way Harry has described it in a high. number of places, we see science as a ship How do you see the role of the testing in a bottle, right? We see it as a done deal. He says we don’t see the mess, we don’t communities? Can they spark revolutions in see the spilled glue, and we don’t see the software testing? [Michael] I’ve been a participant in snapped mast, we don’t see the stuff that didn’t work, and the stuff that didn’t fit several communities. I started one in its way and all that. We see the finished project. with the Toronto workshops in software That’s a really interesting thing, I think for testing, participated in the early days of That’s how communities evolve, that’s the popular understanding of science. It one the Dutch Exploratory Workshops on goes against what we are told of this lonely Testing - DEWT - which was largely inspi- how revolutions start, that’s how big changenius someplace in a garret who had this red by LEWT, James Lyndsay’s London ges happen. Big changes always start with wonderful idea and lo and behold they Exploratory Workshops on Testing. I’ve little changes. said, “Well, that’s marvelous, what a great been to all three of those, and that I think thing!”. Science is really controversial and is where the REVOLUTION starts! It starts with a small group of people, it’s filled with the same kinds of arguments that testers have to face every day when like all revolutions. It starts with a small there’s doubt as to whether this is a serious group of people getting together and problem or not, or whether there is a pro- talking about what they’re interested in, sharing their experiences with each other, blem here at all. With every project we like to think, and trying stuff out with each other. I’m “Okay, it’s done and there you go!” But we really privileged to be part of those things! 
could never really be sure of that, that’s I get invited to them every now and again, not something that’s subject to certainty. or when I am in some place that somebody Jerry Weinberg says that one bit is enough spontaneously puts one together that’s to torpedo the value of a program, just a really terrific. It’s wonderful, because that’s Alexandra Casapu alexandra.casapu@altom.ro single bit. And before it happens, we don’t how we’re going to start putting the testers at the center of testing, by getting us all to know what that bit is. Software Tester They’re fantastic books! They are very share our experience, not for us to sit and @ Altom Consulting enjoyable and engagingly written, and fun. have material recited at us. As much as I



communities


IT Communities

We round this year off with the satisfaction of having participated in several quality events, and the 2nd edition of IT Days has successfully closed the year. Let us not forget the 12 release events in Cluj and the other 4 events we have organized in Bucharest, Timisoara, Iasi and Targu Mures. Over 100 presentations were delivered, and most of them are available on our YouTube channel: www.youtube.com/todaysoftmag.

Transylvania Java User Group
Community dedicated to Java technology
Website: www.transylvania-jug.org
Since: 15.05.2008 / Members: 594 / Events: 47

The TSM Community
Community built around Today Software Magazine
Websites: www.facebook.com/todaysoftmag
www.meetup.com/todaysoftmag
www.youtube.com/todaysoftmag
Since: 06.02.2012 / Members: 2011 / Events: 27

Cluj Business Analysts
Community dedicated to business analysts
Website: www.meetup.com/Business-Analysts-Cluj
Since: 10.07.2013 / Members: 91 / Events: 8

Cluj Mobile Developers
Community dedicated to mobile developers
Website: www.meetup.com/Cluj-Mobile-Developers
Since: 05.08.2011 / Members: 264 / Events: 17

Calendar

December 16 (Cluj)
Launch of issue 30 of Today Software Magazine
www.todaysoftmag.ro

December 18 (Cluj)
PM Meetup - Monitoring & Controlling (2)
meetup.com/PMI-Romania-Cluj-Napoca-Project-Management-Meetup-Group/events/218942428/

December 16 (Bucharest)
Lean Startup Bucharest - Last end-of-the-year meetup
meetup.com/lean-startup-bucharest/events/219062023/

The Cluj Napoca Agile Software Meetup Group
Community dedicated to the Agile methodology
Website: www.agileworks.ro
Since: 04.10.2010 / Members: 437 / Events: 93

Cluj Semantic WEB Meetup
Community dedicated to semantic technology
Website: www.meetup.com/Cluj-Semantic-WEB
Since: 08.05.2010 / Members: 192 / Events: 29

Romanian Association for Better Software
Community dedicated to experienced developers
Website: www.rabs.ro
Since: 10.02.2011 / Members: 251 / Events: 14

Tabăra de testare
Testers' community from the IT industry, with monthly meetings
Website: www.tabaradetestare.ro
Since: 15.01.2012 / Members: 1243 / Events: 107



testing


Performance testing from Waterfall to Agile


Claudiu Gorcan

Claudiu.Gorgan@betfair.com Senior Delivery Service Engineer @ Betfair


Like any other success story, when it comes to Performance Testing our story mixes people and processes together. Different companies are built upon different cultures, so they implement various mixtures of people and processes – versed teams that have dedicated performance testers might use only guidelines as a process, while the more agile teams, which rotate the performance tester role between team members, would probably need more detailed processes and checklists, so that the entire performance testing flow stays consistent from one sprint to another and the results offer the same level of confidence.

The next few lines will focus more on the new Agile-like workflow, but will also highlight the strong base of the entire process, which has been built over many years and has involved many skilled and experienced people.

The waterfall bit
In many cases, the performance testing process would have looked like this: a performance testing team (perfqa team) giving the final sign-off before a product went live. Even though the perfqa team would have been involved in the early stages of a product, through the perfqa kick-off, we were not part of the sprints, not sitting next to the developers, and not even led by the same delivery manager that was driving the product implementation and the development team. All of the above generated a few inconsistencies with the agile workflow that was guiding the development teams:
• We (the perfqa team) were following our own backlog, built upon requests from development
• We were not feeling the pace and struggles of the development team for which we were performance testing the product
• The responsibility for delivering a high quality and high performing product was split across different teams

The agile wonder
At some point in time, the perfqa process started to change and tried to solve the above issues. As such, the performance champions concept was born. This whole new thing was not something extraordinary; it was more like a community whose members were developers or testers from different delivery teams. These performance champions were now directly involved in the perfqa workflow, thus bringing performance testing and its responsibility closer to the delivery teams, whilst the original perfqa team was mentoring, coaching and training the performance champions.

As a supporter – and in some sort a driver – of the performance champions concept, I will present a few of my thoughts the way Clint Eastwood would have presented them:

The good
As development, functional testing, project management, business analysis and devops are all part of the team, adding performance testing would close the loop, making the delivery team fully responsible for the product they are delivering – which is also empowering for


programare people. Performance testing can now be part of the sprint planning, and can be managed however it suits better for each team. New challenges are now available, which will help people to expand the field of expertise – performance testing strategies, tools, tuning and monitoring.

The bad

It is worth considering the difference between an expert in a certain field (i.e. a performance testing expert) and a more versatile person who is responsible for several areas (i.e. development and performance testing, or functional testing and performance testing).

The ugly

As performance testing now resides in each team's responsibility, teams will eventually adapt it to their own needs and beliefs. This might not sound like an ugly thing, but at the end of the day we would all like a consistent understanding of the performance of a component (100 transactions per second (tps) should mean the same to all of us), and we would all want an integrated, well-performing product, not a bunch of great performing components. Performance testing environments will now need to be maintained by different parts of the organization. The supporters of the performance champions concept/community need to at least try to change the above – of course, we won't try to change what is already working fine, The good. The expertise can consist of the following:

• Performance testing environment: Environment preparations should be addressed as early as possible, especially if there is no dedicated owner. The team using the environment should allocate time for refreshing the component to be tested and its dependencies. The performance environment might be a scaled-down copy of production or a disaster recovery environment.
• The testing model: We usually suggest using the Pareto principle, which basically translates to: 20% of the application flows generate 80% of the load. This is the part of the application we should focus on from a performance testing perspective. Modeling the real-life user flows (like: at any given time 10% of users are logging in, 20% are loading the home page, etc.) is quite important for understanding the performance behavior of an application that supports ~60k tps at peak.
• The testing strategy: Load, capacity, scalability and soak testing are all critical, together with the never-dying question: "How much can it cope with?" This flow should naturally start with load testing and understanding the application performance under normal load, then progress to a capacity test, which should drive application tuning (like GC tuning) and reveal the application breaking point and, more importantly, the cause of the breaking point (is it CPU starvation, is it memory, is it a thread pool, is it a bad db query?). Then invest in scalability testing, whether it is vertical scaling, like stock exchange systems (someone working in the stock exchange industry presented their way of scaling the infrastructure – they would always buy the biggest and most powerful servers on the market), or horizontal scaling, which again should trigger scalability optimization, as most applications will not scale 100% with the addition of an extra server (I have seen most of them scaling ~80%, so if one server copes with ~100 tps, two would cope with ~180 tps). Nevertheless, an 8-hour soak test will probably reveal stability issues and possible memory leaks (even small ones).
• Component testing vs integration testing: Usually time limitations would force skipping component performance testing, but stubbing all the dependencies and load testing the component might reveal performance issues in the early stages of development.
• Production monitoring: It might sound more like an operational task, but the performance tester's job does not end as soon as the product goes live. Monitoring it in production and feeding the data back into the testing model helps in adjusting the model and in understanding that the results from a testing environment only give a flavor of the performance behavior and will not match the actual production performance.
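The scaling figures quoted above (one server coping with ~100 tps, two with ~180 tps) can be turned into a quick back-of-the-envelope capacity estimate. The helper below is our own illustrative sketch, not part of any testing tool:

```python
def estimated_capacity(single_server_tps, servers, scaling_factor=0.8):
    """Estimate total throughput when each added server contributes
    only a fraction (scaling_factor) of the previous server's gain,
    i.e. the ~80% horizontal scaling observed in practice."""
    total = 0.0
    gain = float(single_server_tps)
    for _ in range(servers):
        total += gain
        gain *= scaling_factor
    return total

# One server copes with ~100 tps; a second one adds only ~80 tps:
print(estimated_capacity(100, 1))  # 100.0
print(estimated_capacity(100, 2))  # 180.0
```

Such a model is crude, but it makes it obvious why "just add another server" stops paying off after a few nodes.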

www.todaysoftmag.com | no. 30/december, 2014



programming

Common branching models and branching by abstraction

The use of version control has become the normal way of working, while the latest version control tools enable us, the developers, to play with branches the way it suits us the most. Plenty has been written about branching strategies and, while some of them are very popular, almost all of them share a common problem that at some point must be handled: the merging problem. There is a reason why people talk not only about their "branching strategy", but about the "branching and merging strategy".

Anghel.Contiu@endava.com Design Lead @ Endava

Merging can easily turn into a nightmare when the branching strategy is wrong. Take, for example, one of the most popular branching strategies, Vincent Driessen's branching model in Figure 1. This model is so popular that Atlassian implemented a Maven plug-in on top of it.

Consider the following situation:
• George is working on george-feature-branch, which he branched off from the develop branch.
• Andrew has already been working on his own andrew-feature-branch since yesterday.
• Mary finishes her feature after 6 days of work and merges her mary-feature-branch into the develop branch.
• Dan, our QA, starts testing Mary's changes.
This seems like a healthy workflow, but it hides a couple of important risks that will lead to additional development and testing effort. Here are some of them:

The feature retesting problem

Let us focus on only two branches that should be intensely used: the feature branch and the develop branch. The feature branch branches off from the develop branch and is used by a developer to implement a feature. When the feature is finished, the developer merges his feature branch into the develop branch, so the commits become available to the other developers.


Dan spends a day testing Mary's new and shiny feature and gives it a thumbs up: all good. What he doesn't know is that Andrew is also working with two of the files that Mary altered in order to implement her feature. Andrew merges his branch 3 days later; he is happy he had no conflicts while merging, so Dan starts testing it, but he notices that Mary's feature doesn't work anymore. He starts retesting it and creates the corresponding tickets for Mary; Mary fixes the issues, taking Andrew's changes into account, and Dan retests both Mary's and Andrew's features again. So, Mary wasted her time doing fixes after Andrew's merge, while Dan wasted his time testing Mary's feature at least twice and Andrew's feature at least twice.



The merging problem

It is common for features to take a couple of days to get implemented and unit tested, so, even if developers do merge the develop branch into their feature branches every day (trying to avoid major conflicts when the time comes to merge their branch into develop, they try to stay up to date with the latest changes coming from their colleagues on the develop branch), there are usually two features developed simultaneously on two different branches, the risk of conflicts at merge time is high, and the more time the developers spend working in isolation on their feature branches, the higher the risk gets.

The continuous integration problem

Developers should write unit tests, component tests and integration tests. When these tests are developed in isolation on one branch, while another feature and its tests are implemented on another branch, the risk of tests failing at merge time is high and the tests have to be reviewed. The good thing about this situation is that the developers are aware that their tests fail, but they do have to invest additional time to fix the tests, even if they worked before the merging moment.

The running of database scripts problem

Many software projects try to automate their deployment process and keep their database up to date through scripts. There is a risk of messing up the order of running these scripts, for the same reason: developers work on different branches, in isolation.

If we pay close attention to the root of all these issues, we find that it is related to the developers doing their work in isolation. An alternative to this problem is the use of the branching by abstraction technique.

Branching by abstraction

It is defined by Martin Fowler as "a technique for making a large-scale change to a software system in gradual way that allows you to release the system regularly while the change is still in-progress". The technique implies providing an abstraction layer above the feature being changed, so the client code will communicate only with the abstraction layer, without being aware of whether it uses the new or the old version of a feature.

No matter the way branching by abstraction is used, there is a common practice:
• Use an abstraction layer to allow multiple implementations to co-exist.
• Gradually migrate to the new implementation.
• Ensure the system builds and runs correctly at all times, so continuous delivery stays on.

While the developers write the tests for the new version, they are confident that their tests work with the latest code version, as everyone is pushing to the same branch and there won't be any merges that will break the tests (simply because other branches don't exist). It is important to detect the right place where the abstraction layer should be put and the way objects will be instantiated. There is room for creativity, as taking these decisions also depends on the context of the problem. Considering this aspect, there are multiple ways of doing it; here are two of them.

Branching by component abstraction

This applies to situations where a large component must be replaced or re-written. An abstraction layer must be created so our code will not depend on the component anymore, but on the abstraction. This might also include refactoring the component, so additional unit tests can be provided at this time. The refactoring might be too costly in terms of time and resources, so this is a point where a decision has to be made in terms of which branching by abstraction strategy should be used.

The new implementation of the component will be done in a step by step manner, meaning that the features will be implemented according to the client code's needs. When a set of features is ready for a client, that client can switch its wiring and migrate to use the new component. Remember that the application should build and run correctly at all times. The implementation of the component continues with the additional features that are needed for the next client. In the end, there will be no dependency on the old component, so it will be safe to delete it.

Figure 2: Branching by component abstraction. ClientCode is gradually migrated to use the NewComponent.

Branching by point of use abstraction

Consider the situation where branching by component abstraction is not the most suitable solution. Reasons for that might include the high effort to be invested in refactoring the old component in order to make it work with an abstraction layer. For this kind of situation we can leave the old code as it is and adopt different tactics.

1. The old and new versions of the client class of our implementation (the point of use of our implementation) and the abstraction (the interface)
• We will use the interface of the old (original) implementation, so the proper instance (original or new) can be injected at the point of use.
• The point of abstraction is set to the client class of our implementation (the point of use).
• Initially, we will have the original version of the client class and a copy of it, which will become the new version (I know, we don't want to make a copy of the class, but that is temporary, until the new version is stable; we can also switch back to using the old one if something goes wrong with the new version; the old version will be removed in the end).
• We will have a new interface that both client class versions (original and new) will implement ("Interface"); the interface will probably contain the public methods of the original class.
• The new client class version will get modified to use the new feature implementation.
• The original version will be removed in the end.

2. The Factory
• It will be able to instantiate the original or the new version of the client class. The switch between instantiating one version or another is performed inside the Factory.
• It can get as smart as needed, depending on the needs (e.g. it might switch between the original and the new versions not only at start time but also at runtime, it might know more about the environment where the application is deployed and take corresponding actions, etc.).
• The Factory will ensure that the toggle between the old and the new client class is performed in a consistent manner and takes into consideration the same conditions.

3. The scope of the client class instance
• Some classes might be instantiated with a request scope only, while others might be Singletons. Having the Factory responsible for the client class instantiation, it has the opportunity to establish the scope of the instances.
• Usually the new version should have the same scope as the original one and this will be specified inside the Factory, but other requirements can also be handled at this level.

4. Having the implementation deployed on different environments
• The Factory can be made aware of the environment and act accordingly, so it will instantiate the right version.
• The switch can act as a safety measure in the production environment. The switch to the old version can be performed so the new implementation will be disabled with minimal effort.

As a conclusion, here is what the developers and testers should focus on while using branching by abstraction.

The developers
• Will use only one branch for all their development.
• Will implement the abstraction and the switching mechanism, and add unit tests.
• Will set the wiring so it will be possible to instantiate the original or the new version.
• Will then focus on the new version to add the new functionality.

The testers
• Will test the wiring and the switching functionality.
• Will test the new functionality.
• Will also test the old functionality (if it got refactored in order to properly create the abstraction layer). This does not apply to branching by point of use.
• Are confident that the code is not altered on some other branch (because there is no other branch), so they won't have to test again after a potential merge.

As soon as the client class is deployed to production and its functionality is proven to be right:
• The switching mechanism is removed.
• The original implementation is removed.

Don't forget, the system must always build and run correctly.
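The Factory toggle described in this section can be sketched in a few lines. All names below (Interface, OriginalClient, NewClient, ClientFactory) are hypothetical, and Python is used only for brevity:

```python
class Interface:
    """The abstraction that both client class versions implement."""
    def process(self, data):
        raise NotImplementedError

class OriginalClient(Interface):
    """The old version, kept around until the new one is stable."""
    def process(self, data):
        return "old:" + data

class NewClient(Interface):
    """The copy being migrated to the new feature implementation."""
    def process(self, data):
        return "new:" + data

class ClientFactory:
    """The single place where the old/new decision lives, so the
    toggle is performed in a consistent manner everywhere."""
    use_new_version = False  # flipped back with minimal effort if something goes wrong

    @classmethod
    def create(cls) -> Interface:
        return NewClient() if cls.use_new_version else OriginalClient()

client = ClientFactory.create()          # old version by default
ClientFactory.use_new_version = True
client = ClientFactory.create()          # new version after the switch
print(client.process("x"))               # new:x
```

Because every point of use goes through the Factory, removing the switching mechanism at the end is a one-place change.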

Figure 3: Branching by point of use abstraction. The old and the new



programming

Using OSEK/VDX-compliant operating systems in embedded projects

Embedded systems use microcontrollers to perform the required actions according to the received inputs. The microcontroller offers (among others) I/O ports for interaction with the outside world and a CPU core to run the application program defined in the microcontroller software project. This project needs to include, besides the hardware-dependent configuration, a task scheduler, code for peripheral control (drivers) and the actual application modules. The project can also include communication services, libraries and other components. In this article the focus is on scheduling algorithms, starting from the super loop and state machine approaches and, finally, OSEK/VDX schedulers. We will refer to OSEK/VDX simply as OSEK.

Mircea Pătraș-Ciceu

mircea.patras@arobs.com C++ Developer @ AROBS

Schedulers

The scheduler is a software module (a piece of code) that manages the tasks of the application. Tasks implement one or more application functions, so the scheduler will call the defined application tasks according to the configuration. Basically, configuring the tasks means linking each task to an event, so that when the event happens, the task is executed. In more advanced schedulers, the priority and class of each task are also configured. Note: in basic schedulers like the 'super loop', the priority and class of tasks are not configured, but are determined by the implementation of the application code. This happens, for example, in a 'super loop' approach where, somewhere in the code, the call of the function for task A comes before the call of the function for task B; if the conditions for calling A and B happen in the same loop iteration, A will be called before B, so A has a higher priority than B.
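The Note above can be illustrated with a minimal sketch. Real firmware would be C code looping forever; the Python below only models how the fixed check order of a basic scheduler turns into an implicit priority (all names are illustrative):

```python
def super_loop(iterations, events, handlers):
    """Minimal 'super loop': on every iteration, check each task's
    condition in a fixed order and run its handler if the condition
    holds. The check order alone decides the implicit priority."""
    log = []
    for _ in range(iterations):
        for name, condition in events:   # A is checked before B
            if condition():
                log.append(name)
                handlers[name]()
    return log

flags = {"A": True, "B": True}
events = [("A", lambda: flags["A"]), ("B", lambda: flags["B"])]
handlers = {"A": lambda: flags.update(A=False),
            "B": lambda: flags.update(B=False)}
print(super_loop(3, events, handlers))  # ['A', 'B'] – A always runs first
```

If both conditions become true in the same iteration, A always runs first, which is exactly the uncontrolled, implementation-defined priority the article warns about.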

Let's look at the scheduling algorithms we talked about above:

Super Loop
This one is not really a scheduler, but a way to run tasks. It works like this: in a WHILE(1) loop, all the conditions for running tasks are checked and the task functions are called.
Pros: straightforward to implement.
Cons: wastes CPU resources, the tasks are executed in an uncontrolled manner, the code is hard to maintain.

State machine
This method adds some structure to the application. Tasks are executed depending on the state machine's state. For more complex tasks, nested state machines can be implemented.
Pros: the code is easy to design and maintain.
Cons: CPU resources are not used optimally, and task scheduling is not an independent module (it is highly dependent on the application design).

OSEK schedulers
OSEK standardizes the specifications for automotive embedded systems. One of the areas comprised by the OSEK specifications is the Operating System. OSEK defines the OSEK OS, featuring the general event-based scheduler, and the OSEKtime OS, with the time-triggered scheduler. Configuration of the OSEK OS is done using standardized .oil files containing the tasks (with attributes), hooks, resources, alarms, events, messages, application modes, the scheduler type and error handling.

OSEK

The following items are available in the OSEK OS: tasks, events, resource management APIs, alarms, messages and interrupt handling APIs. Tasks are executed according to the configured priority and the scheduler preempts the currently running task if the pending task has a higher


Fig. 1 Application development flow

priority. At task termination, the scheduler resumes the execution of the ready task with the highest priority. Task execution can be triggered in several ways: activation from another running task, the expiration of an alarm counter, a configured interrupt or a message. Extended tasks can enter the waiting state, where they wait for an event. When the event happens, the scheduler will resume the task if its priority is higher. Resources 'consumed' by tasks can be locked, so no other task that uses the same resource can preempt the running task. OSEK allows running the operating system in one of several application modes (defined statically in the *.oil file). In each application mode, an entirely different set of tasks (different applications) can be executed, for example: normal mode, diagnostic mode, bootloader mode. The decision regarding the mode in which to run the software application is taken at run-time. On errors, OSEK executes the related hook routine (which can be configured by the application).
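A toy model of the selection rule just described – highest priority wins, FIFO among tasks of equal priority – might look as follows; this is an illustration of the policy, not real RTOS code:

```python
from collections import deque

class ReadyQueue:
    """Pick the highest-priority ready task; among tasks of equal
    priority, the one activated first runs first (FIFO principle)."""
    def __init__(self):
        self._queues = {}               # priority -> deque of task names

    def activate(self, task, priority):
        self._queues.setdefault(priority, deque()).append(task)

    def next_task(self):
        if not self._queues:
            return None
        top = max(self._queues)         # highest numeric priority
        task = self._queues[top].popleft()
        if not self._queues[top]:
            del self._queues[top]
        return task

rq = ReadyQueue()
rq.activate("io_poll", 1)
rq.activate("ctrl_a", 3)
rq.activate("ctrl_b", 3)
print(rq.next_task())  # ctrl_a  (highest priority, activated first)
print(rq.next_task())  # ctrl_b  (same priority, activated later)
print(rq.next_task())  # io_poll
```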

Tasks
Every software application can be easily divided into several different tasks. Each task is executed independently and competes with all the other tasks for system resources. This approach simplifies the creation of applications (especially large ones), makes the application more independent from the microcontroller, simplifies product support during its life cycle (makes it easy to add new features and correct old ones), etc. OSEK distinguishes two types of tasks: basic and extended. The difference between the two is the availability, in the extended tasks, of an additional state in which the task may be: the waiting state. An extra state implies a separate stack for each extended task, while the basic tasks can share the same stack (it depends on the RTOS implementation). Also, unlike the extended tasks, the basic tasks may have several activations, i.e. instead of waiting for a certain event, the basic task completes its execution and, when the event occurs, the task is reactivated, thus saving stack/RAM. Also, regardless of the type (basic or extended), the task can be "preemptive" or "cooperative".

At any given time, the scheduler selects the highest priority task from all ready tasks. If two tasks have the same priority, the scheduler selects the oldest task (i.e. the one activated first, FIFO principle). The next task (with the same or lower priority) is performed only after the current task is terminated or enters the waiting state. The execution of the current task can be interrupted only by a task with higher priority; otherwise the processor can only be released using the "Schedule" service. Using tasks with different types of scheduling makes sense, for example, in a software application with a mix of short and long tasks. Long tasks are more suitable for low-priority actions (memory check, I/O polling, etc.) with preemptive scheduling, while cooperative scheduling is more suitable for time-critical actions.

Initial steps
Application development starts with the system design/configuration (tasks, messages, alarms, events, application modes, etc.). All configurations are stored in files with the *.oil extension (OSEK Implementation Language). A *.oil file can be created using an OSEK building tool or manually, using the "OSEK OIL" language. To create the executable file, the *.oil files should be passed to the system generator. The system generator analyzes the *.oil files and, if everything is ok, it creates the source files, which, together with the developer's source code, can be passed to the compiler. This approach (using *.oil files) allows porting the software application not only to any microcontroller supported by the operating system, but also to any OSEK-compatible operating system.

Fig. 2 OSEK/VDX task state
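Based on the description above, a *.oil file might look roughly like the fragment below. The object and attribute names follow the OSEK OIL style, but the exact set of attributes depends on the OS implementation, so treat this as an illustrative sketch rather than a verified configuration:

```oil
OIL_VERSION = "2.5";
CPU SampleCpu {
  OS SampleOs {
    STATUS = EXTENDED;      /* extended error checking */
    ERRORHOOK = TRUE;       /* hook routine executed on errors */
  };
  APPMODE NormalMode {};
  TASK TaskA {
    PRIORITY = 2;
    SCHEDULE = FULL;        /* preemptive task */
    ACTIVATION = 1;
    AUTOSTART = TRUE { APPMODE = NormalMode; };
  };
  COUNTER SystemTimer {
    MAXALLOWEDVALUE = 65535;
    TICKSPERBASE = 1;
    MINCYCLE = 1;
  };
  ALARM TaskAAlarm {
    COUNTER = SystemTimer;
    ACTION = ACTIVATETASK { TASK = TaskA; };
    AUTOSTART = FALSE;
  };
};
```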

Synchronization mechanisms
OSEK is an event-driven RTOS with fixed-priority scheduling, which offers a number of synchronization mechanisms (objects and services) that greatly simplify the development of the software application. Every object generates a specific type of event (i.e. it transmits a specific amount of information):
• Event mechanism. Any task can serve as the sender of an "event", but only the extended tasks can serve as receivers of events (i.e. only extended tasks can wait for a certain event).
• Alarm mechanism. An alarm is an object attached to a counter (software or hardware) that counts certain events (time intervals, the number of button presses, encoder angle, etc.). When a new event occurs, the counter is incremented and its value is compared with the value stored in the "Alarm" object. If the counter has reached the desired value, the alarm is triggered (i.e. one of the following actions is performed: task activation, sending of an "event", callback execution). Any task or interrupt can launch an "alarm" (single or cyclic).
• Hardware interrupts. OSEK offers services to handle interrupts, which are mainly used to synchronize the hardware with the software.
• Communication mechanism. Messages can be used to transmit large amounts of information between two tasks of the same embedded system (i.e. inter-process communication) or between two tasks of different embedded systems (i.e. inter-processor communication). Every message is configured statically in the *.oil file (queued/not queued message, senders of the message, receiver of the message, etc.), which reduces processor/bus load.
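The alarm mechanism described above – a counter that triggers an action when it reaches a configured value – can be modelled in a few lines. The class below is our own sketch, not the OSEK API:

```python
class Alarm:
    """Alarm attached to a counter: every counted event increments the
    counter, and when it reaches the configured value the action runs
    (e.g. a task activation). Cyclic alarms re-arm themselves."""
    def __init__(self, trigger_value, action, cyclic=False):
        self.trigger_value = trigger_value
        self.action = action
        self.cyclic = cyclic
        self.count = 0

    def tick(self):
        """Called whenever the counted event occurs
        (timer tick, button press, encoder step, ...)."""
        self.count += 1
        if self.count == self.trigger_value:
            self.action()
            if self.cyclic:
                self.count = 0   # re-arm for the next cycle

activations = []
alarm = Alarm(3, lambda: activations.append("TaskA"), cyclic=True)
for _ in range(7):
    alarm.tick()
print(activations)  # ['TaskA', 'TaskA'] – fired at ticks 3 and 6
```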

OSEKtime scheduler

The resources provided in an embedded application using OSEKtime are the interrupts and the tasks. OSEKtime dispatches the tasks periodically, according to the predefined dispatch table. The scheduler is preemptive, so, if a task must be activated while another one is running, the scheduler will preempt the running task and start running the pending task. After finishing a task, the scheduler will resume running the preempted tasks, if there are any. Some rules limit the preemption mechanism (like one that says that a task cannot preempt itself). Such cases are considered anomalies, and OSEKtime OS will execute the user-defined routine for such a case (like performing a CPU reset).

Initial steps
When a project is created, the first step is to configure one or more dispatch tables. More than one dispatch table is needed when different application modes are used in the project (like an INIT mode and a NORMAL mode). The user defines the tasks in each dispatch table by assigning the offset, period, deadline monitoring time and the function pointer of the task. An OS tick time must also be configured by the user. The rule for calculating the tick time is that it must be less than or equal to the greatest common divisor of all the task execution times. Also, it is recommended to use the greatest value possible, so the OS will not spend too much time in the task management routine.

Runtime
At run time, the OS will start running the tasks of the default application mode's dispatch table. Users can request at any time the switch to another dispatch table (application mode) using a dedicated system service. The actual switch will happen at the end of the current dispatcher round (when all tasks from the current dispatch table are suspended).

Task states
Tasks are triggered at the expiration of the assigned timer. The timer is reloaded with the task period at task execution. When the timer expires, the task state changes from suspended to running. If the task ends before any other task's timer expires, then the state changes from running to suspended. If another task's timer (task B) expires before the current task (task A) ends, then task A's state changes from running to preempted and task B's state changes from suspended to running. At the end, task B changes to the suspended state and task A changes from preempted back to running. Note: the same happens if the timer of task A expires before task B ends, so for this type of scheduling we can say that there are no priority levels defined, explicitly or implicitly.

Error detection
The task execution time is compared with the predefined task deadline. If a task runs for more than the deadline time, then error handling is initiated by the OS. Basically, an error callback function is executed and then the OS shutdown is initiated. At OS shutdown, a second callback function is called. The user must configure/program these callback functions with the application-specific actions. After returning from the second callback function, the OS reboots.

ISRs
The user must configure the ISR's minimum recurrence time and program the ISR handler routine. The ISR is disabled after servicing it, for the minimum recurrence time. In this way, enough CPU time is ensured for running the tasks, even in cases where an ISR retriggers quickly.

Idle task
Firstly, the OS executes the idle task. The task content is application specific (so the user can define it). The idle task runs only in the 'free' time between the executions of the periodic tasks. It is preempted by all pending periodic tasks. One way to use the idle task is to call the 'sleep' instruction, to put the CPU in idle mode.

COM
OSEK also standardizes the communication service. Messages can be sent between tasks in the same CPU or between CPUs.

References
1. http://www.osek-vdx.org/



management

Save user’s time with a well designed interface

Time is extremely important for the users, which is why we should care. On every project, we should ask ourselves the questions: "Do I save myself a few development hours at the expense of the user?" and "How could I improve the user experience?"

Axente Paul

paul.axente@3pillarglobal.com Senior UX Engineer @ 3Pillar Global


People hate wasting time, especially online. We spend so much time online that even the smallest interactions add up. A small inconvenience doesn't seem like much, but several of them put together may lead to frustration and increase the probability that users give up on the service you are trying to provide. We are careful with our own time when we start a project. We have milestones to reach, demos to give, bugs to fix and, if these weren't enough, we also have Internet Explorer. But we are forgetting an essential aspect: everything we do has a positive or a negative effect on the user experience. We shouldn't allow our feelings regarding the project to prevail over the user experience. In most cases, the user suffers because we do not like the project, we don't have enough time and so on. Often, little and good is all you need, as compared to a lot and bad. Steve Jobs used to say that, by improving the boot time of the Macintosh, one could save lives. Jobs was obsessed with not wasting the time of the users of his products. We should be as careful as he was when it comes to our own products. Probably millions of people will not use your site, but millions do use the web. We are all stealing people's lives through faulty interactions. When I am working on a site, the question that constantly bothers me is: "Do I save myself a few development hours at the expense of the user?"

This is the heart of the problem. Out of the desperation not to exceed the deadline or the budget, we spare ourselves, and everything is reflected upon the user. Let's have a closer look at the effects our decisions have on the users.

The performance of the site – the Killer in the shadow

The most obvious and painful way you can frustrate the users is the loading time of a site, and when we are talking about performance, it is not only the users who suffer, but the client as well. The story of performance is a bit more complicated, as most of us blame the server and the internet, or we repeat the classical excuse: "It works just fine on my computer!" As a matter of fact, the argument that the server or the internet is to blame is valid, but at the same time there are a lot of native browser APIs which come to our help1. We can detect whether the user is on a mobile device or not, whether he is connected to Wi-Fi or mobile internet and so on. We have so many resources that can come to our help, but most of us do not use them. Based on everything we know about the user – whether he is mobile or not, whether he has a high speed internet connection or whether he still has battery on his device – we can improve performance and at the same time provide unique experiences.

1 http://www.sitepoint.com/10-html5-apis-worth-looking/


The trouble is that improving performance is often the last thing we think about and, at the same time, the one developers avoid the most. We have become lazy with the spread of high speed internet. We are cheating on optimizing the images, reducing the HTTP requests and the JavaScript libraries. All this happens because many of us know how to develop many things on the web, but we don't know an essential thing: how a browser works and how it renders the content – elementary knowledge needed to build a performant website. So, questions such as "How can I optimize the loading of this file?", "How can I make this piece of code more performant?" and "How can I write the same JavaScript function in fewer lines?" are the ones that define an attitude preoccupied with the performance of the site.

The forms – eternal pains in the neck

Each form has its own specifics, its own rules, as well as problems it shares with the rest of the forms. There is no standard, and it would probably be useful to have one, but, unfortunately, things are different: everyone is free to do more or less whatever they want on the web. And when it comes to forms, each service has its own algorithms, some more complicated than others. We should appreciate, however, the attempts to ease the registration and log-in process by using social profiles. Many of us stop filling in a form when we come upon a certain suspicious CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart). It was invented in the year 2000 and is based on a rather simple process, which has meanwhile become useless: you get a picture of some characters which only a human being can recognize, you type them in, and then you move on to the next step. It was very useful in blocking robots in 2000. But 14 years have passed since then, and we have progressed considerably. I don't think it is necessary any more to waste users' time on CAPTCHAs. Millions of hours have been wasted simply because users were forced, many times over, to fill in one extra field just to be able to send a simple form. And all this because we haven't been able to solve the robot issue yet. I am not talking here only about CAPTCHA, but about any system which forces users to prove they are human beings. Firstly, why should users prove they are people? The robot issue could have been solved, had we allotted it a little time. The "honeypot" technique is a very good example in this line. Moreover, there are server-side solutions which help us filter out the automated requests. The truth is that throwing a CAPTCHA onto the page takes less time and comparatively minimal effort. CAPTCHA is usually the last thing in a form, which is good.
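The honeypot (or "honeytrap") technique mentioned above can be sketched in a few lines. The idea: the form contains an extra field hidden from humans via CSS; real users leave it empty, while naive bots fill everything in. The field name `website_url` is just an illustrative choice.

```javascript
// Server- or client-side check: a filled-in honeypot field is a
// strong signal that the form was submitted by a bot.
function isLikelyBot(formData) {
  const honeypot = formData.website_url;
  return typeof honeypot === 'string' && honeypot.trim().length > 0;
}
```

The hidden field should be concealed with CSS (e.g. positioned off-screen), not with `type="hidden"`, which simple bots already know to skip.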

Why are the passwords so complicated?

Most of us would reply: "For security reasons". It is true that passwords must be safe. But lately almost everyone forces you to build your password in a certain way. Why am I forced to have, as a password, a mix of numbers, letters, special characters, capital letters, small letters, and so on?

Why isn't it up to the user to choose his password, be it a combination of characters or a whole sentence? "My password is this one and I dare you to guess it!" It is the length that makes a password more secure, and a password like the one in the example is also easier to remember. And if the system doesn't allow spaces, it can simply ignore them. We could also provide the option of seeing what you are typing in the password field.
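The "see what you are typing" option suggested above takes only a couple of lines. The sketch below works on anything with a `type` property, such as a real `<input>` element in the browser.

```javascript
// Toggle a password field between hidden and visible text.
function togglePasswordVisibility(input) {
  input.type = input.type === 'password' ? 'text' : 'password';
  return input.type;
}

// Browser wiring (element names are illustrative):
// showButton.addEventListener('click', () =>
//   togglePasswordVisibility(document.querySelector('#password')));
```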

Do not force users to "correct" their mistakes in the forms. We should appreciate any attempt to prepopulate the content of a field: for instance, based on the postal code, you can almost generate the user's complete address, which may spare him from filling in four or five fields. It is a rather interesting thing but, unfortunately, not many people put it into practice. Some of the scripts that populate the address based on the postal code require the information to be introduced without any spaces, commas, and so on. Instead of stripping out every non-digit character in the script configuration, the user is compelled to "correct his mistake". Why should the user introduce the data in a certain format? Why do we waste his time and force him to retype the data? Why don't we format the data ourselves? If a field cannot contain characters such as a full stop or a comma, it is much easier for us to clean it up in the script than to force the user to correct a mistake which may not even be his fault. The country selectors are a pain in the neck and they should be considerably


www.todaysoftmag.com | no. 30/december, 2014

29


improved. Why not have the most common countries at the top of the list, or add a search field that filters the countries while the user is typing? There are many solutions which could improve the quality of the country selectors; all we have to do is pay a little more attention to the small things. One more thing we should take into consideration is the interaction with a form on mobile devices. We should think about alternative controls (swipe, pull, push, etc.) in order to make it a little easier for the user to fill in a form. Time and again I have seen forms that are impossible to fill in on a mobile. There would be much more to say about forms, but I will stop here, as the article would become too long.
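Both ideas – formatting the data ourselves instead of making the user "correct" it, and filtering the country list as the user types – are cheap to implement. A minimal sketch:

```javascript
// Strip everything that is not a digit from a postal code,
// instead of rejecting input that contains spaces or dashes.
function normalizePostalCode(raw) {
  return raw.replace(/\D/g, '');
}

// Narrow a country list while the user types into a search field.
function filterCountries(countries, query) {
  const q = query.trim().toLowerCase();
  if (q === '') return countries;
  return countries.filter(name => name.toLowerCase().startsWith(q));
}
```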

Repetitive Interactions

We must be careful with the elementary interactions. Can we shave a fraction of a second off the repetitive ones? Take the search on a site, for instance: does the user have to click the "Search" button, or does pressing the "Return" key also work? First of all, he shouldn't have to press the search button in order to get the results. Nowadays most search forms work both ways, but because the Search button sits next to the search field, most users are tempted to click on it. We have started to neglect the "Remember me" option, because modern browsers come to our help, which is not a bad thing. Still, what do we do if the user is on a public computer? A relatively simple answer would be: "Use a private session!" Private sessions in the browser are something relatively new, and occasional users are not familiar


enough with this option. Then there are navigation and the dropdown menus. How many times have three-level menus driven us to our wits' end, and how difficult was it to reach the last level? If you have more than two dropdown levels (even two are a bit much), it is clear that you are doing something wrong: what should have been something simple and quick turns into a process fit for a surgeon's skills, where hair-fine precision prevails. In most cases, difficult navigation is the main reason why users leave a site.
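The search interaction discussed earlier – submit on the Return key as well as on the button – boils down to a one-line check plus some wiring; the element and function names in the comment are illustrative.

```javascript
// Decide whether a keyboard event should trigger the search.
function shouldSubmitSearch(key) {
  return key === 'Enter';
}

// Browser wiring (illustrative):
// searchField.addEventListener('keydown', e => {
//   if (shouldSubmitSearch(e.key)) runSearch(searchField.value);
// });
```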

Help the users process the content more quickly

We waste so much of our users' time with large amounts of content written so badly that it is impossible for them to find the pieces of information they need. For a start, we could make better use of titles, paragraphs, lists, quotes, generous spacing between texts and clearer areas, editing the content in such a way that users can scan it and extract the relevant information as quickly as possible. It is very easy to distract the user's attention from what is important and take him to areas where you can lose him, which is why we should be very careful with what we place around the areas of interest. In the design process, we divide the content into three areas:

The 1st area: navigation, search field, log-in area, account, shopping basket if it is an online shop, product list, item list, service list and sometimes the contact page.

The 2nd area: useful information, secondary pages (about us, frequent


questions, help, social profiles, newsletter, etc.).

The 3rd area is usually the footer area, which only few people pay attention to.

Let's take the example of an online shop whose main path is the following sequence: choose the product, add it to the shopping cart, pay. The more complicated the process, the more time the user wastes; he loses patience and eventually abandons the order. All of the above are but a small part of the problems users deal with every day, because of us, in their search for information. We could do a lot more, in every domain of the web. Most of the time we are aware of the fact that we are wasting our users' time, but we do not take any action. This is why we should be vigilant and ask ourselves one thing: "How could I save a little of the user's time in this situation?" And we should not forget what Steve Jobs said: "Things don't have to change the world to be important."



programming

From Design to Development

Lately, as the web becomes more and more diversified, more and more designers with limited technological knowledge manage to build successful sites. From the beginning, we must admit that designers are totally dependent on developers to bring their creations to life. On our own, most of us are able to create a static demo or, if we are lucky, a basic animation, but that doesn't mean that's all there is to it. Thus, we need a developer to translate our design into code that any device can understand and execute, anywhere in the world.

Vlad Derdeicea
office@subsign.co
Lead Graphic Designer @ Subsign

How do designers communicate with the developers most of the time? The designers send over a bunch of images, example sites, mixed-up PSDs and incomplete documentation. Once the development phase has begun, changes to the graphics or to the basic structure inevitably occur. This bunch is held together by short meetings and superficial notes and, at the end of the project, everybody hopes the result bears resemblance to what the designer sent in the first phase. We all agree that there is a gap between designers and developers. I don't have a simple solution in this respect, but I do have a few pieces of advice received over time.

Make a solid plan

Drawing up a plan is essential not only to the building of successful websites, but also for the developer to understand what your client's expectations are. A good way to avoid misunderstandings is to show the developer your wireframe right from the start and ask whether what you want to create is achievable. Now is the best time to ask questions such as: "Can we produce this thing within the established budget?" or "Do you see (add your concern here) as a problem which might make us exceed the deadline?"

Generate a good mock-up of your design

Before opening Photoshop, structure the site in a grid that is adequate for different resolutions and devices. Using a content width of at most 1080 px is a good way to make the developer's life easier. Generating a grid is also an easier method for him to observe the things as they



are in line. Using a grid-based layout is the easiest and best way to get a "pixel perfect" design. Once the design is ready, it can be modified by the client or it can move on to the development phase, where it is important that you foresee how it will look scaled on a mobile phone, a tablet and a desktop. How will your layout be affected if another category is added, or if the client wishes to add five more paragraphs of text? All these questions should have an answer before you send the project to development. Organize the PSD in such a way that others also understand

which layers or groups to work with. A good solution is to group the layers and name them as they would be named on the web (Header, Footer, Content, Video, etc.). Do your best to make sure that each graphical element is on a separate layer; there is nothing worse than trying to separate two elements from a single layer. All these things sound natural, but they significantly reduce the development time when they are done in an organized way. Moreover, it is essential to also have a few detail images (different states of the buttons, hover menus, calls to action, etc.). A good application for communicating feedback on an image or other details is Skitch.

Learn the functionality of your UI

When you create elements with special features or animations, the best thing you can do is to find examples or write clear documentation. The developers cannot read your mind (yet), so do not give them only a static image with no notes or demo.


Give efficient feedback

Once the developer has finished his work, there will be reviews from you or the client. It hardly ever happens that he achieves the final version on the first review, so you will have to present the changes you want him to make as clearly and as specifically as possible. In order to avoid any kind of confusion, it is recommended that you track the modifications through versions. In conclusion, the more organized you are, the easier it will be for you to make changes or to make others understand and appreciate your vision. If you are new to web design, do not let any opportunity pass you by. Ask your developer how you can save him a few hours and, at the same time, create things that are as innovative as possible. Learn from your mistakes so that, as you move forward, you can comply more easily with the established deadline.


programming


To Be Or Not To Be Responsive

"The web is responsive on its own – by default. It's us that's been breaking it all these years by placing content in fixed-width containers." – Andy Hume

While designing websites in a responsive manner has lately become a must, we should not forget that there is no universally correct solution for anything, and this also applies to web design. While responsiveness can be a massive advantage and an attractive feature for the end-user, it can also prove to be a drawback if not used properly. In this article I propose taking a look at the DOs and DON'Ts of responsive web design: how it should be used, when it should be used and, most importantly, when it should be avoided.

What is Responsive Web Design

The importance of the presentation tier comes from the fact that it is responsible for the user interaction; it is what the end-user sees after all the processing that the application handles. The presentation has to match the complexity of the rest of the application: it must be simple, intuitive and light even if the system is heavy or complex, because the end-user does not care about the load of the system or its elaborate structure; his desire is to navigate as straightforwardly and as fast as possible through the application.

As far as web applications are concerned, the front-end tier is the content rendered by the browser. This content can be static or dynamic, but it is usually a combination of the two. There are, of course, a lot of challenges in developing content for browsers, because of the numerous types of browsers and their versions, many having particular quirks. There is no definitive set of principles to take into consideration when creating the interface or design of a software product, but there is one on which everyone agrees: simplicity. An important aspect of styling is checking across several browsers and writing concise, terse code that is specific yet generic at the same time and displays well in as many renderers as possible [Cod09]. CSS can be used to display the document differently depending on the screen size or device on which it is being viewed. Coding towards this kind of document flexibility is known as responsive web design. The aim is to craft sites that provide an optimal viewing experience independent of the size of the display used to render them. In addition, navigation must be possible with a minimum of resizing, panning or scrolling. A website with responsive web design adapts the layout to the viewing environment (as seen in Figure 1) by using fluid grids, flexible images [Mar11] and CSS3 media queries. Simply put, media queries are "if clauses" for the browser that renders the page: it knows that some styles have to be applied only if a condition is matched (usually a condition on the width of the screen).

To be or not to be responsive – this is the question

• The main concern with this concept is that designers and developers define "responsive" differently, which leads to communication problems¹. Hence, let's see how the term was

Figure 1 – Example of a responsive layout on a monitor, tablet and mobile device

¹ "What we mean when we say responsive", Lyza Danger Gardner, A List Apart, March 2014



defined when it first appeared, in 2010²: Ethan Marcotte defined it as providing an optimal viewing experience across a wide range of devices using three techniques: fluid grids, flexible images and media queries.
• We must understand that the main goal of a mobile web experience is to be lightning fast and to provide a consistent user experience on all devices, and most websites fail at the first part of the statement: performance. Responsive web design was never meant to "solve" performance, which is why we can't blame the technique itself [Fir14].
• A new movement is starting to emerge, called by some "responsible responsive web design", which suggests that responsive design shouldn't be the only solution for mobile, but should be used together with other techniques, such as conditional loading and responsiveness according to device group.
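The conditional loading mentioned in the last bullet can be sketched roughly as follows. The decision is kept in a pure helper; the browser wiring in the comment uses the standard `window.matchMedia`, and the file names are illustrative.

```javascript
// Choose which stylesheet to load for the current viewport,
// instead of shipping every stylesheet to every device.
function pickStylesheet(isNarrowViewport) {
  return isNarrowViewport ? 'mobile.css' : 'desktop.css';
}

// Browser wiring (illustrative):
// const link = document.createElement('link');
// link.rel = 'stylesheet';
// link.href = pickStylesheet(
//   window.matchMedia('(max-width: 480px)').matches);
// document.head.appendChild(link);
```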

Conditional resource loading

Media queries are usually either stored in a single CSS file with multiple @media declarations, or in multiple CSS files linked from the main page via media attributes (e.g. <link rel="stylesheet" href="mobile.css" media="(max-width: 480px)">). Although most tutorials and websites use the first technique, it is wrong. Furthermore, there are some who think that the second option only loads the CSS corresponding to the resolution of the device; this is also wrong. Both of these solutions are wasteful, simply because they make the browser load all the possible CSS and then parse it to see which instructions to apply. This hurts performance, because slower mobile browsers end up parsing both the mobile and the desktop styles, and CSS blocks rendering. A solution for conditional resource loading is to replace media queries with JavaScript matchMedia queries and only load the styles designed for the resolution being used. Tools such as Modernizr also help, by providing a way of detecting features that does not depend solely on the resolution.

Okay, okay. How do I do it?

There is no universal recipe for responsive web design (yet), but there are some tricks that help enhance performance when developing responsive solutions:
• follow a mobile-first approach – this is extremely important; think of mobile performance when developing or redesigning a website (the latter is harder)
• load only the JavaScript and the styles you need for the current device, using conditional loading
• deliver responsive images via JS until better options appear
• deliver the document to all devices with the same URL and content, but with a different structure
• test what happens on real devices (not only by resizing the browser), preferably on slow connections

So, what's the answer? Conclusions.

The short answer is yes, you should be responsive when designing and implementing websites, but the methods of doing it have to change. Responsiveness is important especially because of the exponential increase of mobile users, who will slowly but surely overcome the desktop ones in the years to come. The answer depends on the nature of the content, though: if there is too much content that needs to be displayed on the phone, it is better to keep a simple version of the site for mobile and provide an app to deal with the complexity of your system (banks usually follow this approach, for example). Relying on a responsive website versus a mobile app is a business call³, rather than a technical one. The ultimate goal of a website should be "happy users", not "being responsive". When you know your goals, you can decide which tools and techniques are best to achieve them. Users won't be happy without a high-performing website [Fir14]. Keeping the above in mind, and in order to end on a happy note, here is a quote from Brad Frost: "Your visitors don't give a s**t if your site is responsive"⁴. They don't resize the browser and they don't care what responsive means. They want something fast and easy to use.

Raul Rene Lepsa
UI Developer @ SF AppWorks

Bibliography

[Cod09] Ivan Codesido, What is front-end development, The Guardian, September 2009
[Fir14] Maximiliano Firtman, You May Be Losing Users If Responsive Web Design Is Your Only Mobile Strategy, Smashing Magazine, July 2014
[Mar11] Ethan Marcotte, Flexible Images, A List Apart 328, June 2011

² "Responsive web design", Ethan Marcotte, A List Apart, May 2010
³ "Responsive Website or Mobile App: Do You Need Both?", Rahul Varshneya, Entrepreneur, July 2014
⁴ "Responsive Web Design: Missing the Point", Brad Frost, personal blog, March 2013


programming


JDK 9 – A Letter to Santa?!

As everybody knows, winter (especially the time before Christmas) is a proper time for dreaming and hoping, a moment when dreams seem within reach; a moment when children and grown-ups write, on paper or in their thoughts, fictive or real letters to Santa Claus, hoping their dreams will come true. This is catchy: even the people behind OpenJDK expressed their wishes for Java on the first day of December, when they published an updated JEP list. Hold on, don't get excited just yet... as we bitterly know, they might become reality somewhere in early 2016. Or at least this is the plan, and history has shown us what sticking to a plan means. Of course, the presence of a JEP in the above-mentioned list doesn't mean that the final release will contain it, as the JEP process diagram clearly explains, but for the sake of winter fairy tales we will go through the list and provide a brief description of the intended purpose of each item. Disclaimer: the list of JEPs is a moving target; since the publication of this article, the list has changed at least once.

JEP 102: Process API Updates

For those of you who were not that good, it seems that Santa punished you: you had the pleasure of working with Java's process API and, of course, met its limitations. After the changes in JDK 7, the current JEP improves this API even further, giving us the ability:
• to get the pid (or equivalent) of the current Java virtual machine and the pid of processes created with the existing API
• to get/set the process name of the current Java virtual machine and of processes created with the existing API (where possible)
• to enumerate Java virtual machines and processes on the system; information on each process may include its pid, name, state, and perhaps resource usage
• to deal with process trees, in particular to destroy a process tree
• to deal with hundreds of sub-processes, perhaps multiplexing the output or error streams to avoid creating a thread per sub-process

I don't know about you, but I can definitely find at least a couple of scenarios where I could put some of these features to good use, so fingers crossed.

JEP 143: Improve Contended Locking

I had the luck and pleasure of attending a performance workshop the other day with Peter Lawrey, and one of the rules of thumb of Java performance tuning was: the less concurrent an application, the more performant it is. With this improvement in place, the rules of performance tuning might need to find another rule of thumb, as this JEP targets the latency of using monitors in Java. To be more accurate, the targets are:
• Field reordering and cache line alignment
• Speed up PlatformEvent::unpark()
• Fast Java monitor enter operations
• Fast Java monitor exit operations
• Fast Java monitor notify/notifyAll operations
• Adaptive spin improvements and SpinPause on SPARC

JEP 158: Unified JVM Logging

The title kind of says it all :). If you are working with enterprise applications, you have had to deal at least once or twice with a gc log, and I suppose you raised at least an eyebrow (if not both) when seeing the amount of information and the way it was presented there. Well, if you were "lucky" enough, you probably migrated between JVM versions, and then definitely wanted/needed another two eyebrows to raise when you realized that the parsers you had built for the previous version had issues dealing with the current version of the JVM logging. I suppose I could continue with the reasons why it is bad, but let's concentrate on the improvements, so hopefully by the next release we will have a reason to complain that it was better before :P.

The gc logging seems to try to align with the other logging frameworks we might be used to, like log4j. So, it will work on different levels from the perspective of the logged information's criticality (error, warning, info, debug, trace), the performance target being that error and warning should have no performance impact on production environments, info should be suitable for production environments, while debug and trace have no performance requirements. A default log line will look as follows:

[gc][info][6.456s] Old collection complete

In order to ensure flexibility, the logging mechanisms will be tunable through JVM parameters, the intention being to have a unified approach to them. For backwards compatibility purposes, the already existing JVM flags will be mapped to new flags wherever possible. To be as suitable as possible for real-time applications, the logging can be manipulated through the jcmd command or MBeans. The sole, and probably biggest, downside of this JEP is that it only targets providing the logging mechanisms and doesn't necessarily mean that the logs themselves will also improve. For the beautiful logs we dream of, maybe we need to wait a little bit longer.

JEP 165: Compiler Control

As you probably know, the Java platform uses JIT compilers to ensure an optimal run of the written application. The two existing compilers, intuitively named C1 and C2, correspond to client side (-client option) and server side (-server option) applications, respectively. The expressed goals of this JEP are to increase the manageability of these compilers:
• Fine-grained and method-context dependent control of the JVM compilers (C1 and C2).
• The ability to change the JVM compiler control options at run time.



• No performance degradation.

JEP 197: Segmented Code Cache

It seems that JVM performance is a target of the future Java release, as this JEP intends to optimize the code cache. The goals are:
• Separate non-method, profiled, and non-profiled code
• Shorter sweep times due to specialized iterators that skip non-method code
• Improve execution time for some compilation-intensive benchmarks
• Better control of JVM memory footprint
• Decrease fragmentation of highly-optimized code
• Improve code locality, because code of the same type is likely to be accessed close in time
• Better iTLB and iCache behavior
• Establish a base for future extensions
• Improved management of heterogeneous code; for example, Sumatra (GPU code) and AOT compiled code
• Possibility of fine-grained locking per code heap
• Future separation of code and metadata (see JDK-7072317)

The first two declared goals are, from my perspective, quite exciting; with the two in place, the sweep times of the code cache can be greatly improved by simply skipping the non-method areas – areas that should exist for the entire runtime of the JVM.

JEP 198: Light-Weight JSON API

The presence of this improvement shouldn't be a surprise, but for me it is surprising that it wasn't done sooner in the JDK, as JSON has replaced XML as the "lingua franca" of the web, not only for reactive JS front-ends but also for structuring the data in NoSQL databases. The declared goals of this JEP are:
• Parsing and generation of JSON RFC 7159.
• Functionality that meets the needs of Java developers using JSON.
• Parsing APIs which allow a choice of parsing token stream, event (includes document hierarchy context) stream, or immutable tree representation views of JSON documents and data streams.
• Useful API subset for compact profiles and Java ME.
• Immutable value tree construction using a Builder-style API.
• Generator style API for JSON data stream output and for JSON "literals".
• A transformer API, which takes as input an existing value tree and produces a new value tree as a result.

Also, the intention is to align with JSR 353. Even if the future JSON API will have limited functionality compared to the already existing libraries, it has the competitive advantage of integrating with and using the newly added features of JDK 8, like streams and lambdas.

JEP 199: Smart Java Compilation, Phase Two

sjavac is a wrapper around the already famous javac, a wrapper intended to bring improved performance when compiling big projects. As the project currently has stability and portability issues, the main goal is to fix those issues and probably make it the default build tool for the JDK project. The stretch goal would be to make the tool ready to use for projects other than the JDK, and probably to integrate it with the existing tool chain.

JEP 201: Modular Source Code

The first steps in the direction of the Project Jigsaw implementation: reorganizing the source code into modules, enhancing the build tool for module building, and enforcing the module boundaries.

JEP 211: Elide Deprecation Warnings on Import Statements

The goal of this JEP is to facilitate making large code bases clean of lint warnings. Unlike uses of deprecated members in code, deprecation warnings on imports cannot be suppressed using the @SuppressWarnings annotation. In large code bases like that of the JDK, deprecated functionality must often be supported for some time, and merely importing a deprecated construct does not justify a warning message if all the uses of the deprecated construct are intentional and suppressed.

JEP 212: Resolve Lint and Doclint Warnings

As the launch date for JDK 9 is early 2016, this JEP is perfect for that time of the year and the corresponding chores: the spring clean-up. Its main goal is to have a clean compile under javac's lint option (-Xlint:all) for at least the fundamental packages of the platform.

JEP 213: Milling Project Coin

Project Coin's target, starting with JDK 7, was to bring some syntactic sugar into the Java language – some new clothes for the mature platform. Even if it didn't bring any improvements to the performance of the language, it increased the readability of the code, and hence brought a plus to one of the most important assets of a software project, in my opinion: a more readable code base. This JEP targets four changes:
• Allow @SafeVarargs on private instance methods.
• Allow effectively-final variables to be used as resources in the try-with-resources statement.
• Allow the diamond operator with inner classes if the argument type of the inferred type is denotable.
• Complete the removal, begun in Java SE 8, of underscore from the set of legal identifier names.

Olimpiu Pop

olimpiu.pop@ullink.com
Senior Software Developer @ Ullink
Co-organizer @ Java Advent



programming

Machine Learning in Practice

I believe quality software is about making the user's life as easy as possible: not just by executing the required job flawlessly and efficiently, but also by simplifying the task without losing functionality. A lot of our everyday tasks could be made easier if our software were able to make suggestions, try to guess and automate some of our tasks. This is one of the use cases for machine learning techniques. In addition, applications such as web search engines, image and sound recognition, spam detection and many more all use machine learning to solve problems that would otherwise be almost unsolvable.

Sergiu Indrie
sergiu-mircea.indrie@hp.com
Software Engineer @ HP

Explanation

Although machine learning might appear to be complex and hard to apply, like everything in computing, it all comes down to 1s and 0s. The basic principle behind it is pretty simple math. I want to determine the similarity of complex items, but I can't even begin to declare comparison functions, so I just convert the complex items into numbers and let the machine calculate their correlation.

Let's take a simple scenario: we have a user base with the following data:

user_id   city     age   hobby
1         London   32    tech
2         Paris    40    plants
3         Kansas   41    <empty>

And we want to provide similar users with similar hobby content (the idea is that similar people like similar things). Thus, we convert the input data into numerical data (manually or automatically):

city   age
1      32
2      40
3      41

We can plot the data in the following chart.
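To make the similarity idea concrete, here is a sketch (not from the article) of a weighted Euclidean distance over the numeric rows above; the weight values are hypothetical, chosen so that age counts more than the city code.

```java
// Illustrative sketch: a weighted Euclidean distance over the numeric
// (city, age) rows above. The weights are hypothetical assumptions;
// weighting age more makes users 2 and 3 come out closest.
public class WeightedDistance {
    static double distance(double[] u, double[] v, double[] w) {
        double sum = 0;
        for (int i = 0; i < u.length; i++) {
            double d = u[i] - v[i];
            sum += w[i] * d * d; // each squared difference is weighted
        }
        return Math.sqrt(sum);
    }

    public static void main(String[] args) {
        double[] user1 = {1, 32};       // city=London(1), age=32
        double[] user2 = {2, 40};       // city=Paris(2),  age=40
        double[] user3 = {3, 41};       // city=Kansas(3), age=41
        double[] weights = {0.1, 1.0};  // weight age more than city

        System.out.println(distance(user2, user3, weights)); // small: similar users
        System.out.println(distance(user1, user2, weights)); // large: dissimilar users
    }
}
```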

Based on the above graph we can see that the two points in the upper side are closer to each other (especially if we apply weights to the relevant attributes). This would indicate that the two users (user_id 2 and 3) are similar. In this case, this is due to the close age values.

Mathematically, this would be done by computing the distance between the two points, something like the Euclidean distance but weighted:

d(u, v) = sqrt(w_age * (u_age - v_age)^2 + w_city * (u_city - v_city)^2)

where u and v are two vectors representing two points (items), each vector is composed of two values, age and city (u_age, u_city), and w_age and w_city represent the weights of the age and city attributes.

By providing the machine learning solution with lots of positive and negative examples (training data), the algorithm can fine-tune itself to correctly determine the resemblance of two items.

Three important use cases of machine learning that use the above principle are:
1. Recommendations - analyzes the preferences of a user and, based on user similarity, suggests potential new preferences
2. Clustering - groups similar items together (requires no explicit training data)
3. Classification - assigns items to categories based on previously learned assignments (also referred to as a classifier)

Sample Application

One of the popular machine learning libraries in Java is Weka. We will use the Weka APIs to write a simple classification machine learning application. We want to replicate Gmail's Priority Inbox functionality in the Geary desktop email client so that, when a new email is received, we can automatically determine its membership to one of 5 categories: Primary, Social, Promotions, Updates and Forums.

Data Representation

First, we need a way to represent our data. The 5 categories are easy: we'll use an index from 1 to 5. The incoming email content is a bit trickier. In these types of problems, the solution is generally to encode a piece of text using the bag-of-words model. What this means is that we represent the piece of text as a word count vector. Let's take the following two text documents:

1. I like apples.
2. I ate. I slept.

Based on these two documents we create a common dictionary assigning indexes to all the words.

I        1
like     2
apples   3
ate      4
slept    5

Thus we can create the following representations of the two pieces of text:

1, 1, 1, 0, 0
2, 0, 0, 1, 1

Each number in the above two vectors indicates the word count of that index in the dictionary. (The second vector starts with 2, meaning that the word "I" is used twice in the document.)

To simplify this application we will use a partial bag-of-words model. We will define a custom dictionary containing only one relevant keyword for each category:

keyword        category index   category
dad            1                Primary
congratulate   2                Social
discount       3                Promotions
reminder       4                Updates
group          5                Forums

Using the above dictionary we'll encode the following email:

Hi dad. Remember to pick me up from preschool. Oh and dad, don't be late!

into the following word vector: 2, 0, 0, 0, 0

Weka Code

In order to start writing a solution using the Weka APIs, first we need to define the data format. We will start by describing a data row.

// create the 5 numeric attributes corresponding to
// the 5 category keywords
Attribute dadCount = new Attribute("dad");
Attribute congratulateCount = new Attribute("congratulate");
Attribute discountCount = new Attribute("discount");
Attribute reminderCount = new Attribute("reminder");
Attribute groupCount = new Attribute("group");

Each row must be associated with one of the 5 categories, thus we proceed with defining the category attribute.

// Declare the class/category attribute along with
// its values
FastVector categories = new FastVector(5);
categories.addElement("1");
categories.addElement("2");
categories.addElement("3");
categories.addElement("4");
categories.addElement("5");
Attribute category = new Attribute("category", categories);

And finally we join all the attributes into an attribute vector.

// Declare the feature vector (the definition of one data row)
FastVector rowDefinition = new FastVector(6);
rowDefinition.addElement(dadCount);
rowDefinition.addElement(congratulateCount);
rowDefinition.addElement(discountCount);
rowDefinition.addElement(reminderCount);
rowDefinition.addElement(groupCount);
rowDefinition.addElement(category);
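The dictionary-based encoding described earlier (counting category keywords in an email) can also be sketched without Weka; this is an illustrative example where the tokenization rules (lowercasing, splitting on non-letters) are assumptions, not part of the article.

```java
import java.util.Arrays;

// Illustrative sketch: encode an email into the 5-keyword count vector
// used above. Tokenization (lowercase, split on non-letters) is an
// assumption of this sketch.
public class BagOfWords {
    static final String[] DICTIONARY = {"dad", "congratulate", "discount", "reminder", "group"};

    static int[] encode(String email) {
        int[] counts = new int[DICTIONARY.length];
        for (String token : email.toLowerCase().split("[^a-z]+")) {
            for (int i = 0; i < DICTIONARY.length; i++) {
                if (DICTIONARY[i].equals(token)) {
                    counts[i]++; // count occurrences of each dictionary keyword
                }
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        String email = "Hi dad. Remember to pick me up from preschool. Oh and dad, don't be late!";
        System.out.println(Arrays.toString(encode(email))); // [2, 0, 0, 0, 0]
    }
}
```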

Training Data

Machine learning classification algorithms need training data to adjust internal parameters to provide the best results. In our scenario, we need to provide the implementation with examples of email category assignments.


For the purpose of this demo, we will generate our own training data by using the class below.

public class TrainingDataCreator {
    public static void main(String[] args) throws IOException {
        ICombinatoricsVector<String> valuesVector = Factory.createVector(
                new String[]{"0", "1", "2", "3", "4"});
        Generator<String> gen = Factory.createPermutationWithRepetitionGenerator(
                valuesVector, 5);
        File f = new File("category-training-data.csv");
        FileOutputStream stream = FileUtils.openOutputStream(f);
        for (ICombinatoricsVector<String> perm : gen) {
            if (Math.random() < 0.3) { // restrict the training set size
                String match = determineMatch(perm.getVector()); // first highest count wins
                String features = StringUtils.join(perm.getVector(), ",");
                IOUtils.write(String.format("%s,%s\n", features, match), stream);
            }
        }
        stream.close();
    }
}

TrainingDataCreator uses permutations to generate all training rows with word counts up to 4 (each word count ranges from 0 to 4). Since we want this to be realistic training data, we will refrain from creating a complete training set and only include 30% of the permutations. Also, we want the classification algorithm to make some sense of the data, so we are setting the category for each row based on the first highest word count (e.g. for the word counts dad=2, congratulate=3, discount=0, reminder=1, group=3, congratulate is the first word with the highest count, 3). The generated file will contain lines like the following:

3,1,1,0,0,1
4,1,1,0,0,1
3,3,2,4,1,4

Now that we've generated the training data, we need to train our classifier. We start by creating the training set.

// Create an empty training set and set the class (category) index
Instances trainingSet = new Instances("Training", rowDefinition, TRAINING_SET_SIZE);
trainingSet.setClassIndex(5);

addInstances(rowDefinition, trainingSet, "category-training-data.csv");

Inside the addInstances method we read the CSV file and create an Instance object holding a training data row for each CSV row.

// Create the instance
Instance trainingRow = new Instance(6);
trainingRow.setValue((Attribute) rowDefinition.elementAt(0), Integer.valueOf(values[0]));
trainingRow.setValue((Attribute) rowDefinition.elementAt(1), Integer.valueOf(values[1]));
trainingRow.setValue((Attribute) rowDefinition.elementAt(2), Integer.valueOf(values[2]));
trainingRow.setValue((Attribute) rowDefinition.elementAt(3), Integer.valueOf(values[3]));
trainingRow.setValue((Attribute) rowDefinition.elementAt(4), Integer.valueOf(values[4]));
trainingRow.setValue((Attribute) rowDefinition.elementAt(5), values[5]);

// add the instance
trainingSet.add(trainingRow);

The next step is to build a classifier using the generated training set.

// Create a naive Bayes classifier
Classifier cModel = new NaiveBayes();
cModel.buildClassifier(trainingSet);

And to put our classification algorithm to the final test, let's say we have an incoming email containing the word dad 5 times, with the rest of the word counts being 0. (Notice that the training data only contained word counts up to 4, meaning that the classifier must extrapolate to determine the correct category, which, naturally, is 1 - Primary.)

// Create the test instance
Instance incomingEmail = new Instance(6);
incomingEmail.setValue((Attribute) rowDefinition.elementAt(0), 5);
incomingEmail.setValue((Attribute) rowDefinition.elementAt(1), 0);
incomingEmail.setValue((Attribute) rowDefinition.elementAt(2), 0);
incomingEmail.setValue((Attribute) rowDefinition.elementAt(3), 0);
incomingEmail.setValue((Attribute) rowDefinition.elementAt(4), 0);
incomingEmail.setDataset(trainingSet); // inherits format rules

double prediction = cModel.classifyInstance(incomingEmail);
System.out.println(trainingSet.classAttribute().value((int) prediction));

The above code will output 1, meaning the Primary category. A few samples of unknown test data are represented in the table below:

input vector      result
0, 0, 0, 5, 0     4
0, 5, 0, 5, 0     2
7, 1, 0, 4, 9     1 - wrong

As we can see, the prediction accuracy is never 100%, but by fine-tuning algorithm parameters and carefully selecting vector features as well as training data we can reach a high correctness rate.

Weka also provides APIs for evaluating your solutions. By creating two data sets (the initial data set containing 30% of the permutations and a second data set containing all the permutations for word counts ranging from 0 to 4) and using the Evaluation class, we are able to obtain 95% accuracy from using 30% of the total training set.

// Test the model
Evaluation eTest = new Evaluation(trainingSet);
eTest.evaluateModel(cModel, testSet);
System.out.println(eTest.pctCorrect());
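The determineMatch helper referenced by TrainingDataCreator is not shown in the article; a possible implementation of its "first highest word count wins" rule, written here as a hypothetical sketch, could look like this:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the determineMatch helper the article refers to:
// return the 1-based category index of the FIRST attribute that holds the
// highest word count.
public class MatchRule {
    static String determineMatch(List<String> counts) {
        int bestIndex = 0;
        int bestCount = Integer.parseInt(counts.get(0));
        for (int i = 1; i < counts.size(); i++) {
            int c = Integer.parseInt(counts.get(i));
            if (c > bestCount) { // strictly greater: earlier ties win
                bestCount = c;
                bestIndex = i;
            }
        }
        return String.valueOf(bestIndex + 1); // categories are 1-based
    }

    public static void main(String[] args) {
        // dad=2, congratulate=3, discount=0, reminder=1, group=3
        // congratulate is the first word with the highest count (3)
        System.out.println(determineMatch(Arrays.asList("2", "3", "0", "1", "3"))); // 2
    }
}
```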



Ending

The above sample code was written using Weka because I feel its APIs are easier to use and understand compared to another popular Java machine learning framework, Apache Mahout. If your use case can deal with less than 100% prediction accuracy and you have the possibility to construct training sets, the advantages may outweigh the costs. We have performed tests showing that, by choosing the best algorithm for your data, you can obtain training times of less than 1 second for 28000 training rows (5 attributes + category) and similar prediction times for 1000 test rows. Machine learning has a tremendous potential to be used in all software domains (even mobile apps), and as we saw in the above use case, you don't have to be a mathematician to use the tools available. Let the learning begin!

References
1. http://architects.dzone.com/articles/machine-learning-measuring
2. http://en.wikipedia.org/wiki/Bag-of-words_model
3. http://www.cs.waikato.ac.nz/ml/weka/
4. https://github.com/rjmarsan/Weka-for-Android
5. https://github.com/fgarcialainez/ML4iOS
6. Introduction to Machine Learning, by Alex Smola, S.V.N. Vishwanathan
7. Mahout in Action, by Sean Owen, Robin Anil, Ted Dunning, Ellen Friedman



management

TODAY SOFTWARE MAGAZINE

Delivering delight

Great success stories in business are often based on being first on the market, but companies thrive by being able to outmanoeuvre the competition and adapt rapidly. Nowadays, the easiest domain in which to find such examples, both good and bad, remains the technology world. "The first major social network, Friendster, was trumped by MySpace, which in turn was trumped by Facebook. Netscape was trumped by Yahoo!, which was then trumped by Google". We all know that this is not the end and, somewhere, in a basement or a small garage, someone is building the next 'big thing' that can become the main actor on the market.

All the current small and medium companies want to scale up in a very short period of time, becoming more complex in their organizations, having multiple roles defined, and willing to manoeuvre as easily as possible in all the situations that the clients from the market unexpectedly bring up. It is important to learn and adapt rapidly and find the shortest path to deliver the most effective services to our clients. In our day-to-day work, we have to adapt to changes in the market, changing customer needs or technology, and to deal with uncertainty and regulatory changes. In fact, the only constant remains change. The better and faster we are at adapting, the more we'll not only survive but thrive, succeeding in delivering premium services on time.

What can we bring into the equation that would differentiate us from others? Is it enough to just deliver something? What's the difference between outputs and outcomes? Is there an obvious distinction between them? Some of us may think the questions are merely semantic, or that there is no obvious difference and no effort is required on this matter. But the truth is that it can be the difference between mediocrity and the creation of lasting and sustainable change. "Ordinary" organizations are stuck on making decisions based on outputs.
Great organizations manage the outcomes. While we could describe the difference simply as outputs being extrinsic and outcomes intrinsic, it is worth thinking about it more fundamentally and profoundly. Companies often measure success only in terms of revenue and cost savings, which is perfectly fine and common for them, but they often lose sight of why they do what they do in the first place. What is it that customers really need, and what will provide value to them? Well, this is a tough question that would require a huge

amount of joint effort, a favourable context, an understanding of what is behind the basic agile principles and willingness from all parties. Unfortunately, most of today's services companies value throughput or outputs over outcomes. They emphasize more over better. Throughput measurements tend to cover how fast we get through the work. As examples for agile teams, we can consider things like "number of items/features/stories completed" or "total story points/ideal hours/t-shirt sizes" completed during an iteration/sprint. Lean teams might use cycle time, or how long it takes a similar-sized item to move from the beginning of the lifecycle to when it is launched. Output would include lines of code, features or function points completed.

Many important representatives of the agile world believe that it's worth spending additional time thinking about alternatives for scaling agile processes to a different level, by really starting to re-evaluate the Agile Manifesto, which was produced a decade ago. This phenomenon is more and more present nowadays at all the dedicated conferences and on relevant blogs. The same relevant agile representatives believe that nowadays "working software" is not enough for our clients. Unless the clients are delighted by the working software, the future of the business is not bright. Client delight has become the new bottom line of business relations. We have, again, many examples of companies that succeed in delighting their customers, like Facebook, Apple, Tesla, Salesforce.

With your permission, I would like to deviate a little from the main subject and share with you some examples of the ways of shifting thinking in Agile. Considering this, and by getting in touch, directly or indirectly, with different people (see
Mitch Lacey, Kenny Rubin, etc.) and by reading different valuable blogs (Mike Cohn, Kenny Rubin, Steve Denning, etc.), I have found some relevant approaches that could be considered in this shifting process. I would start with my favourite one, which is actually the main topic of this article:



Shifting from an output to an outcome

Remember: the implicit goal of work in the Manifesto is "working software", which is actually the production of an output, a definite piece of work. Even "delivery of a user story" is the production of an output. By contrast, delighting the customer is an outcome, which can be considered more of a human experience. The shift from an output to an outcome is fundamental.

Propose customer delight as a new dimension of "done" in Scrum

We all know that in the Scrum methodology there has been a great deal of discussion about the definition of "done". Most organizations fail to recognize that the job is not fully "done" until the customer has been delighted.

Shifting from an implicit goal to an explicit goal

Ideally, an agile implementation delights its customers. All this has a cost, and that cost is almost entirely related to individuals. When those individuals leave, the organization slides back into merely delivering software. To cement the goal in place, it is vital that it be the explicit goal of the company.

Starting to measure customer delight

The importance of measuring customer delight, through the Net Promoter Score, can hardly be over-emphasized. Without measurement, "customer delight" becomes an empty phrase or slogan, and the firm will slide back into focusing only on the financial bottom line. It is only through this measurement that the whole organization can be truly focused on delighting the customer.

There are many other shifting ideas that could bring a different approach, with an amazing outcome, to our clients, but the main purpose of this article is not necessarily to reinterpret the entire Agile Manifesto. Coming back, many others consider aligning strategic outcomes with operational delivery to be the main collective effort in shifting people's mind-set from more outputs to better outcomes. Instead of focusing on producing ever more outputs, they believe the goal of services organizations must be to deliver better outcomes. The key could be in understanding what the right objectives and outcomes are, and then finding the most coherent, simple and cheap path to achieve them.


I mentioned, at the beginning, the importance of making the right manoeuvre. Being objective- and outcome-focused helps people individually, and organizations collectively, to adapt quickly to change and to outmanoeuvre the competition. Everything sounds great so far, but the big challenge starts when we strive to make this shift and ask ourselves: how do we do it? I think it's a matter of choice for each company to use different tools and concepts here, but I do believe that we need an approach that provides enough flexibility for people to innovate rapidly as new opportunities emerge, while also being rigorous enough to help organizations make better strategic decisions in order to achieve a vision.

There is an old saying, "the customer is always right", which might be a good principle for certain situations, but it doesn't serve today's services companies well. A better formulation can be found, one that would sound like this: "if you don't listen to your customer, you will fail, and if you only listen to your customer, you will fail". We all know that all the customers' wishes are transformed into a list of requirements, called the backlog in the agile world, which is prioritized; then we build as many new features as we think possible to meet these requirements. All teams do this over and over again, with a collective focus on prioritizing and delivering ever more features to their customers, ever more efficiently. After all, this does not sound like a bad thing; actually, it is pretty responsible and, in the end, we are giving the customers what they ask for, so what's wrong with that? All business executives will be pleased.

Many of us equate delivery of features with success in the marketplace, so teams quickly focus on how to design, build and test the solutions that meet the requirements, instead of trying to further understand the problems and why they exist, or the desired outcomes behind the backlog.
Until people fully understand the problem they are trying to fix, they cannot possibly know the best way to fix it. Until an organization fully understands what objectives it wants to achieve and what related outcomes it hopes to deliver, it cannot possibly know the best options to pursue. In order to figure this out, we need to know what is going on, why it is happening, and who is experiencing the


problem. The outcome of our work is what really matters. We all intuitively know that our goal isn't to release 10 new features this year, but to improve the lives of our customers as best we possibly can. To do that, we must always focus on the outcome, or the impact our work will have on the users of our products. We can't stop at thinking about the outcome, though. To be as successful as we can be at having a significant impact on our customers' lives, we have to actively work to minimize the output while also maximizing the outcome, or the value, of our services.

As engineers, we are always focused on how to increase the output of our teams. With each story, we aim to understand the value we are trying to provide to our customers, translated into the well-known concept of the minimum viable product (MVP). It is important to make the right choice and start building the right story. Even though this is mainly the responsibility of the Product Owner, who needs to make the decisions, I believe that we can have a huge contribution by asking ourselves: "Are we choosing to work on stories that balance effort and value so that we can have the most impact on our customers' lives?". What is going to be the best choice: doing 10 things crudely or doing 5 things brilliantly?

The Mobius loop, named after the strip discovered by German mathematicians August Ferdinand Möbius and Johann Benedict Listing (according to Wikipedia, a Mobius strip is a "surface with only one side and only one boundary component"), features a continuous loop with a twist in the middle. This visualizes the constant flow of information that should exist within an organization to create clarity of goals at all layers. With Mobius, you can start anywhere you feel the situation requires, but it is important to start with a clear understanding of the problem or situation; after that, we can move on to a deep dive.
There can be situations when we have a set of options up front and working back is required to ensure the problem or situation is well understood. Having the Mobius diagram in mind, as an alternative, we can create transparency over decision making and investments. This is a plus for the business: it can track the investment path and check whether it is appropriate to the value being returned, or whether it should invest in another area that may give higher gains (this can be seen in a reprioritized list of stories in the backlog).

Mobius has seven steps that work together:
Problem or Opportunity - what problem are you trying to solve and what opportunity do you want to pursue?
Outcomes - what are the most important outcomes to improve next?
Deep Dive - discover more about the problem or opportunity.
Options - what are the current options we have for improving the most important outcomes?
Deliver - deliver the best option with an incremental step and learn from it.
Measure - how much did we move the needle?
Adapt - should we continue in the same direction or do we need to change the option?

Mobius is based on four main principles:
Focus on the outcomes that drive value over the outputs that drive effort.
Align teams around solving problems and pursuing common opportunities that benefit the organization and their customers.
Measure value continuously in order to make sure that you are heading in the right direction.
Empower teams to reach the outcomes within constraints.

It is not easy to motivate people to make the switch to outcome-based thinking. Many people are used to remaining in their comfort zone. Maybe it is more feasible to take this step in parallel, trying to put more emphasis on outcomes over time. With the contribution of Gabrielle and Ryan, below we can highlight some benefits that the outcome-driven approach brings.

Benefits
You can test multiple ideas quickly in order to make decisions based on results and not forecasts.
Improved time to market, as we advocate delivering only the minimal amount needed.
The investments are made in chunks and not all from the beginning, reducing the risk and potential unexpected costs.
Delivering the MVP means less code to create and maintain, reducing the overall costs for the entire SDLC.

In the end, I would advise everyone to continue to improve the agile tailoring process by reviewing the current version of the Agile Manifesto in order to highlight our weaknesses, and to systematically fix them. Don't forget: "If you don't listen to your customer you will fail, and if you only listen to your customer you will fail".

Sources of inspiration in writing this article:
• Mike Cohn - "Coaching Agile Teams"
• Mike Cohn - "User Stories Applied"
• Mitch Lacey - "The Scrum Field Guide"
• Steve Denning - "The Leader's Guide to Radical Management"
• Gabrielle Benefield & Ryan Shriver - "Mobius - Create, deliver and measure value"
• http://www.mobiusloop.com

Sebastian Botiș

Sebastian.Botis@endava.com Delivery Manager @ Endava






Sun Tzu’s Art of War and Information Era

This article is the second related to Sun Tzu's Art of War, and it adds more arguments to my personal belief that the 2500-year-old military treatise is a guidebook for the present-day role of the Product Owner.

Argument

As described in the previous article (Ancient advices for a Product Owner – TSM Issue 29), the PO and the General from Sun Tzu's Art of War are very similar in regard to their responsibilities and characteristics. In this article I will try to expand the list of similarities between these roles and present the way Sun Tzu treated the importance of information for the general and how this applies to the PO role nowadays.

In Sun Tzu's Art of War, the 13th chapter (Employment of Secret Agents) is dedicated to secret agents and to knowledge about the enemy army and its moves. The goal of the war is to obtain a quick victory, and in order to do that, the general should constantly be informed and adjust his strategy based on enemy strength, position, and maneuvers. A general should have a strong intelligence force in order to obtain information about these aspects.

Nowadays, a PO must be constantly informed about market trends, consumer behavior, and other similar products on the market. He should position the product he owns on the market and deliver value through its functionalities, value that can ultimately be converted to profit for the organization the product is created for.

In addition to the importance of information for obtaining victory, Sun Tzu describes its influence on the army's morale and the kingdom's treasury, because the costs of a long campaign affect the kingdom's economy. These factors are very important for the quality of a general. In this respect, a successful PO must care for his Scrum Team's morale and for the budget of the product.

Foreknowledge

Foreknowledge - knowledge about something before it happens. [Longman Dictionary of Contemporary English, 1987]

'3. Now the reason the enlightened prince and the wise general conquer the enemy whenever they move and their achievements surpass those of ordinary men is foreknowledge.
4. What is called "foreknowledge" cannot be elicited from spirits, nor from gods, nor by analogy with past events, nor from calculations. It must be obtained from men who know the enemy situation.'

The above quote is relevant for the fact that a general must have, at all times, valid information about the enemy's maneuvers and location, as well as about its plans. It also underlines that relevant information can only be obtained from trusted and up-to-date sources.

In the information world we are living in, it is obvious that a PO should be connected to the market the future product will be positioned on, and the decisions he makes about the product features must be based on valid information about the changes in the environment of the future product. The planning activities that a PO and the Scrum Team perform during the development of the product must be based on information, and if the information is outdated or inaccurate, then the project success is jeopardized or even compromised. Thus, having information from trusted and reliable sources is a crucial aspect of a project's success and, implicitly, of a successful PO.

Information sources

Sun Tzu describes five types of secret agents that can be used by a general:
• Native - agents recruited from the enemy's country;
• Inside - enemy officials;
• Doubled - enemy spies that can be recruited;
• Expendable - spies that 'are deliberately given fabricated information';
• Living - spies that return with information from the enemy's ground.

These types correspond to different types of information that can be obtained and used by the general, such as:
• About the enemy nation's morale and status (native spies);
• About the enemy's politics (inside spies);
• Counterintelligence (doubled and expendable);
• Strategic and tactical (living spies).

We observe that these types of information match the basis for PO decisions: market research describes the current status, the trends and the technologies. Nowadays all software companies have confidentiality agreements with their employees; that is the legal form of counterintelligence.




The PO is not responsible for setting up the tools and processes for gathering data; for that, there are the organization and its specialized departments, or even external sources of data (consulting companies, data reporting companies, and so on). Nevertheless, the PO must request the data and take it into account in his project-related decisions and planning activities. The importance of secret agents, and thus of information, is emphasized by Sun Tzu: '13. He who is not sage and wise, humane and just, cannot use secret agents.' and '... there is no place where espionage is not used'.

Development costs and Information costs

Sun Tzu describes the way that the costs of a campaign were calculated, both for maintaining the army in good shape for the battle and for keeping the home economy safe:

'1. Now when an army of one hundred thousand is raised on a distant campaign, the expenses borne by the people together with disbursements of the treasury will amount to a thousand pieces of gold daily. There will be continuous commotion both at home and abroad, people will be exhausted by the requirements of transport, and the affairs of seven hundred thousand households will be disrupted.'

It's important to mention that the social organization in ancient China had the family at its base, and eight families comprised a community. If one family sent a man to war, the rest of the community had to contribute to support that family. That is how the calculation was made in the above quote.

It is easy to see that a long campaign has a negative influence on the people and the home economy, and why the costs of intelligence are affordable if they are justified by reducing the period of war.

The PO should always take into account the costs for developing the product and the value of each Sprint's deliverable. The backlog items that bring the most value to the product should always be the ones that are ready to be developed by the team. The priority should be assigned by the PO based on informed decisions.

Conclusions

We all know that information, as well as delivering a software product, involves costs. The success of a Scrum project depends on the information that the PO has and on the way he or she uses it in the decisions made about the backlog and the stories that are worked on by the development team. The PO must also gather information from all the stakeholders and from the market in order to be able to make decisions and present their benefits to the project sponsors. Sun Tzu's Art of War proves to be a valuable source of information for a modern PO, as well as a reliable source of learning about how important it is to address all the aspects of a project, in this case information and costs, in order to be successful.

Bibliography
http://web.stanford.edu/class/polisci211z/1.1/Sun%20Tzu.pdf

Liviu Ştefăniţă Baiu
liviu.baiu@endava.com
Senior Business Consultant @ Endava

www.todaysoftmag.com | no. 30/december, 2014

45



programming

Atlassian JIRA REST API

I don't think it is a coincidence that Atlassian JIRA is the issue tracking tool used in every project I've been part of up until now. Although there are quite a few similar products on the market, JIRA is one of the names recognized by most of the actors involved in software projects. Together with other renowned titles such as Bugzilla or Redmine, JIRA stands out thanks to its complete set of functionalities, its quality, ease of use and also its extensibility. Atlassian JIRA was designed as a flexible product, easily extensible via plug-ins. In addition, the system communicates with the outside world through a REST interface. By this means, the clients of the platform are able to perform various operations. In the next paragraphs we will focus on the features of this API, talking about a few classic use cases, such as searching for an issue or adding a work log. Also, we will spend some time looking both at the Java client provided by Atlassian and at the implementation of a REST client using Jersey.

JIRA REST Java Client

Atlassian provides us with a class library destined to ease our work with the REST API, suggestively called JIRA REST Java Client (JRJC). This library publishes, in turn, an almost complete API for the operations we can perform in a JIRA project. For instance, using this client we are able to retrieve a project by its key, create issues, delete them, add attachments, add work logs and many more. The methods that we can use are grouped in a few classes based on the business object in question. For instance, we can call the methods pertaining to issue administration only after we get an object of the IssueRestClient type. Before that, we need an object of the JiraRestClient type, created by means of an asynchronous implementation of the JiraRestClientFactory interface. The details mentioned above can be noticed in the JiraClient class, which is part of the demo project that accompanies this article. The project can be downloaded from the address mentioned in the Resources section. We have to mention that this Java project has an educational purpose and the source code is minimal, not trying to follow best practices. Let's assume we want to retrieve, using the JRJC library, an Issue object that models the details of a JIRA ticket. Now that we have a JiraRestClient object, we can write the following:

    Promise<Issue> promise = jiraRestClient.getIssueClient().getIssue("JRA-9");
    Issue issue = promise.claim();

We notice that the getIssue(String issueKey) method returns a Promise<Issue> object; therefore, in order to retrieve the Issue instance we have to call the claim() method. We have to proceed the same way when we want to retrieve other objects such as projects, work logs etc. One of the advantages of using this library is the fact that we don't have to take care of the subtleties of working with REST. In fact, up until now, the fact that we were dealing with a REST API was transparent for us. In order to obtain a better image of what JIRA REST API is, we have to create our own implementation of a REST client. We will do that using Jersey.

REST Client Using Jersey

Jersey offers us a simple and quick way to create a REST client. We illustrate that in the following lines of code:

    String auth = new String(Base64.encode(USERNAME + ":" + PASSWORD));
    Client client = Client.create();
    WebResource webResource = client.resource(JIRA_SERVER + REST_PATH + query);
    Builder builder = webResource.header("Authorization", "Basic " + auth)
        .type(MediaType.APPLICATION_JSON)
        .accept(MediaType.APPLICATION_JSON);

Back to the scenario where we want to retrieve an issue based on its key, we assume that the query parameter contains a path such as issue/JRA-9, where JRA-9 is the key of the ticket. In order to get the Issue object we call the get() method, which matches the HTTP GET method:

    ClientResponse clientResponse = builder.get(ClientResponse.class);

Now we are able to retrieve the serialized JSON object, in the form of a String:

    String json = clientResponse.getEntity(String.class);

This String can then be deserialized into a bean designed by us, using the Jackson library, which offers us various tools for working with JSON. This scenario is one of the simplest, but things get more complicated when we have to create a work log, for instance. In this case, first we have to create a bean to model a work log and to populate an instance with the information we want to persist. Then, we have to serialize this object as a String and transmit it to the REST service using the pattern shown above, this time via the HTTP POST method. The answer of the service will contain the stored object in its serialized form, enriched with new information such as the id of the work log.

Conclusions

JIRA REST API is an interface to the outer world with real business value. This API offers organizations the possibility to easily integrate the tool in their ecosystem to define creative and
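For the work log scenario, the request body is a small JSON document and the Authorization header is just Base64 over username:password. Below is a minimal, self-contained sketch of what such a POST would carry, using only the JDK; the timeSpent and comment field names follow JIRA's worklog resource, the credentials and values are placeholders, and no request is actually sent:

```java
import java.util.Base64;

public class WorklogRequest {

    // Basic auth header value, as in the Jersey snippet: "Basic " + base64(user:pass).
    static String authHeader(String username, String password) {
        return "Basic " + Base64.getEncoder()
                .encodeToString((username + ":" + password).getBytes());
    }

    // Hand-built JSON body for POST /rest/api/2/issue/{key}/worklog;
    // in the article this serialization is done with a Jackson-mapped bean instead.
    static String worklogJson(String comment, String timeSpent) {
        return String.format(
                "{\"comment\":\"%s\",\"timeSpent\":\"%s\"}", comment, timeSpent);
    }

    public static void main(String[] args) {
        System.out.println(authHeader("user", "pass"));        // Basic dXNlcjpwYXNz
        System.out.println(worklogJson("Code review", "1h"));  // {"comment":"Code review","timeSpent":"1h"}
    }
}
```

As described above, the service's answer to such a POST is the stored work log in serialized form, enriched with its generated id.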




complex processes. However, one can notice that JIRA REST API doesn't cover 100% of JIRA's use cases. One of the aspects that users are waiting for a resolution to is the impossibility to work with saved filters. There is a ticket on this subject (JRA-36045) that is still open. On the other hand, we have to mention that the REST API comes with important functionalities such as the possibility to retrieve the custom fields and their values, something that was impossible using the SOAP API. JIRA REST API is at its second version and although there is room for improvement, we can say it's a well-established product which comes with important functionalities compared to its predecessor.

Resources

1. https://jira-rest-client-ro-leje.googlecode.com/svn/trunk
2. https://docs.atlassian.com/jira/REST/latest/
3. http://www.j-tricks.com/tutorials/java-rest-client-for-jira-using-jersey
4. http://www.baeldung.com/jackson-ignore-null-fields

Dănuț Chindriș

danut.chindris@elektrobit.com
Java Developer @ Elektrobit Automotive




management

The pursuit of Engagement in the Software Industry

Marketing has changed dramatically over the past few years and it has changed us too, as marketers or business owners. We've seen the fall of unidirectional communication, the appearance of big data and the constant technological development that has led to more and more innovations.

Even the brand perception has changed (obviously, from the consumer’s point of view). Now, the consumer needs to be engaged, needs to be part of our brand and he demands it, because his expectations have evolved. It’s just a matter of whether brands will respond properly or not to these changes.

First things first: Do we need engagement?

Or, differently said: do we need air to breathe? It's more of a rhetorical question. All businesses require a certain level of engagement, whether we're talking about engaging our colleagues or our customers. And you know what? It has been here forever. Only that now we have a fancy word for it. A few years ago, media coverage ruled the land of marketing. You had coverage, you had "results". However, with the increase of information that surrounds us every day, media coverage is just not enough. We need to make sure that our audience receives, understands and responds to our messages. The battle for a memorable brand experience has begun. Get to the core of what


they feel and engage with them. Thus you will establish the connection between your audience and your brand.

The software industry

Taking into consideration the rise of the software industry, a new challenge has risen too: the challenge of hiring and retaining the best employees. However, it seems that there are never enough candidates. Also, there is some kind of a struggle to retain the good employees, as they are constantly "hunted" by the competition. What to do then? Well, engagement marketing may be the only solution for getting your employees to commit to the brand's core values, thus making them happier and more productive at their jobs. Software companies should take into consideration that they run their business in an ecosystem and the engagement process can occur in multiple contexts and types:

1. Social Media. The whole concept of Social Media revolves around the need to interact with others. Brands are getting closer to their fans and Social Media provides the perfect tools for this. Remember one thing though: Social Media is not just Facebook!

2. Professional Networks. They represent the opportunity for a brand to position itself within a community with the same interests. Content generation plays a vital role in this process, as your know-how and shared experience will get people to consider that you can provide the perfect work environment for them.

3. Local Communities. A good and respected brand image is the key to recognition and differentiation. Also, it sends a clear message that you understand the importance of the support and involvement of these local communities. People feel closer to such brands and they are more likely to want to be part of them (whether as employees, business partners or just fans).

4. The company itself. Let me put it this way: if your own employees don't trust your brand and its values, how do you think your customers or partners will? Employees are the most valuable brand ambassadors you will ever get. More than that, engagement within your company will add a non-material motivation for your employees that should transcend the financial one. Just give them a chance to feel that they have an important role in building a successful brand.



How can we generate engagement?

We should first consider the very basis of the communication process, because engagement marketing (as any part of marketing) has a strong component of communicating and exchanging feelings and know-how. And we need to clearly understand that engagement is all about establishing a long-term relationship. It is like the promise between two life partners and it should eventually lead to a beautiful marriage. But let's take a look at a few things to consider in order to generate engagement:

1. Know your audience. If you don't get this step right... don't even consider going further. It will be a waste of time and resources (or pure luck if you get some results). Where do they eat? When do they wake up in the morning? What are their core values? How do they buy food? And so on.

2. Create (or find) the context for your audience: the appropriate communication channels at the right time. And here comes the issue of having the right coverage on the right channel. Or should I say channels? Nowadays, we're talking about cross-channel communication and the integration of digital and physical engagement.

3. Generate the relevant content for your audience, always based on their needs, which first should be carefully assessed. The quality of content is directly influenced by the degree of your knowledge about your audience's expectations, preferably at an individual level. In a disruptive context, the content is what can set you apart from the others, setting up a true competitive advantage.

4. Interact with your audience. Often we realize the importance of engagement only in this stage. The interaction itself allows us to get feedback and discover new opportunities. Ah... and one more thing: it's a 24-hour "job". Be prepared to interact anytime and don't be afraid. Your audience is like a predator that smells when something's fishy.

Online or offline engagement?

Simply put: both! Or you should be ready for both. We've all become context-, place- and time-aware and, due to the mobile era, our way of life demands a seamless and customized experience. The real challenge for a business is to be adaptable, in order to keep up the pace with a dynamic world. It isn't about choosing between online and offline engagement. It's about understanding your audience and communicating efficiently.

Other implications

More than that, engagement is currently re-shaping the core concepts of running a business: how we communicate with our colleagues and employees, distribute our products, reward our partners and increase our brand's value. Human relationships are re-defined and thus, so is our entire society. This whole new process is a bit scary... but somehow it feels natural, as we all evolve and try to find better solutions for being more efficient.

The challenge

Engagement marketing is more about creating the context of a customized experience for your customers, employees or partners, based on their needs, acknowledged or not. And maybe the word "engagement" is not enough, because we need people fully committed to our cause in order to be successful and get results. At Loopaa – your dedicated marketing agency – this is our main challenge and we love generating that idea that will secure your audience's commitment.

Victor Gavronski

victor.gavronschi@loopaa.ro
Managing Director @ Loopaa




programming

Network Services of Microsoft Azure

Radu Vunvulea
Radu.Vunvulea@iquestgroup.com
Senior Software Engineer @ iQuest

In this article we will talk about the most important services that help us create a better network over the Azure infrastructure. I assume that you already know the basic concepts of the cloud. The 3 services that fall into this category are:
• Virtual Networks
• Express Route
• Traffic Manager

We will take each service separately and talk more about each of them.

Traffic Manager

Traffic Manager allows us to distribute traffic to different endpoints, which can be in different datacenters. From some perspectives, we can look at Traffic Manager as a load balancer at a global level. In this way, we can deploy a solution to multiple datacenters and redirect traffic based on user location.

Main Features

Used only as DNS Resolver

Traffic Manager is used only to resolve the endpoint address. Once the endpoint is resolved, the request will go directly to the endpoint and will not pass through traffic manager.

Traffic Manager DNS Name

Each Traffic Manager instance will have its own DNS name. We will register this DNS behind our real DNS if we want to redirect traffic from our site (www.foo.com) through Traffic Manager.

How does it work

Traffic Manager checks the endpoint status at a specific time interval. If the endpoint is down 4 times in a row, it will be marked as down and traffic will not be redirected to it anymore. Users that already know about that endpoint will still try to connect to it until the DNS cache expires (DNS TTL).

Configuration Tools

We can configure it using PowerShell, the REST API or the Management Portal. All the features are available from each tool.

DNS TTL

We can specify how long the DNS name that is resolved by Traffic Manager will be cached by our clients.

Distribute traffic on multiple locations

Being able to register multiple endpoints from different datacenters, the traffic will be distributed to the closest endpoint (based on response time or other configuration, depending on what load balancing method we selected).

Load Balancing Method

There are 3 types of load balancing supported at this moment:
• Performance – based on the performance and load of each endpoint (latency)
• Round Robin – traffic is redirected to all endpoints in a balanced way (the first request goes to endpoint 1, the second to endpoint 2 and so on)
• Failover – when the first endpoint is down, traffic is redirected to the second endpoint (and so on); we can specify the failover order

Endpoints Weight

Each endpoint can have a specific weight that will be used by Traffic Manager for traffic distribution. Bigger values mean that the endpoint can receive more traffic. It is used by the Round Robin load balancing method (this can be configured only from PowerShell or the REST API).

Custom monitor endpoint

Traffic Manager allows us to specify what endpoint to monitor. We can use HTTP/HTTPS endpoints. We can also specify a custom port and path. For all endpoints, the same monitor endpoint will be used.

Profiles

A Traffic Manager profile represents all the configuration for a specific node.

Nested Profiles

Traffic Manager allows us to specify another Traffic Manager profile as an endpoint. In this way, we can define complex failover profiles and we can manage different load balancing rules.

Real DNS

Traffic Manager can be used by your own domain DNS. Your DNS needs to point to the Traffic Manager DNS.

Endpoints location

All endpoints need to be from the same subscription. You cannot use it with external endpoints.

Redirect user based on location

Users will be redirected to the closest endpoint. In this way clients can reach the endpoint that is in the best shape and the closest to them.

Automatic Failover

Traffic Manager is able to automatically switch to another endpoint when the original one is down.

Test new deployments

We can very easily add endpoints with a different configuration or a different version of our application and check their behavior (testing the surface with new features).

Reduce Application Downtime

Traffic Manager allows us to switch automatically to a healthy endpoint if the original one is down.

Limitations

Not all Azure Services Supported

You cannot include in Traffic Manager services like Service Bus, SQL Azure or Storage. In general you don't want to do something like this.

External Endpoints are not supported

You are not allowed to monitor external endpoints (from on-premises or from other Azure subscriptions).

Monitor Configuration

You don't have access to the monitor configuration. Clients cannot set up how often the health of the endpoints is checked.

Applicable Use Cases

Next, you can find 3 use cases where I would use Traffic Manager.

Web Sites deployed in multiple data centers

If you have a web site that is deployed in multiple data centers, the load balancing of traffic can be made easily by using Traffic Manager. If the load of a web site increases or the site is down, the traffic will be redirected automatically to another endpoint.

Uniform load distribution

When you have a system that is distributed in different data centers and you have a custom rule to specify if an endpoint can accept new clients or not, you can use Traffic Manager with success. You should implement the URL path that is used to check the endpoint health to return an HTTP error code different from 200 OK if the current endpoint cannot accept new clients anymore.

Web sites failover

You can use Traffic Manager only as a failover mechanism for your web sites and configure the last failover endpoint from Traffic Manager to return only a simple HTTP page. In this way, even if your system is down, clients will be able to see something 'nice'.

Pros and Cons

Pros
• Simple to use
• Different load balancing methods
• DNS TTL

Cons
• Cannot specify external endpoints
• Cannot specify endpoints from different subscriptions
• One point of failure if Traffic Manager is down

Pricing

When you calculate the cost of using Traffic Manager in your system you should take into account:
• The number of DNS queries to Traffic Manager
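The three load balancing methods can be sketched as plain selection logic. This is an illustrative model only, with invented endpoint names; Traffic Manager itself implements these methods as DNS resolution policies, as explained above:

```java
import java.util.List;

public class LoadBalancingSketch {
    // Round Robin: first request goes to endpoint 1, second to endpoint 2, and so on.
    static String roundRobin(List<String> endpoints, int requestNumber) {
        return endpoints.get(requestNumber % endpoints.size());
    }

    // Failover: traffic goes to the first endpoint that is up, in the configured order.
    static String failover(List<String> endpoints, List<Boolean> isUp) {
        for (int i = 0; i < endpoints.size(); i++) {
            if (isUp.get(i)) return endpoints.get(i);
        }
        throw new IllegalStateException("all endpoints are down");
    }

    // Performance: pick the endpoint with the lowest measured latency (in ms).
    static String performance(List<String> endpoints, List<Integer> latencyMs) {
        int best = 0;
        for (int i = 1; i < latencyMs.size(); i++) {
            if (latencyMs.get(i) < latencyMs.get(best)) best = i;
        }
        return endpoints.get(best);
    }

    public static void main(String[] args) {
        List<String> eps = List.of("eu.foo.com", "us.foo.com");
        System.out.println(roundRobin(eps, 0));                   // eu.foo.com
        System.out.println(failover(eps, List.of(false, true)));  // us.foo.com
        System.out.println(performance(eps, List.of(90, 40)));    // us.foo.com
    }
}
```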


Express Route

Short Description

Express Route is a private connection between your network (on-premises) and the Azure Data Centers. When using this feature, you have a direct connection to the Azure Data Centers, which is not shared with other users. Because of this, such a connection is not only fast, but also extremely secure. All traffic from clients that use this feature is split into two 'channels'. One channel is used for traffic that hits Azure public services and the second one for traffic that hits Azure Compute resources. For each of these channels (Direct Layer 3 and Layer 3), different speeds are guaranteed.

Main Features

Not over public Internet

Connections that are made over Express Route go over a private connection that is not connected to the 'known internet'.

More secure

Data that is sent over the wire is more secure because the connection is over a private wire that cannot be accessed by public users.

Faster speed

The speeds that are offered over this connection are higher and the bandwidth is dedicated to you.

Lower latency

The direct connection between you and the Azure data center reduces the latency that normally exists between two endpoints.

Bandwidth Available

There are different options of bandwidth available, starting from 10 Mbps and going up to 10 Gbps.

Connection Redundancy

Yes, we even have connection redundancy. Layer 3 Connectivity (over Network Service Providers) can have a redundant connection (active connection).

Easy migration from S2S and P2S

If you already use Site to Site or Point to Site and want to migrate to Express Route, you will discover that the migration can be made very easily.

Virtual networks

All virtual networks that are connected to the same Express Route can talk with each other. You will be able to connect virtual networks from different subscriptions as long as all of them are connected to the same Express Route. All Virtual Networks connected to the same Express Route are part of the same routing domain and are not isolated from each other. If you need isolation between them, then you will need to create different Express Routes for each of them.

Limitations

Number of routes

At this moment there is a limit of up to 4,000 routes for public peering and 3,000 routes for private peering.

S2S or P2S cannot be used in combination with Express Route

You cannot use both methods to connect to the Azure infrastructure. If you use Express Route, you will not be able to use S2S or P2S for the same connection.

Multiple Providers

Each Express Route can be associated with only one provider. Because of this, you cannot associate the same Express Route with multiple providers.

VLANs to Azure

Express Route Layer 2 connectivity extensions to Azure are not supported.

Applicable Use Cases

Below you can find 4 use cases where Express Route can be used with success:

Video Streaming

When you are using Azure Media Services for video streaming, you will want to be able to stream live content to Azure Media Services all the time. In this case you need a stable connection between your studios and the Azure Data Centers. Express Route can be a good option for you.

Monitor and Support

If your infrastructure and services are on Azure, then during the monitor and support phase you will need an express connection between you and Azure. The support team needs to be able to access your Azure services in a fast and reliable way.

Data Storage

When you are using Azure Storage or SQL Azure to store your data, you will also want a low-latency and fast connection between your data and your on-premises infrastructure. Express Route can be a solution for this problem.

Bank data privacy

If you are a bank and need a secure connection between on-premises sites and your Azure services, Express Route can be a very good solution. Using it, you will have a secure connection that cannot be accessed from the internet.

Pros and Cons

Pros
• Fast
• Secure
• Reliable
• Redundant
• Easy to connect


Cons
• Not available worldwide yet

Pricing

The pricing is based on outbound traffic. A part of the outbound traffic is free, included in the subscription. Exceeding it, you will be charged a small rate per GB. The included data transfer traffic may differ based on the Exchange provider port speed that you prefer to use. When you calculate the costs of using Express Route you should take into account:
• Exchange Provider Port Speed
• Outbound Data Transfer
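The outbound pricing model above reduces to simple arithmetic. Here is a sketch with placeholder numbers; the included allowance and the per-GB rate are invented for illustration, not real Azure prices:

```java
public class OutboundPricing {
    // Cost = (outbound GB beyond the included allowance) * rate per GB.
    static double monthlyCost(double outboundGb, double includedGb, double ratePerGb) {
        double billable = Math.max(0, outboundGb - includedGb);
        return billable * ratePerGb;
    }

    public static void main(String[] args) {
        // 500 GB out, 250 GB included, a hypothetical 0.25 per GB -> 62.5
        System.out.println(monthlyCost(500, 250, 0.25));
    }
}
```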

Virtual Networks

A Virtual Network is a ‘private’ network that you can define over Azure infrastructure. In each Virtual Network users have the ability to add Azure services and VM that they want and need. Only VMs and Azure services from the same Virtual Network can see each other. By default, external resources cannot access resources from Virtual Network. Of course, users have the ability to configure a Virtual Network to be accessed from the outside world (if needed). We can imagine a Virtual Network as a private network that we create at home or at work. We can add to it any resources, allocate specific IPs and subnet masks, open different ports for external access and so on. It is a network inside a network, if we could say this. Many times I refer to it as a Private Network, because you can add resources to it and limit access of external resources to it.

Main Features

It isolates resources from public access

Using Virtual Networks you can create your own private corner in Azure, where only you have access. All your resources are isolated from the rest of Azure.

Custom IP range and Subnet Mask

When creating a Virtual Network you have the ability to set a custom IP range and subnet mask. You can create your own configuration based on your own needs and network requirements.

IP Type and Ranges

Virtual Network allows us to use any kind of IP, from public IP addresses to any kind of IP ranges.

Subnet Masks

You can define any number of subnets, as long as they don't overlap. The same rules apply as for any network (on-premises or cloud).

Persistent Private IP Address

In the Virtual Network, resources will have a static IP that doesn't change in time. For example, a VM will have the same IP and it will not change. On top of this, you can assign a specific IP to your resources by hand from your Virtual Network IP range.

Azure Resources Supported

At the moment, only Virtual Machines and PaaS resources can be added to a Virtual Network. Resources like Service Bus cannot be added to a Virtual Network (yet).

Tools that can be used

For setup and configuration we have two tools that can be used:
• Netcfg – a file that is generated by the Azure platform and can be used to make different configurations
• Management Portal

3 types of models

In general, there are 3 types of configuration used with Virtual Networks:
• Cloud-Only – a Virtual Network with resources only from Azure, used to manage, secure and isolate cloud resources
• Cross-Premises Virtual Network – for hybrid solutions, where the Virtual Network is used to create a space in Azure that can be accessed and is integrated with an on-premises network; both networks form a single network that allows cross access
• No Virtual Network – no Virtual Network is used for the cloud resources

Cross-Premises Connection

In use cases where you need to connect your on-premises solution to Azure resources, a Virtual Network is a must and you need to think about it from the first moment.

Type of Network

Virtual Network is a Layer 3 network that is responsible for packet forwarding and routing.

Supported Protocols

There are 3 types of protocols that are supported at this moment:
• TCP
• UDP
• ICMP

VPNs

VPN connections are supported (RRAS – Remote Access servers and Windows Server 2012 Routing).

Linux Support

All Linux distributions that are supported on Azure can be used in a Virtual Network.

Resource Name Resolution

Once you integrate your network with a Virtual Network, you can access your resources directly by their DNS name (for example the Virtual Machine name).

Automatic integration scripts

When you create a Virtual Network, Azure can generate custom scripts that need to be run by IT on your on-premises network. Using these scripts, the two networks can become one network without additional configuration.

Limitations

You cannot move resources to a Virtual Network once they were created

Once a resource like a Virtual Machine was created and deployed, you cannot move it to a Virtual Network. This is happening because the network information is acquired during deployment. Of course, you can redeploy your machine, but a short downtime will appear.

VPN is limited only to Windows OS

You can use VPN only with Windows OS (Windows 7, Windows 8, Windows Server 2008 R2 64-bit, Windows Server 2012).

It cannot be used with all Azure Services

At the moment, only Virtual Machines and PaaS resources can be added to a Virtual Network. Other types of services cannot be added to a Virtual Network.

Cross Region

A Virtual Network can be defined only in one region. If you want to create a cross-region network you need to create multiple Virtual Networks and connect them.

Network Size

The smallest subnet that is supported is a /29 and the largest is a /8.

VLANs cannot be added

Because Virtual Network is a Layer 3 network, there is no support for VLANs (which operate at Layer 2).

Tracert

This network diagnostic tool cannot be used in a Virtual Network.

IPv6

At the moment there is no support for IPv6.

Broadcast

At the moment there is no support for packet broadcast.

Multicast

At the moment there is no support for multicast.

IP-in-IP encapsulated packets

At the moment this is not supported.

Generic Routing Encapsulation (GRE)

At the moment this is not supported.

SQL DB

At the moment SQL DBs cannot be used in combination with Virtual Networks.
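The rule that subnets must not overlap can be checked mechanically. Below is a minimal sketch for IPv4 CIDR blocks; the helper names are ours, for illustration, not an Azure API:

```java
public class SubnetOverlap {
    // Convert a dotted-quad IPv4 address to a 32-bit value.
    static long toBits(String ip) {
        String[] p = ip.split("\\.");
        return (Long.parseLong(p[0]) << 24) | (Long.parseLong(p[1]) << 16)
             | (Long.parseLong(p[2]) << 8) | Long.parseLong(p[3]);
    }

    // Two CIDR blocks overlap if their network addresses match under the
    // shorter (less specific) of the two prefixes.
    static boolean overlaps(String cidrA, String cidrB) {
        String[] a = cidrA.split("/");
        String[] b = cidrB.split("/");
        int prefix = Math.min(Integer.parseInt(a[1]), Integer.parseInt(b[1]));
        long mask = prefix == 0 ? 0 : (0xFFFFFFFFL << (32 - prefix)) & 0xFFFFFFFFL;
        return (toBits(a[0]) & mask) == (toBits(b[0]) & mask);
    }

    public static void main(String[] args) {
        System.out.println(overlaps("10.0.0.0/16", "10.0.1.0/24")); // true
        System.out.println(overlaps("10.0.0.0/24", "10.0.1.0/24")); // false
    }
}
```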

Applicable Use Cases

Below you can find 4 use cases where I would use a Virtual Network:

To isolate an application that contains multiple resources (like VMs)

In this case you would like all your resources to be part of the same network and to be isolated from the external world.

To scale your on-premises resources

When you need more computing power (VMs) and you want to scale your on-premises resources in a secure and simple way, a Virtual Network can create the secure environment to do something like this.

Hybrid Solutions

When your solution is hosted on-premises and also in the cloud, a Virtual Network can be used with success to unify the system and resources.



To connect to Azure VMs in a secure way

If you want to access VMs in a secure and reliable way from your own networks, then a Virtual Network is a must.

Pros and Cons

Pros
• Easy to configure
• Resources isolated from the internet
• Secure

Cons
• Only VMs and PaaS resources can be used
• IPv6 is not yet supported

Pricing

When you calculate how much Virtual Networks would cost, you should take into consideration the following components:
• Outbound data transfer (inter-VNET)
• VPN Gateway time duration

Conclusion

In this article, we have discovered different ways of managing the network layer for our applications that are hosted on Microsoft Azure. Based on your needs, you should take into account these services.


