Issue 20 - Today Software Magazine (English)


Nr. 20 • February 2014



Craftsmanship and Lean Startup
How (NOT TO) measure latency
Interview with Radu Georgescu
Interview with Philipp Kandal
Interview with Alexandru Tulai
Metrics in Visual Studio 2013
Thinking in Kanban
Business pitching or how to sell in 4 simple steps
An overview of Performance Testing on Desktop solutions
Cluj IT Cluster on Entrepreneurship
New business development analysis
Multithreading in C++11 standard (II)
Startups: Evolso

6 Cluj Innovation Days 2014 - Ovidiu Mățan
7 Interview with Philipp Kandal - Ovidiu Mățan
9 Evolso - Alin Stănescu
11 An overview of Performance Testing on Desktop solutions - Sorin Lungoci
15 Software Craftsmanship and Lean Startup - Alexandru Bolboaca and Adrian Bolboacă
17 Getting started with Vagrant - Carmen Frățilă
22 Startup marketing: challenges and practical ideas - Sorina Mone
24 Cluj IT Cluster on Entrepreneurship - Daniel Homorodean
26 Interview with Radu Georgescu - Ovidiu Mățan
28 Restricted Boltzmann Machines - Roland Szabo
31 Multithreading in C++11 standard (II) - Dumitrița Munteanu
34 Metrics in Visual Studio 2013 - Radu Vunvulea
36 How (NOT TO) measure latency - Attila-Mihaly Balazs
39 Thinking in Kanban - Püsök Orsolya
40 New business development analysis - Ioana Matei
42 Business pitching or how to sell in 4 simple steps - Ana-Loredana Pascaru
44 Gogu and the Ship - Simona Bonghez, Ph.D.



Ovidiu Măţan, PMP Editor-in-chief Today Software Magazine

The word innovation on the cover of the magazine announces the theme of this issue: startups and innovation. But the word was also chosen to mark symbolically the arrival of a new stage of the local IT industry, that of innovation. If, a few years ago, execution and performance were among the most widely used words, we are now witnessing an unprecedented rise of the concept of innovation on the Romanian market. This new trend points to an evolution in the Romanian entrepreneur's mentality: he becomes more and more aware of his capacity to metamorphose from a mere executant into a product creator. Assuming this new status must take into account a reality synthesized suggestively by Radu Georgescu in an interview during How to Web 2013: outsourcing means selling working hours, whereas producing means selling the same product a thousand times. The advice he gives to outsourcing companies is to create small teams to develop products of their own.

The innovation concept may take several forms. A simple search for this word on the site of the magazine gives us a few of its possible facets: innovation in big projects, innovation in IT, public-private projects, connecting innovating technologies to the global market, Lean Six Sigma and the management of innovation. These are but a few of the approaches that reveal the endless possibilities of this concept. As an addition meant to prove the above-mentioned materialization of the innovative spirit, we pass in review the events that have innovation as a main theme: Startup Weekend Cluj - the most important local event dedicated to the creation of new startups; the team designated as winner in 2013 was Omnipaste aka Cloud Copy, which is currently in the Deutsche Telekom hub:raum accelerator. Startup Pirates - a new event which offers mentorship to those who wish to create a startup. Cluj Innovation Days - organized by Cluj IT Cluster, which aims to offer its participants, over two days, three parallel sessions of presentations on this theme. Innovation Labs 2.0 - a hackathon organized in Bucharest and Cluj. We invite you to take part in these events, where we hope to see revolutionary ideas which can bring benefits to the Romanian market.

The present edition of the magazine puts forth a series of interviews with Radu Georgescu, Philipp Kandal and Alexandru Tulai, as well as details on the above-mentioned events. The first technical article of the magazine is on the subject of testing, namely An Overview of Performance Testing on Desktop Solutions, followed by Craftsmanship and Lean Startup, which proposes two possible development stages of an application from the perspective of an entrepreneur: discovery and implementation. Vagrant is the title of another article on a technical theme, as well as the name of a powerful tool for working with virtual machines; this issue's article is an introduction and an overview of its possibilities. Next, there is a new series of technical articles: Restricted Boltzmann Machines, the review of the book Maven: The Definitive Guide and How (not to) measure latency. Startup marketing: challenges and guiding marks offers advice to young entrepreneurs on how marketing should be done with limited resources. Another article dedicated to startups is Cluj IT Cluster and Entrepreneurship, which presents the involvement of the Cluster in supporting entrepreneurship. The articles dedicated to management continue with Thinking in Kanban, New business development analysis and Business pitching - the advertising of today's entrepreneurs. Finally, I would like to mention that the 20th issue of Today Software Magazine marks 2 years of existence of the magazine. We thank you for being with us and we promise to carry on with many other editions at least as interesting as the ones so far. Enjoy your reading!

Ovidiu Măţan

Founder of Today Software Magazine


no. 20/February |

Editorial Staff
Editor-in-chief: Ovidiu Mățan
Editor (startups & interviews): Marius Mornea
Graphic designer: Dan Hădărău
Copyright/Proofreader: Emilia Toma
Translator: Roxana Elena
Reviewer: Tavi Bolog
Reviewer: Adrian Lupei

Made by Today Software Solutions SRL
str. Plopilor, nr. 75/77, Cluj-Napoca, Cluj, Romania
ISSN 2285 – 3502
ISSN-L 2284 – 8207
Copyright Today Software Magazine
Any total or partial reproduction of these trademarks or logos, alone or integrated with other elements, without the express permission of the publisher, is prohibited and engages the responsibility of the user as defined by the Intellectual Property Code.

Authors list
Alexandru Bolboaca - Agile Coach and Trainer, with a focus on technical practices @Mozaic Works
Daniel Homorodean - Member in Board of Directors @ Cluj IT Cluster
Radu Vunvulea - Senior Software Engineer @iQuest
Roland Szabo - Junior Python Developer @ 3 Pillar Global
Attila-Mihaly Balazs - Code Wrangler @ Udacity, Trainer @ Tora Trading
Sorina Mone - Marketing manager @ Fortech
Dumitrița Munteanu - Software engineer @ Arobs
Adrian Bolboaca - Programmer. Organizational and Technical Trainer and Coach @Mozaic Works
Sorin Lungoci - Tester @ ISDC
Silviu Dumitrescu - Consultant Java @ msg systems Romania
Carmen Frățilă - Software engineer @ 3Pillar Global
Ana-Loredana Pascaru - Training Manager @ Genpact
Püsök Orsolya - Functional Architect @ Evoline
Alin Stănescu - Project manager @ Evolso
Ioana Armean - Project Manager @ Ogradamea
Simona Bonghez, Ph.D. - Speaker, trainer and consultant in project management, Owner of Colors in Projects




Cluj Innovation Days 2014


The second edition of Cluj Innovation Days is scheduled to take place on March 20th and 21st and is the main yearly event organized by Cluj IT Cluster. The President of the Cluster, Mr. Alexandru Tulai, answered a few questions about the event, exclusively for us.

Ovidiu Mățan: Cluj IT Cluster organizes, on the 20th-21st of March, the second edition of Cluj Innovation Days, an event hosted this time by the University of Agricultural Sciences and Veterinary Medicine in Cluj-Napoca. Why Cluj Innovation Days and not Cluj IT Innovation Days, as last year's conference was called?
Alexandru Tulai: This year's edition of Cluj Innovation Days, through the topic, guests and structure of the event, aims at bringing together the main national and international stakeholders in the innovation process, decision-makers, and individuals and organizations interested in changing the paradigm of how we do business and how we educate ourselves, so that we become more oriented towards generating innovative ideas and products with high added value. This is also the reason why our event, which we would like in time to become one of the major events of its kind, is no longer entitled Cluj IT Innovation Days. The location where it will take place is not meaningless either: we wish to emphasize the importance of the collaboration among researchers, businesses and public authorities. Our vision of the IT industry is one in which IT&C becomes an indispensable infrastructure for development, present in all the verticals of the economy and of society. Cluj Innovation Days is, in my opinion, the type of event through which we can contribute towards a long-term consolidation of the IT community and facilitate the development of connections


with national and international business partners.

What do you intend to achieve with this year's edition of Cluj Innovation Days?
Cluj Innovation Days is structured in three thematic sessions, which are Mastering Innovation, Fostering Entrepreneurship and Showcasing Innovation, organized as such in order to cover the three main constitutive aspects of innovation. The first thematic session regards innovation in a more general context and is concerned with issues related to handling innovation through the management of the innovating product, intellectual property, support obtained through European policies and strategies, as well as other aspects concerning the capacity to implement and sustain innovation processes. The second session is oriented towards the entrepreneurship area and aims at presenting to the public the main mechanisms and steps in the consolidation and capitalization of an innovating idea, by direct reference to the ingredients of startups and spinoffs, as well as ways of attracting investments and other available sources of financing. The last thematic session, Showcasing Innovation, aims at illustrating all the means and mechanisms that sustain innovation through successful stories from the business area, meant to motivate and inspire young people, as well as small companies that are in an incipient stage of development, to develop entrepreneurial initiatives.


Who are the participants you are expecting at this event?
During the two days, the event will reunite over 400 people from our country and from abroad, among which I would like to mention representatives of the European Commission, the Government of Romania and local public authorities, business people, representatives of the academic environment, ambassadors, members of other national and international clusters, but also representatives of business associations, of the financial-banking sector and investors. As guests, we also expect representatives of academia, such as universities, the Romanian Academy and research institutes. Due to the great number of participants, and especially to the nature of their pursuits, we can state that, for two days, Cluj will become the regional capital of innovation. More details on the Cluj Innovation Days 2014 event are available on the event's web site.




Interview with Philipp Kandal


Lately, we have witnessed a series of acquisitions of local companies: EBS, purchased by NTT Data, Kno by Intel, and Evoline by Accenture. The latest transaction of this kind was the purchase of Skobbler by Telenav. Philipp Kandal, co-founder of Skobbler, had the kindness to answer a few questions for us.

Ovidiu Mățan: Philipp, congratulations on the sale of Skobbler to Telenav. Please tell us three product strengths that made this deal possible.
Philipp Kandal: We have focused consistently on OpenStreetMap, which is comparable to a Wikipedia of maps. It is growing at a very fast pace and is on the way to becoming the most important map in the world. As we have been the most successful company in that area, that was the key reason why Telenav wanted to acquire Skobbler. Apart from our strong OpenStreetMap technology, the main assets we built were a strong installed user base (over 4 million users) and our unique offline capabilities that allow people to use our products fully offline, without a connection to the internet. So in the end it was a mix of a great product and an outstanding technology built by a world-class team that made this deal possible.

Probably the most important aspect for the local IT community: how will the Skobbler team be affected by this transaction, in the short and longer term?
This is a deal about doubling down on our efforts and growing the teams and products we've built here. So we clearly expect to grow our team in Cluj and, with more capital, build even more awesome products. We are going to be very aggressive moving forward with the products we are creating and pushing them into the market, i.e. we'll be expanding into more new regions. Short term, you can expect that we're looking to unify the brands between Skobbler and

Telenav, but we'll definitely continue to focus on building outstanding consumer products for the future.

We know that you are also leading the company from a technical perspective. How will this change? Will you continue to collaborate with Telenav for a while?
I am very committed to what we've built, so I'll definitely stay on as General Manager of our Cluj offices. Long term I'll have to see, but everybody who knows me is aware that I am a great supporter of entrepreneurship, so that's definitely a path I would explore again, and in the meantime I now definitely have the ability to make some more angel investments in Cluj ;).

Now, the question that everyone might be waiting for: what will you do next? Will you stay in the maps/navigation area or are you planning to do something totally different?
As said, I definitely want to see our products grow to tens of millions of users, so for the foreseeable future I'll stay at Telenav. If I were to start something new at some point, it would most likely be in a new area outside of maps/navigation, as I am keen to explore a few areas that need to be rethought from the ground up; I definitely wouldn't do a me-too but something that could be revolutionary.

You have been involved in local events like Startup Weekend and have supported many others locally, including our own IT Days. You have also supported and sponsored local startups like Squirrly. What are you planning next? Are you planning to remain in Cluj and build another business?
I truly believe that Cluj is a fantastic place for entrepreneurs, so I'll definitely stay in Cluj for a long time. I am actually considering buying an apartment here, so you can expect me here for the long run. I hope we can all make this place a truly competitive place for entrepreneurs and create some internationally respected companies out of Cluj, and I am very keen to help in this process in any way that I can.



[Infographic: Startup Weekend global - 1200+ events in 2012+2013, with an average of 10 new ventures created per event. Startup Weekend Cluj - an average of 93 attendees per event, with a maximum of 150; further figures for ideas pitched, teams formed, articles, partners & media, prizes value and more. "How about you write some history right now? Form a team & win!"]

Evolso is built from three main technical parts: the mobile app, the website and the REST services. On the website you can find out more information about what Evolso stands for, how it can help you in your daily life and how to use it to the maximum, as well as info about team members and events. At the same time, there is a part visible only to our partners, where they can log in with an account, create and design events and see statistics of what is happening inside their locations (e.g. how many users checked in, how many will attend the events, etc.). The mobile app is created for our users and is the main part of the system. We started on Android, given that the market share of Android smartphones in Romania is bigger than that of other operating systems, but we have the iOS version too, which will be launched at the end of this month. The mobile app connects to the server using REST services; the responses are in JSON and some of them are sent as push notifications to the devices. Behind the "magic" there is a MongoDB database. We picked MongoDB because it is a NoSQL database, easy to scale, uses JSON-style documents, is open-source and has out-of-the-box geolocation support. MongoDB helps us run different location-related queries in the database, an important aspect of the Evolso project.
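To make the geolocation idea concrete, here is a minimal sketch of the kind of query document MongoDB accepts for "venues near a point". The collection and field names are invented for illustration; the article does not show Evolso's actual schema. It assumes a 2dsphere index on a GeoJSON point field.

```python
# Sketch (assumptions): a "locations" collection with a 2dsphere index on a
# GeoJSON "position" field. Names are hypothetical, not Evolso's real schema.

def nearby_query(lon, lat, max_meters):
    """Build a MongoDB $near query document for venues around a point."""
    return {
        "position": {
            "$near": {
                # GeoJSON order is [longitude, latitude]
                "$geometry": {"type": "Point", "coordinates": [lon, lat]},
                "$maxDistance": max_meters,  # meters, with a 2dsphere index
            }
        }
    }

# With pymongo this would be run as: db.locations.find(nearby_query(...))
query = nearby_query(23.5899, 46.7712, 500)  # 500 m around central Cluj-Napoca
```

The query is just a plain document, which is what makes this style of geosearch cheap to build server-side.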

I would like to add that all the percentages between the users are computed in real time on the server, which means that MongoDB offers an incredible response speed. We encountered technical difficulties when trying to connect two different individuals. The connectivity could have been done using messages via different frameworks, using our own methods (which we already did), or using voice. We wanted to implement VoIP from the first version of the mobile app, to offer our users a bonus. For this service we used a free framework that can be used on Android and iOS. Other problems occurred due to the fragmentation of the OS: different errors appeared in the mobile app on different OS versions; the majority were fixed and we will continue to fix them.

Alin Stănescu
Project manager @ Evolso



IT Communities


February and March are generally marked by events dedicated to start-ups and to innovation. Now is the time to plan what we are going to do for the rest of the year and to get our concepts validated by the public and the mentors attending the events.

Transylvania Java User Group - Community dedicated to Java technology. Since: 15.05.2008 / Members: 563 / Events: 43
TSM community - Community built around Today Software Magazine. Since: 06.02.2012 / Members: 1171 / Events: 16
Romanian Testing Community - Community dedicated to testers. Since: 10.05.2011 / Members: 702 / Events: 2
GeekMeet România - Community dedicated to web technology. Since: 10.06.2006 / Members: 572 / Events: 17
Cluj.rb - Community dedicated to Ruby technology. Since: 25.08.2010 / Members: 170 / Events: 40
The Cluj Napoca Agile Software Meetup Group - Community dedicated to Agile methodology. Since: 04.10.2010 / Members: 396 / Events: 55
Cluj Semantic WEB Meetup - Community dedicated to semantic technology. Since: 08.05.2010 / Members: 152 / Events: 23
Romanian Association for Better Software - Community dedicated to senior IT people. Since: 10.02.2011 / Members: 235 / Events: 14
Testing camp - Project which wants to bring together as many testers and QA people as possible. Since: 15.01.2012 / Members: 1025 / Events: 27

Calendar
February 13 (Cluj) - Launch of issue 20 of Today Software Magazine
February 15 (Cluj) - Hackathon
February 17 (Cluj) - Question-Answering Systems
February 20 (Cluj) - Machine Learning in Python
February 22 (Timișoara) - Meet the Vloggers
February 22 (București) - Electronic Arts CodeWars
February 22 (Iași) - ISTC February 2014 Edition
February 24 (Cluj) - Mobile Monday Cluj #5
February 27 (București) - Gemini Foundry Conf - TSM recommendation
February 28 (Cluj) - Startup Weekend Cluj - TSM recommendation
March 2 (Cluj) - UBBots 2014
March 20-21 (Cluj) - Cluj Innovation Days - TSM recommendation




An overview of Performance Testing on Desktop solutions


Application performance can make or break a business, given its direct impact on revenue, customer satisfaction and brand reputation. Praised or criticized, the performance of business-critical applications falls into the top three issues that impact a business. Currently, the pressure on performance is skyrocketing in a marketplace where the demands of application users are getting more varied, more complex and, definitely, real-time.

Sorin Lungoci Tester @ ISDC

I have chosen to talk about performance testing on desktop solutions because the information available on this kind of performance is pretty limited but nevertheless critical to success. I am considering for my story not only my experience acquired in different applications, coming from industries like finance or e-learning, but also the wisdom of others, expressed as ideas, guidelines, concerns or risk warnings. I hope that my story can make your life somewhat easier and, definitely, more enjoyable when you're asked to perform such a task. If we speak about the performance of a system, it is important to start from a shared definition of what "performance" is. Could it be responsiveness? Resource consumption? Something else? In the context of desktop applications, performance can have different meanings. I will explain them below.


From an architecture point of view, there are several types of desktop applications. The layers used are roughly the same, but their placement and the interaction between them lead to different architecture types. Among the most used layers we can recall: UI (User Interface), BL (Business Layer), TL (Transfer Layer) and DB (Database Layer). Please remember that these don't cover all combinations of architecture styles and types:

1. 100% pure desktop - the installed application, having the user interface, business layer and database on the same machine, without the need of a network connection. As an example, think of Microsoft Money (personal finance management software). This is a single-tier application that runs on a single system, driven by a single user.

2. Another client/server solution is to have a thin client that does little more than take input from the user and display information, the application being downloaded, installed and run on the server side. It is widely used in universities and factories, or by staff within an intranet infrastructure. An example is a student having a Citrix client installed on a local PC and running an internal application on one of the university's servers.

3. The client/server application style using a rich client and a server is the most used solution on the desktop platform. This type of architecture is found mostly on intranet computer networks. It is a 2-tier application that runs on two or more systems and has a limited number of users. The connection exists until logout and the application is menu-driven. Think of Microsoft Outlook or any other desktop e-mail program: the program resides on a local PC and connects momentarily to the mail server to send and receive mail. Of course, Outlook also works offline, but you cannot complete the job without connecting to the server.

Testing client/server applications requires some additional techniques to handle the effects introduced by the client/server architecture. For example, the separation of computations might improve reliability, but it can increase network traffic and also the vulnerability to certain types of security attacks. Testing client/server systems is definitely different, but it is not of the "from another planet" type. The key to understanding how to test these systems is to understand exactly how and why each type of potential performance problem might arise. With this insight, the testing solution will usually be obvious. We should take a look at how things work and, from that, develop the testing techniques we need.

Testing on the client side is often perceived more like functional testing. This happens because the application is designed to handle the requests coming from a single user. It is not appropriate to load a desktop application with, say, 100 users in order to test the server response. Even if you could do that, you would be testing the local machine's hardware and software (which would doubtless be the bottleneck), not the overall speed of the server or application. Client performance testing should consider the following risks:
• Impact on user actions - how fast the application handles requests from that user;
• Impact on the user's system - how light the application is on the user's system (from how fast the application opens to the memory and other resources consumed while running).

The server part of a client-server system is usually performance-tested using much the same approach as Web testing: record the transactions/requests sent to the server, then create multiple virtual users to repeat those flows/requests. The server must be able to process concurrent requests from different clients. In various examples of setting up a test environment for client/server applications, many teams decided to set up multiple workstations (from 2 to 5) as clients, for both functional and performance testing. Each workstation was set to simulate a specific load profile.
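The record-and-replay idea for the server side - record the requests a client sends, then replay them from several virtual users while timing each one - can be sketched in a few lines. This is a minimal illustration, not a real load tool: `recorded_transaction` is a hypothetical stand-in for replaying one captured request over the network.

```python
import threading
import time
from statistics import mean

def recorded_transaction():
    """Stand-in for replaying one recorded client request against the server."""
    time.sleep(0.01)  # simulates server processing plus the network round trip
    return "OK"

def virtual_user(n_requests, timings):
    """One virtual user repeating the recorded flow, timing each request."""
    for _ in range(n_requests):
        start = time.perf_counter()
        recorded_transaction()
        timings.append(time.perf_counter() - start)

def run_load(n_users=5, n_requests=10):
    """Run n_users concurrent virtual users and collect per-request timings."""
    timings = []  # list.append is atomic enough for this sketch
    users = [threading.Thread(target=virtual_user, args=(n_requests, timings))
             for _ in range(n_users)]
    for u in users:
        u.start()
    for u in users:
        u.join()
    return timings

timings = run_load()
print(f"{len(timings)} requests, avg {mean(timings) * 1000:.1f} ms, "
      f"max {max(timings) * 1000:.1f} ms")
```

Real tools (JMeter, LoadRunner and the like, discussed below) do essentially this, plus ramp-up profiles, think times and much richer reporting.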


Tools Approach

If we compare the tools available for testing the performance of a desktop application to those for the Web, the truth is that there is an imbalance: fewer tools in the first category, and even fewer tools able to cover multiple platforms like desktop, Web or mobile. During my investigation on this topic, I have come across some interesting tools described in different articles, forums and presentations:
• Apache JMeter is most commonly used to test backend applications (e.g. servers, databases and services). JMeter does not control the GUI elements of a desktop application (e.g. simulating pressing a button or scrolling a page), therefore it is not a good option for testing desktop applications at the UI layer (e.g. MS Word). JMeter is meant to test the load on systems using multiple threads, or users. Since a client application will likely have only one user at a time, it makes more sense to test the database response independently of the Windows application.
• Telerik Test Studio runs functional tests as performance tests, and offers in-depth result analysis, a historical view and test comparison. It is designed for Web and Windows WPF only; Windows Forms applications are not supported.
• Infragistics TestAdvantage for Windows Forms (http://www.infragistics.com/products/testautomation) supports testing of Windows Forms or WPF-powered application user interface controls.
• WCFStorm is a simple, easy-to-use test workbench for WCF services. It supports all bindings (except webHttp), including netTcpBinding, wsHttpBinding and namedPipesBinding, to name a few. It also lets you create functional and performance test cases.
Due to time constraints, the following tools were not investigated but might help you test desktop applications: Microsoft Visual Studio, Borland Silk Performer, Seapine Resource Thief, Quotium Qtest Windows Robot (WR), LoginVSI, TestComplete with AQtime, WCF Load Test, Grinder or LoadRunner.
In classic client-server systems, the client part is an application that must be installed and used by a human user. That means that, in most cases, the client application is not expected to execute a lot of concurrent jobs, but it must respond promptly to the user's actions for the current task and provide the user with visual information without big delays. Client application performance is usually measured using a profiling tool. Profilers combined with performance counters, even at the SQL level, can be a powerful way to find out what happens on the local or server side. You might consider using the built-in profiler of Visual Studio, which allows you to measure how long a method takes and how many times it is called. For memory profiling, CLR Profiler shows how much memory the application takes and which objects are being created by which methods. Most UI test tools can be used to record a script that you play back on a few machines.
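The core profiler output mentioned here - how long each method takes and how many times it is called - can be reproduced with Python's standard cProfile, used purely as a stand-in for the Visual Studio profiler; the function names below are invented for the example.

```python
import cProfile
import io
import pstats

def parse_row(row):
    """Hypothetical hot path: parse one CSV row into integers."""
    return [int(x) for x in row.split(",")]

def load_report(n_rows):
    # Calls parse_row once per row, so ncalls in the profile equals n_rows.
    return [parse_row("1,2,3,4") for _ in range(n_rows)]

profiler = cProfile.Profile()
profiler.enable()
load_report(1000)
profiler.disable()

out = io.StringIO()
stats = pstats.Stats(profiler, stream=out)
stats.sort_stats("cumulative").print_stats("parse_row")
print(out.getvalue())  # shows call count and cumulative time for parse_row
```

The same two numbers (call count, time per call) are what you read off any method-level profiler, whichever platform it targets.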




Collective findings around performance

Below is an overview of useful findings on desktop application performance, as experienced by myself or other testers:
• Many instances of badly designed SQL were subsequently optimized
• Several statements taking minutes were improved to sub-second
• Several incorrect views were identified
• Some table indexes that were not set up were identified and corrected
• Too much system memory was consumed by the desktop application
• Program crashes often occur when repeated use of specific features within the application causes counters or internal array bounds to be exceeded
• Reduced performance due to excessive late binding and inefficient object creation and destruction
• A memory leak was identified when the application was opened and left running for a longer period of time (a few hours).
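A finding like "table indexes that were not set up were identified and corrected" can be reproduced in miniature. Here is a hedged sketch using SQLite as a stand-in database (table and column names invented), comparing the query plan before and after adding an index:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT)")
conn.executemany("INSERT INTO orders (customer_id) VALUES (?)",
                 [(i % 100,) for i in range(10_000)])

def plan(sql):
    # EXPLAIN QUERY PLAN reports whether SQLite scans the table or uses an index
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(str(r) for r in rows)

query = "SELECT COUNT(*) FROM orders WHERE customer_id = 42"
before = plan(query)  # full table scan over all 10,000 rows
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan(query)   # now answered from the index
print(before)
print(after)
```

On a table with millions of rows, the same missing index is exactly what turns a minutes-long statement into a sub-second one.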


The most encountered problems relate to software and the environment. The predominant issue that concerns the performance tester is stability, because there are many situations when the tester has to work with software that is imperfect or unfinished. I will expose here some of the risks, directly related to Desktop applications performance test: • A quite frequent problem during scripting and running the tests for the first time is related to resource usage on the client side leading to a failure (usually

because of memory running out). Some the 80/20 rule applies: 80% of the dataof the applications often crash when base volume will be taken up by 20% of repeated use of specific features within the system tables. 80% of the system load the application causes counters or interwill be generated by 20% of the system nal array bound to be exceeded. Of transactions. Only 20% of system transacourse, those problems will be fixed, but ctions need to be measured. Experienced it is an impact on time spent because testers would probably assume a 90/10 these scripts have to be postponed until rule. Inexperienced managers seem to the fix is done. mix up the 90 and the 10. • Building a performance test data• Tools to execute automated tests do base involves generating a lot of rows in not require highly specialized skills but, selected tables. There are two risks involas with most software development and ved in this activity: testing activities, there are principles • The first one is that, in creating which, if complied with, should allow the invented data in the database reasonably competent testers to build tables, the referential integrity of the a performance test. It is common for database is not maintained. managers or testers with no test automa• The second risk is that the busition experience to assume that the test ness rules, for example, reconciliation process consists of two stages: test scripof financial fields in different tables are ting and test running. On top of this, the not adhered to. In both cases, the load testers may have to build or customize simulation may not be compromised the tools they use. but the application may not be able to • When software developers who have handle such inconsistencies and thedesigned, coded and functionally tested refore fails. 
It is helpful for the person an application are asked to build an autopreparing the test database to undermated test suite for a performance test, stand the database design, the business their main difficulty is their lack of tesrules and the application. ting experience. Experienced testers who • Underestimation of the effort have no experience of the SUT however, required to prepare and conduct a usually need a period to familiarize with performance can lead to problems. the system to be tested. Performance testing a Client/Server system is a complex activity, mostly Conclusions because of the environment and the In summary, some practical conclusiinfrastructure simulation. ons can be drawn and applied in your own • Over ambition, at least early in the work: project, is common. People involved often • Tools are great and essential, but the assume that databases have to be popuproblem isn’t only about tools. The real lated with valid data, every transaction challenge is to identify the scope of permust be incorporated into the load and formance test. What are the business, every response time measured. As usual, infrastructure or end user concerns? | no. 20/February, 2014


Among the contractually-bound usage scenarios, also identify the most common, business-critical and performance-intensive usage scenarios from a technical, stakeholder or application point of view.
• Usually the risks are traced to the infrastructure and architecture, not the user interface. For this reason, during the planning and design phase you have to have a clear overview of the concern–test relationship. Don't waste time on designing ineffective tests; each test should solve a specific problem or concern.
• Desktop application performance testing is very close to test automation, as well as to writing code. There is a slight trend on the internet: more and more people develop their own automation/performance tool using .NET, Java, Python, Perl or other languages.
• It's difficult to find a tool that can record the majority of the application at UI level and then play it back with multiple users or threads. It seems that the performance focus for desktop solutions has moved more towards the API / service layer.
• For some performance testing (like testing the client side) you don't need a specific tool, only a well-designed test case set and a group of test suites with some variables, and that's it!
• Factors such as firewalls, anti-virus software, networks and other running programs all affect the client performance, as do operating systems, service packs etc. It's a complicated set of variables and it must be taken into consideration.
• Database, system and network administrators cannot create their own tests, so they should be intimately involved in the staging of all tests to maximize the value of the testing.
• There are logistical, organizational and technical problems with performance testing; many issues can be avoided if principles like the ones shared here are recognized and followed.
• The approach to testing 2-tier and 3-tier systems is similar, although the architectures differ in their complexity.
• Proprietary test tools help, but improvisation and innovation are often required to make a test ‘happen'.
• Please consider the technology when choosing the tool, because some tools can record only applications using WPF (Windows Presentation Foundation) technology, like Telerik Test Studio, while others provide support only for Windows Forms.
• Other limitations observed during testing or investigation:
• Many tools require the application to be already running.
• Some of the tools stop the playback if a pop-up is involved (like QA Wizard Pro).
• Others cannot select „browse" for a file on the local hard drive or even select a value from the menu (Open or Save).
• Other challenges related to performance testing on desktop solutions could be the multitude of operating systems and versions, hardware configurations, or simulating real environments.
Enjoy testing!
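The referential-integrity risk discussed above can be made concrete with a small sketch. The schema below (customers and orders) and the generator are hypothetical, not taken from the article: the generator only hands out foreign keys that actually exist in the parent table, so the generated load data stays consistent.

```ruby
# Hypothetical sketch: generate performance-test rows while preserving
# referential integrity between a parent table (customers) and a child
# table (orders). Real schemas, volumes and business rules will differ.
def generate_test_data(customer_count, order_count, rng = Random.new(42))
  customers = (1..customer_count).map { |id| { id: id, name: "customer-#{id}" } }
  ids = customers.map { |c| c[:id] }
  orders = (1..order_count).map do |id|
    # Pick the foreign key from existing customer ids only,
    # so no order ever references a missing customer.
    { id: id, customer_id: ids.sample(random: rng),
      total: rng.rand(1.0..500.0).round(2) }
  end
  { customers: customers, orders: orders }
end

data = generate_test_data(20, 1000)
orphans = data[:orders].reject do |o|
  data[:customers].any? { |c| c[:id] == o[:customer_id] }
end
puts "orphan orders: #{orphans.size}"
```

A real data generator would also enforce the business rules the article mentions (for example, reconciling financial fields across tables) before loading the rows into the database.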


Software Craftsmanship and Lean Startup

Alexandru Bolboaca
Agile Coach and Trainer, with a focus on technical practices @Mozaic Works

Adrian Bolboaca Programmer. Organizational and Technical Trainer and Coach @Mozaic Works

The Lean Startup movement was a revolution in the startup world, although it started from a simple observation. The standard way to create a startup was, until recently, "Build, Measure, Learn":
• An entrepreneur comes up with a product vision.
• After obtaining the necessary financing, he/she builds the product according to the vision.
• The product is tested on the market.
• If it is successful, awesome! If not, it is modified to be closer to the needs of real users.

The problem with this model is that, at the time when the product is tested on the market, it has already been built, meaning that money and effort were invested in it. From the technical point of view, the product was often rushed, the programmers worked under pressure and the code is difficult to change and full of bugs. Steve Blank and Eric Ries came up with the idea to reverse this cycle and use "Learn, Measure, Build" instead:
• An entrepreneur comes up with a product vision. The vision is turned into a set of hypotheses.
• Each hypothesis is tested through an experiment. The result of the experiment is measurable and is compared with the expectations defined in the hypothesis.
• Once a hypothesis has been validated, the solution can be developed and packaged in a product.
While the experiment can include the development of a working prototype for an application, it is not really necessary. The art is in defining the minimum investment that validates the hypothesis. The most common experiments, especially for online products, are "A/B tests": two possible implementations compete and are "voted on" by potential users. The most popular is usually chosen as the winner.
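The A/B testing idea can be sketched in a few lines of Ruby; the variant names and numbers below are invented for illustration, not taken from the article.

```ruby
# Hypothetical A/B experiment evaluation: each variant records how many
# visitors saw it and how many converted; the higher conversion rate wins.
Variant = Struct.new(:name, :visitors, :conversions) do
  def rate
    conversions.to_f / visitors
  end
end

def winner(a, b)
  a.rate >= b.rate ? a : b
end

a = Variant.new("A", 1000, 110)
b = Variant.new("B", 1000, 86)
best = winner(a, b)
puts "#{best.name} wins with #{(best.rate * 100).round(1)}% conversion"
```

A real experiment would also check statistical significance before declaring a winner, rather than comparing raw rates.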

On a technical level, Lean Startup requires high agility, due to working in an unknown environment, under discovery. If the standard product development model assumes that the vision and the current and future features are all known, in a lean startup the vision can change depending on the market (through pivoting), features are discovered through experiments and the next requirements are often surprising. Flickr is a classic example. Flickr is well known as the image storage service acquired by Yahoo. The initial vision was to build a game called "Game Neverending" that had one feature allowing the player to add and share photos. The team realized after a while that the game was not well received, but the photo sharing feature was. They decided to focus on photo sharing and stop developing the game. The Lean Startup literature barely mentions the technical aspects. The authors had two reasons: the revolution had to start from the business side, and they assumed programmers would know what technical practices to use. Imagine, though, the context of many online startups today. Many of them deploy a new feature hundreds of times a day. Each deployment is either an A/B test or a validated feature. This development rhythm requires


programming skills such as:
• Incremental thinking – splitting a large feature into very small ones that can be rapidly implemented in order to get fast feedback from the market
• Automated testing – a combination of unit tests, acceptance tests, performance tests and security tests is necessary to rapidly validate any change in the code
• Design open to change – with an inflexible design, any change takes too long
• Code that is easy to modify – following common coding guidelines, optimizing for readability and writing code that is easy to understand are essential for keeping a high development speed
• Refactoring – the design will inevitably be closed to change in certain points. Fast refactoring is an essential skill to turn it into a design that is easy to change.
In addition to the technical practices, proficient teams often use clear workflows that commonly include: monitoring, reverting to the previous version when major bugs appear, validation gates before deployment etc. Not all startups need to deploy hundreds of times a day. Sometimes it is enough to have a feedback cycle of under a week. The necessary programming skills are the same; the only difference is that the work can be split into larger increments. If the link with Software Craftsmanship is not yet clear, let's explore it some more. As we showed in the article on Software Craftsmanship [http://www.Software_Craftsmanship__404], the movement appeared to allow reducing the cost of change by applying known technical practices and by discovering new ones. Some of the practices we know and use today are:
• Test Driven Development, to incrementally design a solution adapted to the problem and open to change
• Automated testing, to avoid regressions
• SOLID principles, design patterns and design knowledge, to avoid closing the design to change
• Clean Code, to write code that is easy to understand
• Refactoring, to bring the design back to the necessary level when it is unfit
When should a team working in a startup invest in such practices? The answer should be easy for anyone following the lean startup model: as long as you are still discovering the customers through experiments, there is no need to invest more time than necessary. The best experiments do not require implementation. If code is required after all, the most important thing is to write it as fast as possible, even though mistakes can be made. Once a feature has been validated, the code needs to be written, or re-written, so that it is easy to modify. The risk, otherwise, is to benefit too late from the learnings of the experiments. It is important to mention that developers who have practiced these techniques for long enough can implement faster when using them. This is the Software


Craftsmanship ideal: develop the skills so well that they become the implicit and the fastest way to build software, especially when the development speed is very important.


Lean Startup needs Software Craftsmen. The best experiments require zero code. Sometimes, code is needed; a Software Craftsman is capable of implementing it fast and without adding problems. If, in the discovery phase, practices such as TDD, automated testing or refactoring can be skipped, the implementation phase needs them badly, to allow fast deployment of validated features. The faster they are deployed, the higher the chance of having paying customers, ensuring the survival of the company and increasing its chances of success.
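As a small taste of the test-first practice the implementation phase relies on, here is a minimal sketch in Ruby; the feature, names and numbers are invented for illustration only.

```ruby
# Minimal TDD-flavoured sketch: the checks below would be written first
# and drive the tiny implementation. The feature itself is hypothetical.
def discount(price, loyal_customer)
  loyal_customer ? (price * 0.9).round(2) : price
end

# The "tests" that drove the implementation:
raise "loyal customers get 10% off" unless discount(100.0, true) == 90.0
raise "others pay full price" unless discount(100.0, false) == 100.0
puts "all checks passed"
```

The point is not the example itself but the cycle: a failing check is written first, the smallest code that passes it is added, and the design is then refactored while the checks keep guarding against regressions.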



Getting started with Vagrant


How many times have you heard „But it works on my machine" or „But it works on my local"? How long does it take to set up an environment? How many times have you encountered differences between the production and development environments? Imagine an ideal world where all developers work on the same pre-built platform and the development and production platforms share the same specs. This world exists and it's called virtualization. Vagrant is a virtualization tool which has an answer to all these questions, making this ideal world a reality. It can be used to create and configure lightweight, reproducible and portable development environments. Vagrant is written in Ruby by Mitchell Hashimoto (https:// The project started in 2010 as a side-project, in Mitchell Hashimoto's free hours. In the next two years Vagrant grew and started to be trusted and used by a range of users, from individuals to teams at large companies. In 2012 Mitchell formed his own company, called HashiCorp, in order to develop and to provide professional training and support for Vagrant. Currently Vagrant is an open source project, the result of hundreds of individuals' contributions ( mitchellh/vagrant). To achieve its magic, Vagrant stands on the shoulders of giants, acting as a layer on top of VirtualBox, VMware, AWS or other providers. Industry-standard provisioning tools such as shell scripts, Chef or Puppet can also be used to automatically set up a new environment. Vagrant is usable in projects written in other programming languages such as PHP, Python, Java, C# or JavaScript and can be installed on Linux, Mac OS X or Windows systems. Vagrant offers transient boxes that are portable and can move around, with no permanent residence, just like a vagrant. If you're a developer, you can use Vagrant to isolate dependencies and their configuration in a single disposable, consistent environment.
Once the Vagrantfile is created, you just need to run the vagrant up command and everything is up and running on your machine. As an operations engineer, Vagrant gives you a disposable environment and a consistent workflow for developing and testing infrastructure management scripts. You can quickly test things like shell scripts, Chef cookbooks, Puppet modules and more, using local virtualization such as VirtualBox or VMware. Then, with the same configuration, you can test these scripts on remote clouds such as AWS or RackSpace with the same workflow. As a designer, by using Vagrant you can simply set up your environment based on the Vagrantfile, which is already configured, without worrying about how to get the app running again. By using Vagrant you can achieve the following:
• environment per project – you can have different configuration files for each project
• same configuration file for developing, pre-staging,

staging and production environments
• easy to define and transport the configuration file (Vagrantfile)
• easy to tear down and provision: infrastructure as code
• versionable configuration files – you can commit all your cookbooks and the Vagrantfile
• shared across the team – by using the same configuration file (Vagrantfile)
Before diving into the first Vagrant project, you need to install VirtualBox or any other supported provider. For this example I used VirtualBox. The next step is to install Vagrant. For this step you have to download and install the appropriate package or installer from Vagrant's download page (http://www. The installer will automatically add vagrant to the system path, so it will be available in terminals as shown below.

$ vagrant
Usage: vagrant [-v] [-h] command [<args>]
-v, --version  Print the version and exit.
-h, --help     Print this help.
Available subcommands:
box          manages boxes: installation, removal, etc.
destroy      stops and deletes all traces of the vagrant machine
halt         stops the vagrant machine
help         shows the help for a subcommand
init         initializes a new Vagrant environment by creating a Vagrantfile
package      packages a running vagrant environment into a box
plugin       manages plugins: install, uninstall, update, etc.
provision    provisions the vagrant machine
reload       restarts vagrant machine, loads new Vagrantfile configuration
resume       resumes a suspended vagrant machine
ssh          connects to machine via SSH
ssh-config   outputs OpenSSH valid configuration to connect to the machine
status       outputs status of the vagrant machine
suspend      suspends the machine


up           starts and provisions the vagrant environment

The next step is to initialize Vagrant in your project by running the vagrant init command:

$ vagrant init
A `Vagrantfile` has been placed in this directory. You are
now ready to `vagrant up` your first virtual environment!
Please read the comments in the Vagrantfile as well as the
documentation for more information on using Vagrant.

After running this command, a new file called Vagrantfile is generated in your project folder. The Vagrantfile is written in Ruby, but knowledge of the Ruby programming language is not necessary to make modifications, since it is mostly simple variable assignment. The Vagrantfile has the following roles:
• Select base box
• Choose virtualization provider
• Configure VM parameters
• Configure Networking
• Tweak SSH settings
• Mount local folders
• Provision machine

Select base box
The automatically generated Vagrantfile contains the following lines:

# Every Vagrant virtual environment requires a box to build off of.
config.vm.box = "precise32"
# The url from where the box will be fetched if it
# doesn't already exist on the user's system.
config.vm.box_url = ""

The config.vm.box line describes the machine type required for the project. The box is actually a skeleton from which Vagrant machines are constructed. Boxes are portable files which can be used by anyone, on any platform that runs Vagrant. Boxes are related to the providers, so when choosing a base box you have to be aware of the supported provider. In order to choose a base box, you can access the web site where you can find a list of available boxes. Vagrant also offers the possibility of creating custom boxes; a nice tool which can be used to create custom boxes is veewee.
There are two ways to add a base box. One option is to define the base box in the Vagrantfile; when you run the vagrant up command, the box will be added. The second option is to execute the command below:

$ vagrant box add <name> <url>

The <name> parameter can be anything you want, just make sure it is the same value defined in the config.vm.box directive from the Vagrantfile. <url> is the location of the box. This can be a path on your local file system or an HTTP URL to a remote box.

$ vagrant box add precise32 http://files.vagrantup.com/

The available commands for boxes are described below:

$ vagrant box -h
Usage: vagrant box <command> [<args>]
Available subcommands:
add
list
remove
repackage
For help on any individual command run `vagrant box COMMAND -h`

Choose virtualization provider
There are two ways to specify the provider, similar to those described for the base box. The first option is to specify the provider from the command line as a parameter. If you choose this solution, you have to make sure that the argument from the command line matches the Vagrantfile's config.vm.provider directive.

$ vagrant up --provider=virtualbox



Configure VM parameters
The Vagrantfile offers the possibility to configure the providers by adding the vb.customize directive. For example, if you want to increase the memory, you can do as shown below.

vb.customize ["modifyvm", :id, "--memory", "1024"]

# Provider-specific configuration so you can fine-tune various
# backing providers for Vagrant. These expose provider-specific options.
# Example for VirtualBox:
# config.vm.provider :virtualbox do |vb|
#   # Don't boot with headless mode
#   vb.gui = true
#
#   # Use VBoxManage to customize the VM. For example to change memory:
#   vb.customize ["modifyvm", :id, "--memory", "2048"]
# end

Mount local folders
While many people edit files from virtual machines just using plain terminal-based editors over SSH, Vagrant offers the possibility to automatically sync files between the guest and host machines, by using synced folders. By default, Vagrant shares your project folder to the /vagrant directory on the guest machine, so the /vagrant directory that can be seen on the guest machine is actually the same directory that you have on your host machine. You won't have to use the Upload and Download options from your IDE anymore in order to sync files between the host and the guest machines. If you want to change the synced directory on the guest machine, you can add the directive config.vm.synced_folder "../data", "/vagrant_data" in the Vagrantfile.

Configure Networking

Accessing the web pages from inside the guest machine is not very convenient, so Vagrant offers networking features in order to access the guest machine from the host machine. The Vagrantfile has three directives which can be used to configure the network.
• :forwarded_port, guest: 80, host: 8080

Translated, this directive means that the Apache server from our guest machine, created by Vagrant, can be accessed on our host machine by using the url localhost:8080. This actually means that all the network traffic sent to port 8080 on the host machine is forwarded to port 80 on the guest machine.

# Create a forwarded port mapping which allows access to a specific port
# within the machine from a port on the host machine. In the example below,
# accessing "localhost:8080" will access port 80 on the guest machine.
:forwarded_port, guest: 80, host: 8080

• :public_network

# Create a public network, which generally maps to a bridged network.
# Bridged networks make the machine appear as another physical device on
# your network.
:public_network

• :private_network, ip: ""

# Create a private network, which allows host-only access to the machine
# using a specific IP.
:private_network, ip: ""
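In the Vagrantfile these directives are passed to config.vm.network. A combined sketch with all three options might look like the following; the port numbers and the private IP are illustrative values, not taken from the article.

```ruby
# Illustrative sketch of the three networking options in one Vagrantfile.
Vagrant.configure("2") do |config|
  # Host port 8080 forwards to port 80 on the guest.
  config.vm.network :forwarded_port, guest: 80, host: 8080

  # Bridged network: the guest shows up as another device on the LAN.
  # config.vm.network :public_network

  # Host-only network with a fixed private address.
  config.vm.network :private_network, ip: ""
end
```

In practice you would normally pick one or two of these options per machine rather than all three.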

Tweak SSH settings

Vagrant.configure("2") do |config|
  config.vm.synced_folder "../data", "/vagrant_data"
end

Provision Machine

Provisioning isn't a matter that developers usually care about, as sysadmins typically handle it. The idea is to somehow record the software and the configurations made on the server, in order to be able to replicate them on other servers. In the old days sysadmins kept a wiki of the executed commands, but this was a terrible idea. Another option was to create .box or .iso backups, so new servers could be configured based on those files. But keeping these backups up to date requires a lot of work and, as time goes by, it's quite hard to keep all the machines synced. Provisioning nowadays offers the possibility to add specific software, create configuration files, execute commands, manage services or create users, by using modern provisioning systems. Vagrant can integrate the following provisioning systems: Shell, Ansible, Chef Solo, Chef Client, Puppet, SaltStack. The two most popular provisioning systems are Chef and Puppet, both supported by large communities. Both are written in Ruby and have similar features, such as modularized components, packages for software installs or templates for custom files. As a note, both systems are open source projects with an enterprise revenue model.

Provisioning with Shell
Provisioning with Shell in Vagrant is quite easy. There are three ways to do it: you can write an inline command, or you can specify the path to a shell script, which can be either in an internal or an external folder.

Inline command

config.vm.provision :shell, :inline => "curl -L | bash -s stable"

Internal path

config.vm.provision :shell, :path => "install-rvm.sh", :args => "stable"

External path

config.vm.provision :shell, :path => "https://example.com/", :args => "stable"

The Vagrantfile also offers the possibility to configure the config.ssh namespace, in order to specify username, host, port, guest_port, private_key_path, forward_agent, forward_x11 and shell.

Vagrant.configure("2") do |config|
  config.ssh.private_key_path = "~/.ssh/id_rsa"
  config.ssh.forward_agent = true
end

Provisioning with Puppet
Puppet modules can be downloaded from puppetlabs. In order to configure Vagrant with Puppet, you have to set up the Puppet directives as follows:


vagrant_main/recipes/default.rb

include_recipe "apache2"
include_recipe "apache2::mod_rewrite"
package "mysql-server" do
  package_name value_for_platform("default" => "mysql-server")
  action :install
end


Table 1. Chef vs. Puppet

config.vm.provision :puppet do |puppet|
  puppet.manifests_path = "./tools/puppet/manifests/"
  puppet.module_path = "./tools/puppet/modules"
  puppet.manifest_file = "init.pp"
  puppet.options = ['--verbose']
end

init.pp

include mysql::server
class { '::mysql::server':
  root_password => 'strongpassword'
}
class mysql::server (
  $config_file = $mysql::params::config_file,
  $manage_config_file = $mysql::params::manage_config_file,
  $package_ensure = $mysql::params::server_package_ensure,
)

Mysql params.pp:

class mysql::params {
  $manage_config_file = true
  $old_root_password = ''
  $root_password = "strongpassword"
}

NameVirtualHost *:80
<VirtualHost *:80>
  ServerAdmin webmaster@localhost
  ServerName
  ServerAlias
  DocumentRoot /var/www
  <Directory "/var/www/sites/all/">
    Options Indexes FollowSymLinks MultiViews
    AllowOverride All
    Order allow,deny
    Allow from all
  </Directory>
  ErrorLog <%= node[:apache][:log_dir] %>/tsm-error.log
  CustomLog <%= node[:apache][:log_dir] %>/tsm-access.log combined
  LogLevel warn
</VirtualHost>
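As a side note to the provisioner examples in this article, several provisioners can be declared in the same Vagrantfile, and Vagrant runs them in the order in which they are defined. A combined sketch, where the box name, inline command and paths are illustrative values:

```ruby
# Illustrative sketch: shell and Chef Solo provisioners declared together;
# Vagrant executes them in the order in which they appear.
Vagrant.configure("2") do |config|
  config.vm.box = "precise32"

  # Quick system preparation with an inline shell command.
  config.vm.provision :shell, :inline => "apt-get update -y"

  # Then hand the detailed setup over to Chef Solo cookbooks.
  config.vm.provision :chef_solo do |chef|
    chef.cookbooks_path = "cookbooks"
    chef.add_recipe "vagrant_main"
  end
end
```

This keeps one-off bootstrap steps in shell while the repeatable configuration lives in cookbooks.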

If you're still undecided and you don't know which provisioner to choose, you could have a look at Table 1 for a helping hand.

Fig 2. Cookbook files structure

Fig. 1 Puppet Module's Structure

Mysql template:

[client]
password=<%= scope.lookupvar('mysql::root_password') %>

Provisioning with Chef Solo

Chef Solo cookbooks can be downloaded from here: https:// For Chef Solo, the Vagrantfile has to be edited as shown below, in order to configure the path to the cookbooks.

config.vm.provision :chef_solo do |chef|
  chef.cookbooks_path = "cookbooks"
  chef.add_recipe "vagrant_main"
  # chef.roles_path = "../my-recipes/roles"
  # chef.data_bags_path = "../mydata_bags"


  # chef.add_role "web"
  #
  # You may also specify custom JSON attributes:
  # chef.json = { :mysql_password => "foo" }
end



Once you are done with the Vagrantfile configuration, you are ready to create the virtual machine. For this step you should open your command line interface and navigate to the project's folder, where the Vagrantfile should also be placed in order to sync the folders. Then just type vagrant up and your guest machine will be created. The first time you run vagrant up it will take a while, because Vagrant will download the configured box. In our case, since I didn't add the virtual box with the vagrant box add <name> <url> command, the box will be added by the vagrant up command, as shown below.

D:\projects\tsm> vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
[default] Box 'precise32' was not found. Fetching box from specified URL
for the provider 'virtualbox'. Note that if the URL does not have a box
for this provider, you should interrupt Vagrant now and add the box
yourself. Otherwise Vagrant will attempt to download the full box prior
to discovering this error.
Downloading box from URL:
Progress: 1% <Rate: 41933/s, Estimated time remain-


After the guest machine has been created, you can simply type vagrant ssh in order to access it. On Windows systems you can install the PuTTY SSH client if you want to, adding the authentication information as shown below:

D:\projects\tsm> vagrant ssh
'ssh' executable not found in any directories in the %PATH% variable.
Is an SSH client installed? Try installing Cygwin, MinGW or Git, all
of which contain an SSH client. Or use the PuTTY SSH client with the
following authentication information shown below:
Host:
Port: 2222
Username: vagrant
Private key: C:/Users/Carmen/.vagrant.d/insecure_private_key

If you have modified only the provisioning scripts and want to quickly test them, you can just run vagrant provision or vagrant --provision-with x,y,z, where x,y,z represent the provisioners, such as :shell or :chef_solo. If you want to save the state of the machine rather than doing a full boot every time, you can run vagrant suspend; with the vagrant resume command you can resume a Vagrant machine that was suspended. If you want to shut down the machine, you should use the vagrant halt command. By using this command you can save space, but it will take a longer time to restart the machine because of booting; with vagrant up, you'll have your machine running again. If you want to run halt and up because you just made a modification to the Vagrantfile, for example, you can quickly run vagrant reload. If you want to stop the machine and remove all the allocated resources, you can do it by typing vagrant destroy.

Fig 4. Vagrant workflow

Vagrant vs. Docker

Docker is an open source project to pack, ship and run any application as a lightweight container. The main idea is to create components per application. The component is actually a snapshot of the application. After making changes in the component, you can commit the new state of the snapshot, so rolling back to a previous state is quite easy. This project is awesome because it doesn't involve virtual machines as Vagrant does, which means that startup time and resource usage are better. Also, you can forget about cookbooks if you don't want to use Chef anymore. The interactive command line tutorial ( is also very intuitive, and in less than 20 minutes you can have an overview of what Docker is. By comparing the workflows, we can state the following points:
1. Docker is better on the provisioning side.
2. Rolling back is easier with Docker because of the snapshot system.
3. Docker raised a new deployment model.
4. Docker is supported only on Ubuntu Linux machines.
5. Docker is not recommended in production, since it is still in the development phase.
6. Vagrant is better because it keeps source code and deployment information in the same place.
7. Vagrant is better because it is stable and can be used in production.
8. Vagrant is better because it can be integrated with Linux, Windows and Mac OS X systems.

Carmen Frățilă
Software engineer @ 3Pillar Global



Startup marketing: challenges and practical ideas


Budget, team and limited resources in general, anonymity and the need to create awareness, sometimes the need to educate the market and the need to generate sales opportunities – these are just some of the challenges faced by startup businesses. In this context, the process and the marketing approach applied by startups have at least some particularities, often addressed in the marketing literature and certainly „lived" by many organizations in the early stages of their existence.

Sorina Mone
Marketing manager @ Fortech



First of all, perhaps more than other processes, marketing in startups is innovative. Limited resources put managers and marketing specialists (if there are dedicated people!) in a position where they need to find solutions to these shortcomings, often unconventional ones. It is often said that desperation leads to innovation. Exactly for this reason, the term „growth hacking" – introduced by Sean Ellis and centered on creativity and the use of unconventional methods to achieve rapid and spectacular growth – is very popular in the context of startups in the technology area. Surely, it does not involve reinventing the wheel, but rather using already popular concepts and practices (such as those in the area of content marketing or community marketing) in a unique way that attracts attention and facilitates rapid dissemination. Products such as Dropbox, LinkedIn and YouTube are often given as examples of growth hacking. Furthermore, the approach is short term. This is rather natural: the business has no history yet, estimates are hard to define and, in addition, the aim is, to a certain extent, survival. Unfortunately sometimes, and especially in less mature economies such as our country's, factors

from the external environment, especially the macro-environment (unstable legislation, socio-economic situation etc.), also contribute to this. Finally, marketing is strongly influenced by the personality of the manager, who usually is also the business owner (entrepreneur). Often, the organizational culture of a young or small company revolves around the entrepreneur and adopts defining characteristics of this person's personality. Essentially, this is not a bad thing, but situations may occur where too much reliance on the entrepreneur causes shortcomings. A simple example is blockage at the decision-making level: there may be situations where no decisions are taken in the absence of the manager-entrepreneur and nothing gets done, while agility should be one of a startup's strengths. Given these challenges, the marketing strategy (the segmentation – targeting – positioning process and the entire marketing mix: product, price, promotion, distribution) in a startup consists most often of identifying a set of priorities, taking decisions and executing them. Some references are essential in this context: First of all, it's necessary to provide answers to a series of questions aimed to

define the positioning of the product: What consumer segments does it address? What needs or problems does it solve for these consumers? What makes it different from other similar products that are already on the market? Knowing the future customer in as much detail as possible is an essential aspect, since it may provide the basis for important decisions, including decisions in the product conception and development phase (with respect to functionalities, characteristics and so on). In this context, marketers usually resort to creating a very comprehensive profile of the user (i.e. a user persona) which includes demographic, social and economic characteristics such as age, occupation, income, habits, interests and so on. Once the target consumer, the need and the competitive advantage of the product are identified, this advantage has to be communicated. This is when branding comes in, a process which, at a basic level, includes choosing a name and articulating a mission and vision statement, as well as some key value propositions. Especially when dealing with B2C software products that are addressed to individual consumers, it's important for the name to be easily identifiable and memorable. The key messages, in order to actually reach the consumers and make them aware of the product, have to articulate the benefits, the solutions to problems they are confronted with, rather than objective characteristics of the product. Also, as early as possible, intellectual property protection issues have to be considered: brand registration, web domain acquisition, „parking" of the pages on social networking platforms and so on.

Then follows the most interesting and challenging part for a marketing professional in a startup: generating demand for the product and creating opportunities for sales. The activities that define this stage can be grouped into two categories: Outbound marketing includes activities such as advertising, email marketing and telemarketing, attending fairs and exhibitions etc. - generally, activities that involve reaching out to the target consumers. Inbound marketing includes marketing efforts which, on the contrary, contribute to the consumer finding the product or the company, and not the other way around. In the area of digital marketing, inbound marketing may involve developing a website that attracts visitors naturally, through search engine optimization, social media, blogging, press coverage etc. Inbound marketing efforts require time, but are generally less costly than advertising (as they may only require creativity and editorial capabilities) and, very importantly, once built, they keep producing results and long-term impact. All these aspects may not only make the difference between survival and failure for a startup, but can also offer the premises for growth and expansion. And this is where growth hacking comes in, especially when the startup aims to attract financing for further development. Investors are interested in success stories, not just ideas with potential and, most importantly, they want healthy businesses. This is why growth hacking has to be approached with caution and care: figures are important, but they must have a solid basis, meaning a quality product which delights and does not disappoint its users. Having the right

experience with a product can turn users into the main engine of business growth, an ideal situation for a startup with great plans.

no. 20/February, 2014



Cluj IT Cluster on Entrepreneurship


From the very first day the Cluj IT Cluster was created, we were confronted with the question of how we plan to support local entrepreneurship. The question has accompanied us constantly in the more than one year since the Cluster was founded, at all the conferences and events attended by our representatives.

Daniel Homorodean Member in Board of Directors @ Cluj IT Cluster



The question was - of course - always answered, though we always felt that to a certain extent the answer was not enough, neither for the ones who asked, nor for us, the members of the Cluster. This comes from the fact that we have assumed an ambitious role in supporting the evolution of the Romanian IT industry, we have created expectations that we have to fulfil, and we are aware that we will be evaluated by the objective results we deliver. The Cluj IT Cluster came into being out of a direct need, from the understanding that the way the Romanian IT industry has grown and worked for many years may not be sustainable in the long term. There are over 8,000 IT professionals in Cluj, an impressive concentration compared to the population of the city. We got to this number through organic and relatively fast growth that relied primarily on the low cost of the labour force, and later on a cost that is competitive for the qualification and quality of our programmers. Still, relying on the cost of the labour force as the main growth factor is not just a dangerous strategy, but a suicidal one, as the continuously growing cost of production in Romania tends to diminish the advantage we have on our traditional markets (Western Europe and the U.S.). It is not a secret for anyone that the only

way to ensure growth in the long run is to create additional and sustainable value, and the healthiest way to achieve this is by developing and capitalizing on innovative products. How do we develop and capitalize on innovative products in Romania? Without an answer to this key question, without outlined solutions, the answers to all other questions are fragile. Innovation is not a one-time act, it is a process. This process should be developed and carried out with a lot of maturity and wisdom. Indeed, the preparation for this takes time, more time than any of us would want, because it is in itself a learning process, and changing direction from outsourcing to innovation requires a change of culture. During the first year in the life of the Cluster we looked for different ways to lay the cornerstone of this change. We say that it is a change of culture because we envision moving from a fragmented IT environment, with a lot of mistrust, to an environment based on cooperation between companies, universities and public institutions. This is about changing the way fundamental research gets to serve pragmatic and lucrative purposes, through technological transfer, instead of being confined to the narrow circle of researchers. We are talking about a change in our corporate culture,

as our companies do not consist of "resources" hired by the hour, but are made of valuable people who can and want, through their initiatives, to contribute to the success of their companies, people who deserve to benefit from this success. We are talking about a cultural change from rigid systems, reluctant to take risks, where every aspect is strictly regulated, towards dynamic and adaptive systems where courageous initiatives are supported and can flourish organically. All these changes required, and still require, time. But now, after over a year of working together in the Cluster, even if it is less visible from the outside, we know that we are on the right track and that we can take the next steps with great boldness. Currently in Romania, and especially in Cluj, we are in a period of great effervescence of the start-up culture in IT. There is access to information; there are lots of inspirational examples in the more developed markets; we begin to have people and organisations that bring this movement together through events, meeting centres, co-working spaces and even accelerators. We also know that there is money that can be reached; it is not impossible to get once we are able to support a value proposition and can convince investors that we have the ability and the maturity to execute our ideas. We know that innovative products are built and supported by entrepreneurs; therefore it is essential to have them and to support them. Now we have enough organizational maturity to be able to support such a process in a relevant way. Cluj IT aims to be a catalyst for the entrepreneurial environment in Cluj and Romania. We aim to be a unifying factor and to ease the communication processes within the ecosystem of start-ups, thus creating an environment where they can grow in an organic, collaborative way, not a fragmented or circumstantial one.
We intend to foster cooperation between start-ups and mature businesses, academia and the public administration. We aim to develop a durable framework for entrepreneurial education, addressed not just to start-ups but also to mature organizations dealing with the challenge of reinventing themselves, of discovering the potential of their employees and of supporting internal entrepreneurship (the term intrapreneurship was coined for this). Between the mature business environment and the start-ups there is still a gap that we need to bridge in order to create a healthy ecosystem. Mature companies benefit from a good level of resources, both human and financial; they also have operational experience and relevant partnerships, they understand the business verticals and have a good knowledge of the mechanisms of the external markets where they operate. However, many of them have rigid structures and processes, there is a great reluctance to take risks, and intrapreneurial initiative is not encouraged. On the other hand, young entrepreneurs don't have enough resources and operational experience, they have poor knowledge of the business segments they would like to serve, and they don't know enough about the specific geographic markets in which their ideas could turn into successes. Enthusiasm is essential, but it does not compensate for the lack of experience, and the unavoidable difficulties in the process of entrepreneurial self-education may become discouraging. The two sides need each other; without filling this gap, the Romanian IT environment might evolve through struggles rather than attaining its full potential. Therefore it is very important for us to address this issue and to create a relevant dialogue between the

two sides, and to encourage the development of partnerships. Also, the exchange of information and knowledge between IT companies is essential, especially concerning methodologies and best practices, as well as specific geographic markets. We aim to facilitate access to relevant consultancy from mature markets, access to operational or financial partners, and direct funding, especially venture capital. Concerning the relationship with the academic environment, a permanent brokerage program is being prepared in order to facilitate cooperation in research, to align research with market needs and to foster the technological transfer needed to build innovative products. So, we do have a strategy. You will of course ask whether we also have a plan. Indeed we have, but it matters less to declare it and much more to prove it. Because, as many investors and entrepreneurs say, ideas are valuable, but execution is what matters most.



Interview with Radu Georgescu


We are beginning to publish a series of interviews from How To Web 2013, the most important event dedicated to innovation and entrepreneurship in Eastern Europe. Radu Georgescu is a well-known IT entrepreneur from Romania, having built several products: RAV Antivirus, sold to Microsoft; Gecad ePayment, sold to Naspers; and, most recently, Avangate, whose sale was his latest big transaction.

Ovidiu Măţan, PMP Editor-in-chief Today Software Magazine

Ovidiu Mățan: Hello, Radu! You are one of the most famous entrepreneurs in Romania! Can you tell us, for the readers of Today Software Magazine, how it all started? Where did you start from? Radu Georgescu: It is history; I started in 1992, when I graduated from the faculty, sold my first programs, after which I started others, some of which failed. Were the first applications antiviruses? I started by writing an application on top of AutoCAD/Autodesk; after that, I continued with three other products, all three failed, and the antivirus was sold to Microsoft. After that, I set up other companies, some of them failed, and Gecad ePayment was sold to Naspers. Speaking of Gecad and the famous sale to Microsoft, can you tell us whether Microsoft was attracted by the technical aspect of the application, or was there also a marketing effort towards them? Microsoft was interested only in the technical aspect. Microsoft purchased the technology and the technical staff that came along with it. Is the technical staff still working for Microsoft?



Absolutely, only not from Bucharest, but from Redmond. Practically, the company built the technology for Microsoft, and Microsoft made an offer to the respective people to move to Redmond; they have all moved there and are still there. They are the main part of the team that is developing security for Microsoft today. Related to the recent Avangate success, can you tell us a few words about it? How long did it take to develop it? The company started in 2006, seven years ago, growing by 70% each year. Were the clients Romanian? The company was multinational, with headquarters in the United States and offices in Romania, Holland, China, Russia and Cambodia. Startups are becoming fashionable in Romania; how do you see their evolution in the future? Can we expect them to outgrow outsourcing at a certain point? I am also a vice-president of ANIS, whose president is Andrei Pitis, and together we have set ourselves the goal of persuading the outsourcing companies to also build some small products of their own. I think outsourcing has the following problem: it is

generally dependent on one or two clients, and it is a cheap sale of person-minutes, unscalable, while a product is exponentially scalable, independent of the provider. We are trying to develop this trend of migration from outsourcing towards products and I hope it will happen soon. We can already see it happening, slowly. ANIS has even done a study, which was released two weeks ago. Which do you think will be the domains of interest, with potential in the future? I attended Robin's presentation this morning, which was exceptional and holds a lot of truth. I don't know; I am not a visionary, I cannot answer this question. Radu Georgescu: Now let me ask you a question. Why don't you write your magazine in English? Ovidiu Mățan: But it does come out in English, too. Usually, the English edition comes out a month later. Radu Georgescu: Excellent! Congratulations! A software magazine written in Romania! And do you also have readers abroad? OM: Yes, but most of them are readers from Romania - a few thousand - and a few hundred from abroad. The last part is technical and the first part is about events. RG: It would be nice for you to turn it into an international magazine, such as TechCrunch or The Next Web. It would be awesome for you to do something like

this! OM: What is your opinion on Google Glass and how will the technology progress in the future? RG: I don't know if Google Glass will be the winner, but it is obvious that what the Americans call wearables will be part of our lives. It may be the glasses from Google or from elsewhere, it may be watches, or shoes, or phones, or headbands, I have no idea, but I strongly believe that something will be. What do you think of the Romanian startups? There are hardly any successful startups. Oh, yes, there are! There is Oxygen XML, which is the coolest XML editing tool in the world, an extraordinary business. Avangate didn't seem spectacular at first, either … Why is it that all the good companies are the ones established by Radu Georgescu and Florin Talpes?! There are thousands of extraordinary entrepreneurs. Let's judge a company by what it is and not by the person behind it. Oxygen XML, without my having any connection to it, is selling because it is very good; isn't that spectacular enough? Could they have done an even better job promoting it? Yes, but the product is an extraordinary one. What is the ingredient that the Romanian startups may be lacking? It is being built. Time is what's lacking. Let's find successful examples. People are

building, people are failing; and we need to learn from the experience of my failures, of your failures, or those of other people, and to try more and more. The angel investment infrastructure is being built, the VC infrastructure is being built, there are events; everything is being built and, in time, will be there. Think of MavenHut, Ubervu, Softpedia, think of Emi Gal. They are all successful examples. Is there a recipe for success? What I am trying to do together with Andrei is to persuade the outsourcing companies, which are the ones capable of doing this, to build small products of their own. A piece of advice for the young people who wish to set up a startup? I do not give advice, I am not in a position to give advice. One cannot give general advice from the position of some wise man.




Restricted Boltzmann Machines


In the last article I presented a short history of deep learning and listed some of the main techniques that are used. Now I'm going to present the components of a deep learning system.

Deep learning had its first major success in 2006, when Geoffrey Hinton and Ruslan Salakhutdinov published the paper "Reducing the Dimensionality of Data with Neural Networks", which was the first efficient and fast application of Restricted Boltzmann Machines (or RBMs). As the name suggests, RBMs are a type of Boltzmann machine, with some constraints. Boltzmann machines were proposed by Geoffrey Hinton and Terry Sejnowski in 1985 and they were the first neural networks that could learn internal representations (models) of the input data and then use this representation to solve different problems (such as completing images with missing parts). They weren't used for a long time because, without any constraints, the learning algorithm for the internal representation was very inefficient. By definition, Boltzmann machines are generative stochastic recurrent neural networks. The stochastic part means that they have a probabilistic element to them and that the neurons that make up the network do not fire deterministically, but with a certain probability, determined by their inputs. The fact that they are generative means that they learn the joint probability of the input data, which can then be used to generate new data, similar to the original. But there is an alternative way to interpret Boltzmann machines: as energy based graphical models. This means that to each possible input we associate a number, called the energy of the model, and for the combinations that we have in our data we want this energy to be as low as possible, while for other, unlikely data, it should be high. The graphical model for an RBM with 4 input units and 3 hidden units is shown in the accompanying figure.

The hidden layer of the RBM can be thought of as being made of latent factors that determine the input layer. If, for example, we analyze the grades users give to some movies, the input data will be the grades given by a certain user to the movies, and the hidden layer will correspond to categories of movies. These categories are not predefined; the RBM determines them while building its internal model, grouping the movies in such a way that the total energy is minimized. If the input data are pixels, then the hidden layer can be seen as features of objects that could generate those pixels (such as edges, corners, straight lines and other differentiating traits). If we regard RBMs as energy based models, we can use the mathematical apparatus of statistical physics to estimate the probability distributions and then to make predictions. In fact, the Boltzmann distribution used to model the atoms in a gas gave these neural networks their name. The energy of such a model, given the vector v (the input layer), the vector h (the hidden layer), the matrix W (the weights associated with the connections between each neuron from the input layer and the hidden one) and the vectors a and b (which represent the activation thresholds for each neuron, in the input layer and in the hidden layer) can be computed using the following formula:
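The formula in question is the standard RBM energy, restated here with exactly the symbols defined above:

```latex
E(v, h) = -\sum_i a_i v_i \;-\; \sum_j b_j h_j \;-\; \sum_{i,j} v_i W_{ij} h_j
```

In matrix form this is just $-a^T v - b^T h - v^T W h$: the bias terms for each layer plus a quadratic term coupling the two layers through W.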

The formula is nothing to be scared of, it’s just a couple of matrix additions and multiplications. Once we have the energy for a state, its probability is given by:
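That probability is the standard Boltzmann distribution over the model's states:

```latex
P(v, h) = \frac{e^{-E(v, h)}}{Z}
```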

where Z is a normalization factor. And this is where the constraints of the RBM help us. The constraint imposed by RBMs is that the neurons must form a bipartite graph, which in practice is done by organizing them into two separate layers, a visible one and a hidden one; the neurons in each layer have connections to the neurons in the other layer and not to any neuron in the same layer. In the figure, you can see that there are no connections between any of the h's, nor between any of the v's, only between every v and every h. Because the neurons in the visible layer are not connected to each other, it means that for a given value of the hidden layer neurons, the visible ones are conditionally independent of each other. Using this we can easily get the probability for some input data, given the hidden layer:

Graphical model for an RBM with 4 input units and 3 hidden units.





$P(v \mid h) = \prod_i p(v_i \mid h)$, where

$p(v_i = 1 \mid h) = \sigma\big(a_i + \sum_j W_{ij} h_j\big)$

is the activation probability for a single neuron, $\sigma(x) = 1 / (1 + e^{-x})$ being the logistic function.

In a similar way we can define the probability for the hidden layer, having the visible layer fixed.
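Concretely, with the visible layer fixed, that probability takes the mirror-image form:

```latex
p(h_j = 1 \mid v) = \sigma\Big(b_j + \sum_i v_i W_{ij}\Big)
```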

How does it help us if we know these probabilities? Let’s presume that we know the correct values for the weights and the thresholds of an RBM and that we want to determine what items are in an image. We set the pixels of the image as the input of the RBM and we calculate the activation probabilities of the hidden layer. We can interpret these probabilities as filters learned by the RBM about the possible objects in the images. We take the values of those probabilities and we enter them into another RBM as input data. This RBM will also give out some other probabilities for its hidden layer, and these probabilities are also filters for its own inputs. These filters will be of a higher level and more complex. We repeat this a couple of times, we stack the resulting RBMs and, on top of the last one, we add a classification layer (such as logistic regression) and we get ourselves a Deep Belief Network.

There are many implementations of RBMs in machine learning libraries. One such library is scikit-learn, a Python library used by companies such as Evernote and Spotify for their note classification and music recommendation engines. The following code shows how easy it is to train an RBM on images that each contain one digit or one letter and then to visualize the learned filters.

from sklearn.neural_network import BernoulliRBM as RBM
import numpy as np
import matplotlib.pyplot as plt
import cPickle

X, y = cPickle.load(open("letters.pkl"))
X = (X - np.min(X, 0)) / (np.max(X, 0) + 0.0001)  # 0-1 scaling
rbm = RBM(n_components=900, learning_rate=0.05, batch_size=100, n_iter=50)
print("Init rbm")
rbm.fit(X)

plt.figure(figsize=(10.2, 10))
for i, comp in enumerate(rbm.components_):
    plt.subplot(30, 30, i + 1)
    plt.imshow(comp.reshape((20, 20)), cmap=plt.cm.gray_r, interpolation='nearest')
    plt.xticks(())
    plt.yticks(())
plt.suptitle('900 components extracted by RBM', fontsize=16)
plt.show()

Some of the filters learned by the RBM: you can notice filters for the letters B, R, S, for the digits 0, 8, 7 and some others
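The stacking described earlier, several RBMs with a classifier on top, can be sketched with scikit-learn's Pipeline. This is a rough sketch, not the article's code: the digits dataset, the layer sizes and all hyperparameters are illustrative choices of mine.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

# Load 8x8 digit images and scale pixel values to [0, 1],
# as BernoulliRBM expects binary-like inputs.
X, y = load_digits(return_X_y=True)
X = X / 16.0

# Each RBM learns filters over the previous layer's outputs;
# the logistic regression on top turns the final features
# into class predictions.
dbn = Pipeline([
    ("rbm1", BernoulliRBM(n_components=256, learning_rate=0.06,
                          n_iter=10, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.06,
                          n_iter=10, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])

dbn.fit(X[:1500], y[:1500])
accuracy = dbn.score(X[1500:], y[1500:])
print("test accuracy:", accuracy)
```

Pipeline trains the steps in sequence, which matches the greedy layerwise scheme; a full Deep Belief Network would additionally fine-tune all layers together.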

Greedy layerwise training of a DBN

The idea that started the deep learning revolution was this: you can learn, layer by layer, filters that get more and more complex, and at the end you don't work directly with pixels, but with high level features that are much better indicators of what objects are in an image. The learning of the parameters of an RBM is done using an algorithm called "contrastive divergence". This starts with an example from the input data and calculates the values for the hidden layer; then these values are used to simulate what input data they would produce. The weights are then adjusted with the difference between the original input data and the "dreamed" input data (with some outer products involved). This process is repeated for each example of the input data, several times, until either the error is small enough or a predetermined number of iterations has passed.
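The steps above can be sketched in NumPy as a single CD-1 update for one binary input vector. This is a minimal sketch: the function name, the learning rate, and the use of probabilities instead of samples on the downward pass are my own illustrative choices.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, a, b, lr=0.1, rng=np.random):
    """One contrastive divergence (CD-1) update.

    v0: visible input vector; W: weight matrix (visible x hidden);
    a, b: visible and hidden biases. Updates W, a, b in place.
    """
    # Up: activation probabilities of the hidden layer, then sample states
    ph0 = sigmoid(b + v0 @ W)
    h0 = (rng.rand(len(ph0)) < ph0).astype(float)
    # Down: "dream" what input these hidden states would produce
    pv1 = sigmoid(a + W @ h0)
    # Up again, from the dreamed input
    ph1 = sigmoid(b + pv1 @ W)
    # Adjust parameters with the difference between the data-driven
    # and the dreamed statistics (outer products of the two phases)
    W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
    a += lr * (v0 - pv1)
    b += lr * (ph0 - ph1)
    return W, a, b
```

In practice this update runs over mini-batches and many epochs, which is what library implementations such as scikit-learn's BernoulliRBM do internally.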

RBMs are an essential component from which deep learning started and are one of the few models that allow us to efficiently learn an internal representation of the problem we want to solve. In the next article, we will see another approach to learning representations, using autoencoders.

Roland Szabo Junior Python Developer @ 3 Pillar Global



Multithreading in C++11 standard (II)


In the previous article we discussed ways to protect data shared between multiple threads. Sometimes it is not enough just to protect shared data; it is also necessary to synchronize the operations executed by different threads. As a rule, one wants a thread to wait until an event occurs or until a condition becomes true. To this end, the C++ Standard Library provides primitives such as condition variables and futures. Dumitrița Munteanu Software engineer @ Arobs

The C++11 standard provides not one but two implementations of condition variables: std::condition_variable and std::condition_variable_any. Both can be used by including the header <condition_variable>. To facilitate communication between threads, condition variables are usually associated with a mutex, in the case of std::condition_variable, or with any other mechanism that provides mutual exclusion, in the case of std::condition_variable_any. The thread waiting for a condition to become true should first lock a mutex using the std::unique_lock primitive; we shall see later why this is necessary. The mutex is atomically unlocked when the thread starts to wait on the condition variable. When a notification arrives on the condition variable the thread is waiting for, the thread is woken up and locks the mutex again. A practical example is a buffer that is used to transmit data between two threads:

std::mutex mutex;
std::queue<buffer_data> buffer;
std::condition_variable buffer_cond;

void data_preparation_thread()
{
  while(has_data_to_prepare())  //-- (1)
  {
    buffer_data data = prepare_data();
    std::lock_guard<std::mutex> lock(mutex);  //-- (2)
    buffer.push(data);
    buffer_cond.notify_one();  //-- (3)
  }
}

void data_processing_thread()
{
  while(true)
  {
    std::unique_lock<std::mutex> lock(mutex);  //-- (4)
    buffer_cond.wait(lock, []{ return !buffer.empty(); });  //-- (5)
    buffer_data data = buffer.front();
    buffer.pop();
    lock.unlock();  //-- (6)
    process(data);
    if(is_last_data_entry(data))
      break;
  }
}


When data is ready for processing (1), the thread preparing the data locks the mutex (2) in order to protect the buffer while it adds the new values. Then it calls the notify_one() method on the buffer_cond condition variable (3) to notify the thread waiting for data (if any) that the buffer contains data that can be processed. The thread that processes the data from the buffer first locks the mutex, but this time using a std::unique_lock (4). The thread then calls the wait() method on the buffer_cond condition variable, passing it the lock object and a lambda function that expresses the condition the thread waits for. Lambda functions are another feature of the C++11 standard, enabling anonymous functions to be part of other expressions. In this case the lambda function []{ return !buffer.empty(); } is written inline in the source code and checks whether there is data in the buffer that can be processed. The wait() method checks if the condition is true (by calling the lambda function that was passed) and, if so, returns. If the condition is not fulfilled (the lambda function returns false), the wait function unlocks the mutex and blocks the thread. When the condition variable is notified by the call to notify_one() from data_preparation_thread(), the thread processing the data is woken up; it locks the mutex again and re-checks the condition, leaving the wait() method with the mutex still locked if the condition is fulfilled. If the condition is not met, the thread unlocks the mutex and waits again. This is why one uses std::unique_lock: the thread that processes the data must unlock the mutex while waiting and then lock it again afterwards, and std::lock_guard doesn't provide this flexibility.
If the mutex remained locked while the waiting thread is blocked, the thread that prepares the data could not lock the mutex in order to insert the new values into the buffer, and the thread that processes the data would never see the condition met. The flexibility to unlock a std::unique_lock object is used not only when calling the wait() method, but also once the data is ready for processing, before it is actually processed (6). This is because the buffer is only used to transfer data from one thread to another, and one should not keep the mutex locked during data processing, which could be a time consuming operation.


Another synchronization mechanism is the future, i.e. an asynchronous return object (an object that reads a result shared between threads), implemented in the C++11 Standard Library through two template classes declared in the header <future>: unique futures (std::future<>) and shared futures (std::shared_future<>), modeled after the std::unique_ptr and std::shared_ptr mechanisms. For example, suppose we have an operation that performs a very time consuming calculation and whose result is not needed immediately. In this case we can start a new thread to perform the operation in the background, but this implies that we need the result to be transferred back to the method from which the thread was launched, because the std::thread object does not include a mechanism for this. Here comes the template function std::async, also included in the <future> header. std::async launches an asynchronous operation whose result is not immediately necessary. Instead of waiting for a std::thread object to complete its execution and provide the result of the operation, the std::async function returns a std::future that can encapsulate the result. When the result is



necessary, one can call the get() method on the std::future object and the calling thread is blocked until the future object is ready, meaning it can provide the result of the operation. For example:

#include <future>
#include <iostream>

int long_time_computation();
void do_other_stuff();

int main()
{
  std::future<int> the_result = std::async(long_time_computation);
  do_other_stuff();
  std::cout << "The result is " << the_result.get() << std::endl;
}

std::async is a high-level utility which provides an asynchronous result and which deals internally with creating an asynchronous provider and preparing the shared state when the operation ends. This can be emulated with a std::packaged_task object (or std::bind and std::promise) and a std::thread, but using std::async is safer and easier.


A std::packaged_task object wraps a function or another callable object. When the std::packaged_task<> object is called, it calls in turn the associated function or callable object and puts the future object into the ready state, with the value returned by the performed operation as the associated value. This mechanism can be used, for example, when each operation must be executed by a separate thread or run sequentially on a background thread. If a large operation can be divided into several sub-operations, each of these can be mapped onto a std::packaged_task<> instance, which is then handed to an operations manager. The details of each operation are thus abstracted away, and the manager deals only with std::packaged_task<> instances instead of individual functions. For example:

#include <cmath>
#include <functional>
#include <future>
#include <iostream>

int execute(int x, int y)
{
  return std::pow(x, y);
}

int main()
{
  std::packaged_task<int()> task(std::bind(execute, 2, 10));
  std::future<int> result = task.get_future();  //-- (1)
  task();  //-- (2)
  std::cout << "task_bind:\t" << result.get() << '\n';  //-- (3)
}

When the std::packaged_task object is called (2), the execute function associated with it is invoked with the bound parameters 2 and 10, and the result of the operation is saved asynchronously in the shared state read through the std::future object (1). Thus, it is possible to encapsulate an operation in a std::packaged_task and obtain the std::future object which will contain the result of the operation before the std::packaged_task object is even called. When the result of the operation is needed, it can be obtained once the std::future object is in the ready state (3).


As we could see in the Futures section, sending data between threads can be done by passing it as parameters to the thread's function, and a result can be obtained through the value returned via std::async(). Another mechanism for transmitting the data resulting from operations performed on different threads is the std::promise/std::future pair. A std::promise<T> object provides a mechanism for setting a value of type T, which can then be read through a std::future<T> object. While the std::future object allows access to the result data (using the get() method), the promise object is responsible for providing the data (using one of the set_...() methods). For example:

#include <future>
#include <iostream>
#include <string>
#include <thread>

void execute(std::promise<std::string>& promise) {
    std::string str("processed data");
    promise.set_value(std::move(str));                     //-- (3)
}

int main() {
    std::promise<std::string> promise;                     //-- (1)
    std::thread thread(execute, std::ref(promise));        //-- (2)
    std::future<std::string> result(promise.get_future()); //-- (4)
    std::cout << "result: " << result.get() << std::endl;  //-- (5)
    thread.join();
}

After including the <future> header, where the std::promise class is declared, a promise object is declared, specialized for the value it must hold, std::string (1). The std::promise object internally creates a shared state, used to store the std::string value; the std::future object uses this shared state to obtain the value resulting from the thread's operation. The promise is then passed as a parameter to the function run on a separate thread (2). The moment the value of the promise object is set inside the thread (3), the shared state automatically becomes ready. In order to get the value set in the execute function, a std::future object sharing the same state as the std::promise object is needed (4). Once the future object is created, its value can be obtained by calling the get() method (5). It is important to note that the current (main) thread remains blocked until the shared state is ready, that is, until the set_value method is executed (3) and the data is available. The use of objects such as std::promise is not exclusive to multithreaded programming. They can also be used in single-threaded applications, in order to keep a value or an exception to be processed later through a std::future.

Atomics

In addition to the mutual exclusion mechanisms above, the C++11 Standard also introduces atomic types. An atomic type std::atomic<T> can be used with any trivial type T and ensures that any operation involving the std::atomic<T> object is atomic, that is, it is executed in its entirety or not at all. One of the advantages of using atomic types for mutual exclusion is performance, because in this case a lock-free technique is used, which is much more economical than a mutex, which can be relatively expensive in terms of resources and latency.

The main operations provided by the std::atomic class are the store and load functions, which set and return the value stored in the std::atomic object. Another method specific to these objects is the exchange function, which sets a new value for the atomic object while returning the previously set value. There are also two more methods, compare_exchange_weak and compare_exchange_strong, which perform the change atomically, but only if the current value is equal to the expected value. These last two functions can be used to implement lock-free algorithms. For example:

#include <atomic>

std::atomic<int> counter(0); //-- (1)

void increment() {
    ++counter; //-- (2)
}

int query() {
    return counter.load();
}

In this example the <atomic> header, where the template class std::atomic<> is declared, is included first. Then an atomic counter object is declared (1). Basically, any trivial, integral or pointer type can be used as a parameter for the template. Note, however, the initialization of the std::atomic<int> object: it must always be initialized explicitly, because the default constructor does not initialize it completely (direct initialization is used here, since std::atomic is not copyable). Unlike the example presented in the Mutex section, in this case the counter variable can be incremented directly, without using a mutex (2), because both the member functions of the std::atomic object and trivial operations such as assignments, automatic conversions, increment and decrement are guaranteed to run atomically. It is advisable to use atomic types whenever atomic operations are wanted, especially on integral types.

Conclusion

In the previous sections we have outlined how the threads of the C++11 Standard can be used, covering both thread management and the mechanisms used to synchronize data and operations: mutexes, condition variables, futures, promises, packaged tasks and atomic types. As can be seen, using threads from the C++ Standard Library is not difficult, and it relies on essentially the same mechanisms as the threads from the Boost library. However, the complexity increases with the complexity of the code design, which must behave as expected. For a better grasp of the topics above, and to expand your knowledge of the new concepts available in the C++11 Standard, I highly recommend the book by Anthony Williams, C++ Concurrency in Action, and the latest edition of the classic The C++ Standard Library, by Nicolai Josuttis. You will find there not only a breakdown of the topics presented above, but also other new features specific to the C++11 Standard, including techniques for using them in order to perform multithreading programming at an advanced level.



Metrics in Visual Studio 2013


In the last issue's article, we talked about how we can measure software metrics by using Sonar. This is a tool that can be very useful not only to the technical lead, but also to the rest of the team: any team member can very easily check the value of different metrics on Sonar's web interface.

If we use Visual Studio 2013 as a development environment, we will find that we can calculate some of the metrics right from Visual Studio, without having to use other applications or tools. In this article we will look at the metrics we can calculate using only what Visual Studio provides.

Based on the static analysis described below, we can identify possible problems related to:
• Design
• Performance
• Security
• Globalization
• Interoperability
• Duplicated code
• Code that is not being used

Why should we run such a tool?

Such a tool can help us not only detect possible problems in our application, but also reveal the quality of the code we have written. As we will see further on, all the rules and recommendations that Microsoft has regarding code can be found in this tool. Some of the defects discovered by such a tool are difficult to find using unit tests. That is why a tool of this kind can reinforce our confidence that the application we are writing is of good quality.

What metrics can we obtain?

Starting with Visual Studio 2013, all Visual Studio editions (except for Visual Studio Test Professional) offer the possibility to calculate metrics directly. Right from the start we should know that the number of metrics we can calculate using Visual Studio is limited. Unfortunately, we cannot calculate all the metrics available in Sonar, but there are a few Visual Studio extensions which help us calculate other metrics beyond those built in. Visual Studio calculates some of the metrics through Static Code Analysis. This analyzes the code, giving developers data about the project and the code they have written, even before pushing it to source control.


And many other problems. It all depends on the developer's ability to interpret these metrics. A rather interesting thing about this analyzer is that all the rules and recommendations Microsoft has regarding code, code style and the manner in which different classes and methods should be used can be found within it. All these rules are grouped into categories. This way it is extremely easy to identify areas of our application that do not use an API as they should. If you wish to create a specific rule, you will need Visual Studio 2013 Premium or Ultimate. These two versions of Visual Studio allow us to add new rules specific to the project or the company we work for. Once these rules are added, the code analyzer will check whether they are obeyed, and warn us if they are not. Unfortunately, at the moment we can only analyze code written in C#, F#, VB and C/C++. I would have very much liked to be able to analyze JavaScript code as well, in order to see its quality. Some readers might say that this could also be done in older versions of Visual Studio. This is true. What the new version (2013) brings is the possibility to analyze the code without


having to run it. This could also be done, more or less, in Visual Studio 2012.

How do we run this tool?

These tools can be run in different ways: manually, from the "Analyze" menu, as well as automatically. In order to run them automatically, we need to select the option "Enable Code Analysis on Build" for each project we wish to analyze. Another quite interesting option is to activate a TFS policy through which the developer has to run this analyzer before being able to check in on TFS. This option can be activated from the "Check-in Policy" area, where we have to add a new "Code Analysis" type rule. We must be aware that enforcing such a rule does not guarantee that the developer will also read the generated report and take it into account; all it guarantees is that the report is generated. That is why each team should be educated to inspect and analyze these reports when we decide to use such tools. The moment we enforce this rule, we can select which rules must not be violated when there is a check-in on TFS. For instance, one will not be able to check in code that uses an instance of an object implementing IDisposable without also calling the Dispose method. When a developer attempts a check-in with code that violates one of the rules, he will get an error which won't allow him to push the modification to TFS without solving the problem first. In addition, we can also run this tool as part of the build; in order to do this, we have to activate the option from the Build Definition.

What does the Code Analysis tell us?

The result of running this tool is a set of warnings. The most important information that a warning contains is: • Title: the type of warning • Description: a short description of the warning • Category: the category it belongs to • Action: what we can do in order to solve the problem

Each warning allows us to navigate exactly to the code line where the problem is. Not only that, but for each and every warning there is a link to MSDN which explains in detail the cause of the warning and what we can do to eliminate it.

How can we create custom rules?

As I have already said, this can only be done in Visual Studio Premium or Ultimate. In order to do this, we go to "New > File > General > Installed Templates > Code Analysis Rule Set". Once we have a blank rule set, we can specify the different properties we want it to have. Besides this tool, two other extremely interesting tools are available in Visual Studio.

Code Clones

This tool allows us to automatically detect duplicated code. The most interesting thing about it is that it can detect several types of duplicated (cloned) code:
• Exact match: the code is exactly the same, with no differences
• Strong match: the code is similar, but not 100% (for example, it differs in the value of a string or in the action executed in a given case)
• Medium match: the code is fairly similar, but there are a few differences
• Weak match: the code is only vaguely similar; the chances that it is actually duplicated are the smallest

Besides this information, for each duplicated fragment we can also find out in how many locations it is duplicated, and we can navigate to the exact code line where it appears. Another metric which I like quite a lot is the total number of duplicated (cloned) lines; it shows quite plainly how many code lines we could get rid of.

Code Metrics

By means of this tool, we can analyze each project and extract different metrics. Being a tool integrated with Visual Studio, we can navigate each project and see the value of each metric at the level of the project, namespace, class and method. There are 5 metrics that can be analyzed by using Code Metrics:
• Lines of Code: the number of code lines at the level of method, class, namespace or project. It is good to know that, at project level, this metric indicates the total number of code lines the project has.
• Class Coupling: indicates how many classes a class is using; the smaller the value, the better.
• Depth of Inheritance: indicates the inheritance depth of a class; just like class coupling, the smaller the value, the better.
• Cyclomatic Complexity: indicates the complexity level of a class or of a project. We must be careful interpreting it, because if we implement a complex algorithm, then we will always have a rather high value for this metric.
• Maintainability Index: a value between 0 and 100 which indicates how easily the respective code can be maintained. Here a high value is good: 20 and above indicates maintainable code, values between 10 and 20 are borderline and deserve attention, and anything below 10 signals real problems. This metric is calculated from the other metrics.

Conclusion

In this article we have discovered that Visual Studio provides different methods to assess the quality of our code. Some of these tools are available in the normal versions of Visual Studio, others only in the Premium and Ultimate versions. Compared to Sonar, Visual Studio does not allow us to share these metrics through a portal; instead, it lets us export them to Excel in order to send them to the team. The Visual Studio tools are a good start for any team or developer who wishes to assess the quality of the code written by him or by the team.

Radu Vunvulea
Senior Software Engineer @ iQuest



How (NOT TO) measure latency


Latency is defined as the time interval between stimulation and response, and it is a value of importance in many computer systems (financial systems, games, websites, etc.). Hence we, as computer engineers, want to specify some upper bounds / worst-case scenarios for the systems we build. How can we do this?

The days of counting cycles for assembly instructions are long gone (unless you work on embedded systems): there are just too many additional factors to consider (the operating system, mainly the task scheduler, other running processes, the JIT, the GC, etc.). The remaining alternative is empirical (hands-on) testing.

Use percentiles

So we whip out JMeter, configure a load test, take the mean (average) value ± 3 × standard deviation and proudly declare that 99.73% of the users will experience latency within this interval. We are especially proud because (a) we considered a realistic set of calls (URLs, if we are testing a website) and (b) we allowed for JIT warm-up. But we are still very wrong! (Which can be sad if our company writes SLAs based on our numbers: we can bankrupt the company single-handedly!) Let's see where the problem is and how we can fix it before we cause damage. Consider the dataset depicted below (you can get the actual values here to do your own calculations). For simplicity, there are exactly 100 values used in this example. Let's say that they represent the latency of fetching a particular URL. You can immediately tell that the values can be grouped into three distinct categories: very small (perhaps the data was already in the cache?), medium (this is what most users will see) and poor (probably some corner-cases). This is typical for systems of medium-to-large complexity (i.e. "real life" systems composed of many moving parts) and it is called a multimodal distribution. More on this shortly. If we quickly drop these values into LibreOffice Calc and do

the number crunching, we'll come to the conclusion that the average (mean) of the values is 40 and, according to the six sigma rule, 99.73% of the users should experience latencies below 137. If you look at the chart carefully, you'll see that the average (marked with red) is slightly left of the middle. You can also do a simple calculation (because there are exactly 100 values represented) and see that the maximum value in the 99th percentile is 148, not 137. Now this might not seem like a big difference, but it can be the difference between profit and bankruptcy (if you've written an SLA based on this value, for example). Where did we go wrong? Let's look again carefully at the three sigma rule (emphasis added): nearly all values lie within three standard deviations of the mean in a normal distribution. Our problem is that we don't have a normal distribution. We probably have a multimodal distribution (as mentioned earlier), but to be safe we should use ways of interpreting the results which are independent of the nature of the distribution. From this example we can derive a couple of recommendations:
1. Make sure that your test framework / load generator / benchmark isn't the bottleneck: run it against a "null endpoint" (one which doesn't do anything) and ensure that you can get an order of magnitude better numbers.
2. Take into account things like JITing (warm-up periods) and GC if you're testing a JVM-based system (or other systems based on the same principles: .NET, luajit, etc.).
3. Use percentiles. Saying things like "the median (50th percentile) response time of our system is...", "the 99.99th percentile latency is...", "the maximum (100th percentile) latency is..." is OK.
4. Don't calculate the average (mean). Don't use standard deviation. In fact, if you see that value in a test report, you can assume that the people who put together the report (a) don't know what they're talking about or (b) are intentionally trying to mislead you (I would bet on the first, but that's just my optimism speaking).

Look out for coordinated omission

Coordinated omission (a phrase coined by Gil Tene of Azul fame) is a problem which can occur if the test loop looks something like:

start:
    t = time()
    do_request()
    record_time(time() - t)
    wait_until_next_second()
    jump start

That is, we're trying to do one request every second (perhaps every 100 ms would be more realistic, but the point stands). Many test systems (including JMeter and YCSB) have inner loops like this. We run the test and (learning from the previous discussion) report: 85% of the requests will be served under 0.5 seconds at a rate of 1 request per second. And we can still be wrong! Let us look at the diagram below to see why:

On the first line we have our test run (the horizontal axis being time). Let's say that between second 3 and second 6 the system (and hence all requests to it) is blocked (maybe we have a long GC pause). If you calculate the 85th percentile, you'll get 0.5 (hence the claim in the previous paragraph). However, you can see 10 independent clients below, each doing a request in a different second (so our criterion of one request per second is fulfilled). But if we crunch the numbers, we'll see that the actual 85th percentile in this case is 1.5 (three times worse than the original calculation).

Where did we go wrong? The problem is that the test loop and the system under test worked together ("coordinated", hence the name) to hide (omit) the additional requests which should have happened during the time the server was blocked. This leads to underestimating the delays (as shown in the example). Make sure every request completes in less than the sampling interval, or use a better benchmarking tool (I don't know of any which can correct this), or post-process the data with Gil's HdrHistogram library, which contains built-in facilities to account for coordinated omission. This post is part of the Java Advent Calendar and is licensed under the Creative Commons 3.0 Attribution license. If you like it, please spread the word by sharing, tweeting, FB, G+ and so on!

Attila-Mihaly Balazs

Code Wrangler @ Udacity
Trainer @ Tora Trading



Thinking in Kanban


Visual card - this would be the exact meaning of the Japanese word "Kanban", a term widely used nowadays in the world of IT. The meaning we recognize today refers to the software development methodology famous for its simplicity as well as its efficiency.

The first Kanban system was created more than 60 years ago at Toyota, in their effort to excel on the automobile market. At that moment Toyota couldn't compete on technology, marketplace or volume of cars produced, so they chose to compete by redefining their method of organizing and scheduling the production process. The Toyota Production System laid the foundation of Kanban, with the following directions:
• Reducing costs by eliminating waste.
• Creating a work environment that responds quickly to change.
• Facilitating the methods of achieving and assuring quality control.
• Creating a work environment based on mutual trust and support, where employees can reach their maximum potential.
Even if, over the years, IT has reshaped Kanban by assigning new values to it and completing it with metrics and rules, the general effect is still about the ideas expressed back then.

Kanban Principles

Kanban is based on two fundamental principles: 1. Visualize the workflow 2. Limit work in progress

The visual effect is obtained with a Kanban board, where all the tasks are mapped depending on their current state. The states of the board are defined according to the complexity of the project and the number of steps in the process. Tasks are written on colored cards (or sticky notes), which makes them quicker and easier to understand and process. As a result, the current state of the project becomes visible at any time for all team members, and this global view of the state of the project facilitates the rapid discovery of problems or bottlenecks. The basic structure of a Kanban board comes down to three states (columns): To-Do, Ongoing (or In Progress), Done. However, states can be defined according to project-specific needs. A classic example of a Kanban board for software development can be found in the attached figure.

The WIP (Work-In-Progress) limit ensures focused and thus more effective work. The columns of type "In progress" are each assigned such a limit, and the number of ongoing tasks shouldn't exceed it. By reducing multitasking, the time needed to switch between tasks is eliminated. By performing tasks sequentially, results appear faster and the overall time spent becomes shorter. Little's Law connects the relevant quantities: Lead Time = Work in Progress / Throughput.

Metrics in Kanban

In order to estimate as accurately as possible the dimensions of time and workload, we use Kanban metrics. Like the basic principles, the calculation of the metrics is simple.
Lead Time: the time measured from the moment a task is introduced into the system until it is delivered. It is important to remember that Lead-time measures time, not the effort needed to execute a task. Lead-time is the metric most relevant to the client, who will evaluate the team's performance according to it.
Cycle Time: the time measured from the moment work begins on a task until it is delivered. Compared to Lead-time, this is a rather mechanical measure of the process capabilities and reflects the efficiency of the team.

To increase a team's performance, Lead-time and Cycle-time should be reduced. To improve Cycle-time there are two possible options:
• reducing the number of tasks in progress
• improving the completion rate of tasks

By reducing the Lead and Cycle-time metrics, the development team can assure



on-time delivery of the product.

Flexibility in Kanban

A Kanban board is completely configurable according to the purpose it serves, a quality that enables the methodology to be used in a variety of areas. With roots in production and manufacturing, it fits naturally into any non-IT business process. A Kanban board can be configured according to the domain and the stages needed to reach the final product or service. Apart from IT, here are some areas where a Kanban system can easily be integrated:
• Marketing and PR
• Human Resources
• Logistics and Supply Chain
• Financial
• Legal
The short and long term benefits are similar to those mentioned in the IT field: better visibility of the workflow, increased productivity and improved team collaboration.

Types of Kanban

Physical board. Online board.
Initially, the notion of a Kanban board was quite simple: a board or an improvised space on the wall where cards or sticky notes with tasks written on them were pinned. The board was in the room where the team members worked and represented their focal point. The concept of a physical board is still very popular and considered an excellent opportunity to improve collaboration and communication among team members. However, the increased interest in the Kanban methodology has inspired a number of programs and online tools offering functions similar to those of a physical board. Moreover, they have a number of additional possibilities and advantages: easy configuration, archiving tasks, editing,

classification, timers, remote access, collaboration between teams split across several locations, etc.

Time-driven & Event-driven Kanban
I have talked before about flexibility and about the way we can shape a Kanban structure according to the specific needs of a project. Over the years, a few general structures have proved elementary in the use of the methodology.
Time-driven Kanban Board: used for the temporal planning of activities.
Event-driven Kanban Board: useful when an external intervention (e.g. an approval) is needed to continue the execution of tasks in the process.

Personal Kanban
In 2011, Jim Benson first introduced the idea that Kanban is perfect for organizing personal time, tasks and activities. The concept became increasingly popular, and Personal Kanban users say that the method really works. In these cases we encounter boards with a simplified process and many visual effects. Ultimately, the purpose of Personal Kanban is to improve personal productivity and facilitate the achievement of long-term goals.

Scrumban. PomodoroBan.
The suggestive names already indicate combinations of Kanban with other methods and techniques. Scrumban is the combination of Scrum and Kanban. Briefly, it means applying the principles and rules of the two methodologies according to the preference of the team: "making the most of both". PomodoroBan combines the Pomodoro technique and Kanban. According to the Pomodoro technique, efficiency can be achieved by alternating two cycles: 25 minutes of focused, uninterrupted work, then a 5-minute break. PomodoroBan keeps and applies the principles of Kanban but focuses on this additional efficiency.

Conclusions
Whether we apply Kanban in IT, related domains or even personal life, it seems that "less is more" holds every time. Kanban is distinguished by its ease of application to any process, the simplicity of its basic principles and its fast, improving effect on quality and work processes.

References
Jim Benson: Personal Kanban (2011)

Püsök Orsolya
Functional Architect @ Evoline



New business development analysis


It's a pleasure to watch, as an observer, how businesses grow. Still, few stop to think about the roadmap to success: where it all began and the first steps that led there.

Today we know for a fact that before starting a new project, no matter the industry, a plan is needed. In most fields, the business plan is the most common form. More recently, in the software industry, the term POC has appeared. According to Wikipedia, a POC (proof of concept) is the realization of a method or idea in order to demonstrate its feasibility. In some cases it takes the form of a prototype which provides information about the future of the project, identifies needs, outlines the main characteristics and presents risks. The POC concept quickly gained trust in engineering and the software industry and is appreciated both by the client and the contractor, as it allows the stakeholders to roughly estimate the effort, what needs to be done and what resources are needed. A POC involves a mix of strategies covering characteristics, price, market, branding and business model. As stated by Marius Ghenea in an interview during Business Days, the businesses he invests his money in must be validated in the market: "It is essential to have a proof-of-concept, which means a certain validation that can be done in the beginning, before the product or service is put on the market. This validation can be made with focus groups, beta-testers, pilot projects, limited commercial tests and other market tests. If the market does


not validate the business concept, we do not have good premises for the future business, even if the business idea seems spectacular and unique." In a software project there is frequently a need to demonstrate the functionality before starting the development itself. Moreover, it is no surprise that more and more investors and business angels like the idea of a POC instead of a classic business plan. Wayne Sutton, blogger at The Wall Street Journal, claims the same in his article 'Don't Need No Stinking Business Plan'. The argument is that starting a new business is much easier than 15 years ago; he compares the business plan with the waterfall method of software development, which is quite obsolete. Everything must be fast, 'lean', learning all the time from the clients you serve. A true entrepreneur should always involve his clients and make use of his experience in

no. 20/February, 2014 |

everyday life as much as possible. The author refers exclusively to start-ups in the IT industry; classic businesses are put aside. Since the business environment in Romania is still developing and we are moving forward with baby steps, the business plan still remains an important point of reference for young entrepreneurs. This being said, I would mention some important documents for a tech start-up:
Use Cases - who the customers are and how they use the product / service
Sales Plan - what, how, where and who will sell your product / service
Human Resources - ensuring business continuity even if people leave the firm
Cash Flow - how much money is needed and when


Another issue worth mentioning is something I have noticed about young entrepreneurs. At first, it is hard for them to distinguish between the core business (main activity) and the additional or support activities they need to perform. In a start-up it is hard to determine the main functions a system must satisfy versus the additional ones. However, it is vital for the success of the business to make a list of all the possible features of the future system, then prioritize them and divide them into primary and secondary features. All the features, encapsulated as use cases, help the entrepreneur stick to his initial plan. I will continue with use cases, as I consider them relevant to tech start-ups. Use cases are a way of using a system to achieve a specific goal for a specific user. For example, a user logged on Amazon wants to be able to pay by credit card whenever he buys something. This can be seen as a goal that needs to be reached: as a logged-in user on Amazon, I want to pay with

my credit card so that I can buy whatever I want. The sum of such goals makes up a set of use cases that shows the ways in which a system can be used, as well as the business value for the customers. The book ‘Use-Case 2.0: The Guide to Succeeding with Use Cases’ by Ivar Jacobson, Ian Spence and Kurt Bittner presents use-case driven development in a very accessible and practical way. Use cases can be used in the development of new businesses, in which case all the business goals are associated with the system. The system implements the requirements and is the subject of the use-case model. The quality and level of completion of the system are verified by a set of tests, designed to check the implementation of the use-case slices and whether each use case was realized successfully. Going into further detail about how to apply use cases, I think everyone is already familiar with the concept of user stories. These ‘stories’ are the link between stakeholders, use cases and parts of use cases. This way of communicating the requirements for the new system is widely used because it helps identify the basic parts that have to be implemented for a minimally functional system. When describing a business idea, the best way to tell others what it is about begins with: ‘I want to make an application which allows users to call a taxi from their smartphones, through an application that can be installed on Android and iOS’.

I conclude here my short introspection into new business development analysis by mentioning that, without a clear idea and a structured plan, the odds of having a successful project are limited. Whether you start from a classic business plan, make use of a POC or reach a prioritized list of use cases for the new system, every step must be planned carefully. Every entrepreneur has to act responsibly so as to avoid all known risks.
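The advice above - list all the possible features, prioritize them, and split them into primary and secondary ones, expressed as user stories - can be sketched in code. The following is a minimal illustration, not taken from the article; all names (`UserStory`, `split_features`) and the numeric priority scheme are hypothetical choices for the sketch:

```python
# Minimal sketch of a user-story backlog split into primary and secondary
# features. Priority 1 marks a must-have (primary); anything higher is
# secondary. These names and conventions are illustrative, not prescriptive.
from dataclasses import dataclass


@dataclass
class UserStory:
    actor: str       # who the user is, e.g. "logged-in user"
    goal: str        # what they want, e.g. "to pay with my credit card"
    benefit: str     # why they want it, e.g. "I can buy whatever I want"
    priority: int    # 1 = must-have (primary feature), >1 = secondary

    def as_sentence(self) -> str:
        # Render the story in the classic "As a..., I want..., so that..." form.
        return f"As a {self.actor}, I want {self.goal} so that {self.benefit}."


def split_features(stories):
    """Divide a backlog into primary and secondary features by priority."""
    ordered = sorted(stories, key=lambda s: s.priority)
    primary = [s for s in ordered if s.priority == 1]
    secondary = [s for s in ordered if s.priority > 1]
    return primary, secondary


backlog = [
    UserStory("logged-in user", "to pay with my credit card",
              "I can buy whatever I want", 1),
    UserStory("visitor", "to browse products without an account",
              "I can evaluate the store first", 2),
]

primary, secondary = split_features(backlog)
print(primary[0].as_sentence())
```

Keeping the stories as data rather than prose makes the prioritization explicit, which is exactly what helps an entrepreneur separate the core business from the supporting features.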

References
1. embrace-the-executive-summary/
2. Use-Case 2.0: The Guide to Succeeding with Use Cases - Ivar Jacobson, Ian Spence, Kurt Bittner
3. id_30930/In/ce/investesc/antreprenorii/care/au/adoptat/o/cariera/de/business/angel/in/Romania.html#ixzz2pwAZCe5

Ioana Armean Project Manager @ Ogradamea



Business pitching or how to sell in 4 simple steps


Has it ever happened to you to attend a great event or dinner party, with great people you wanted to talk to, but you had no clue how to break the ice? Or you just wanted to exchange some ideas and connect with some of them, but it seemed like the words weren’t there to make you feel at ease?

Most of the time, one has only 60 seconds to talk about what he does best in his business or his job, in front of partners or co-workers. A pitch is a speech of, generally, no longer than 30 seconds to 3 minutes, delivered in an apparently spontaneous manner, which allows a person to promote his/her competencies and services, as well as the added value that he/she brings through them. Although the pitching concept has only recently begun to be promoted in the business environment, e.g. in the startup ecosystem, as we can see it happening in the USA, in Europe and even in Cluj, Romania, it describes a reality which is much older than that. Ever since prehistoric times, the leaders of wandering peoples conquered new lands and took ownership of them through short proclamation speeches. Actually, the very simple decision of choosing the mother tongue for daily interactions over the newly conquered lands was the basis of today’s business pitches. The peace messengers and the religious missionaries were also predecessors of today’s pitchers, through the messages they brought to new peoples, spreading the word or even persuading through it.

Why would anyone deliver a pitch? Well, this ability is inborn; they say the most gifted salesmen are... children. It would be very difficult for us to give up this instinctive tool for attaining our objectives, be it in personal or professional environments. From another point of view, we have the need to socialize and to get connected, and for this we need persuasion skills just as much as we need money. On top of everything else, beyond any other reason for


pitching, it’s a conversation starter. „A pitch is not used only to convince the other to adopt your ideas, but to offer something so attractive that it determines the start of a conversation.” (Daniel H. Pink, To Sell is Human) If you take a look around you, at the social circles you attend, or even at your past experiences, personal or professional, you will surely notice that in numerous contexts you have already used a few introductory words with an explanatory, descriptive or motivating role. What I am trying to say is that we pitch in every area and at any moment of our lives, be it for earning more pocket money in childhood, for a salary raise, or to win an angel investor or an investment for our startup business.

Now that we are calibrated on why one would use pitching in daily life, the next step is to go through the structure of a pitch and learn how to create one yourself. Firstly, a pitch should not have more than four high-level objectives, and thus at most four long phrases. To start with, it is recommended to say your name and a key role or characteristic that you have in your business, project or team. The name is unique and offers identity and authenticity to your role, profession or the business you are developing. The key role is also critical, because it gives authority to you and your presentation and makes people believe in you and in what you have to say next. Through the characteristic that you choose to present as the key descriptor at the beginning of the speech, be it CEO of your own company, expert developer for 7 years, a great lover of medieval reading, whichever of these you might choose or


any other, the key characteristic needs to be in line with your project and area of expertise, and it will determine the audience to listen to you fully. Afterwards, in the second part of your pitch, its second phrase, you will have to say what it is that you actually do in your project and how you bring added value through it. There is a very good example from Phil Libin, which I recommend to you too: „[I am Phil Libin, the CEO of Evernote.] Evernote is your external brain.” Phil delivers the crisp idea of added value through a very powerful metaphor. The purely advertising objective of any pitch can be found here, in its second part, and also at the end, in the call to action. Therefore, your preparation and creative effort should be mostly focused on these two critical parts of the speech.

1. Who I am and a key characteristic. 2. What I do and how I bring added value through my product. 3. How I differentiate myself from the competition. 4. What my objectives are on the short or medium term. Why do I deliver this pitch? Call to action.

Furthermore, in the third part of the speech, you need to say how you are different from your competition. There is a high chance that what you offer on the market through your product or service, or as a professional through your competencies, is already being offered by someone else or by some other organization. Therefore, you need to highlight, subtly and shortly, how you are different. Depending on the duration of your speech, this part can be more concise than the others, in order to adapt to the rhythm and conciseness of your presentation. Many times it is not even necessary to name the organizations, startups or experts with whom you share the market; you only need to express powerfully what it is that you do best: we are not only offering an online „cloud” data storage platform, it is also „your external brain”. Finally, I recommend you name the objectives that you have on the short term (no more than 1-2 of them), which will actually give the flavour to your pitch, directing the audience into its specifics and into the reason for which you have decided to interact with them. For example, at the startup competition you will be attending, there will be pre-selection and selection stages that will require you to pitch your idea over and over again. The goals of the repeated pitches might differ: to get new members into your team, depending on their abilities and expertise (online marketing, programming, copywriting etc.), to get new investments for your business, to promote the team or the business idea, or to get your team into an accelerator or business incubator. Actually, through the conclusion of your pitch, you will show others how they can help you, what you want to get from them and also what stage you are at with your business.
Among the unwritten rules of persuasive speaking, which you can practice excellently in any Toastmasters Club, I can recommend the use of powerful words, like verbs that call to action. “This is who we are, come join our team!”, „If you are an iOS developer and you want to

become famous, come join us!”, „Let’s make the world a better place through an online platform meant to improve the quality of your relationships, so please support our crowdfunding campaign.” If I have already convinced you that absolutely everyone needs pitching, if you have settled on the reason why you would need it yourself, and if you have realized that you have already used a pitch, formally or informally, several times without knowing it was called this way, the only step left is to expect and prepare for pitching opportunities. This type of business speech should be spontaneous, but practiced nevertheless. If you are looking for partners to implement your business idea, or for a new job and are getting ready for interviews, it is definitely worth investing one hour of your time to build the basis of a great customized pitch for yourself. You can write it down on paper, record it with a video camera or a smartphone, or ask a friend to watch you, in order to identify the words, tone and content that promote you best. Afterwards, you only need to look for advertising opportunities, networking events, job fairs or celebration times in your life or job; you enjoy the atmosphere and you answer a very simple question: „What is it that you do?” Each of us may need several pitches, one for every project or role we are involved in. This way, the speech can be easily adapted to the audience, through a change of perspective and language. Technically speaking, what changes is the key role, the name of the activity and the added value, the way you differentiate yourself, and the call to action. If, instead, you speak about the same business or project and the only thing that differs is the stage you are in at the moment, then you will only change the call to action.
At Startup Weekend, you might be looking for teammates, while at Le Web you might already be looking for angel investors or new investment rounds, after having launched the minimum viable product of your business.

It was a real challenge for me to create my first pitch. In the beginning, I did not feel like myself, writing down and rehearsing who I am and what I want from others, just like for a school contest. It was a psychological and motivational challenge. Nevertheless, the benefits of such an effort have been higher than the costs, and so I learned to sell my ideas with enthusiasm and determination. Set your own goal of answering, in an excellent manner, one that would make you proud, the question „What is it that you do?” over the following 3 months, and you will see pitching become your second nature. It will become a habit that will bring you money, friends, results, partners or, at the very least, plenty of exciting conversations.


Ana-Loredana Pascaru Training Manager @ Genpact



Gogu and the Ship “Tell me again, dad, what do you do at work?” I muck about, was the thought that crossed Gogu’s mind, but he refrained from saying it aloud. He was annoyed at not having managed, so far, to offer an answer the child could understand; nor did he have any idea what explanation to give. Oh, brother… It isn’t easy to manage a project, but it looks like it’s even harder to explain how you do it. He grumbled: “I am in charge of projects; that’s what I do. But why are you asking again?” “Our teacher told us to invite our parents to tell us what they do at work, to give us an example. And she also told us to ask them to come dressed in their uniform… if they have one”, he quickly added upon seeing Gogu’s grimace. “Mircea’s dad is a firefighter, so he can come in his uniform, and Maria’s mom is a doctor and she has an overall, and Danut’s dad is a cop…” “I see…” – said Gogu – “and what would you like me to wear?” The child looked at him with his big eyes; one could see that he was intensely wondering whether he had ever seen his dad in some sort of special attire. He probably found nothing in his memory, ‘cause he said nothing more, he just remained staring at his father, waiting for a verdict. And so, the plot thickens, thought Gogu. Instead of explaining to a single child, now I have to explain to an entire class. And I don’t even have a uniform… He reviewed the child’s list: firefighter, doctor, policeman. How can you compete with them? They surely have stories to tell, spectacular situations, each more impressive than the last. Plus the uniform… The child’s voice broke the daydream: “And what do they call you?” Superman. Or Wonderwoman, respectively. No, he didn’t say this aloud. Instead, he asked: “When did you say this meeting of yours with the parents is?” “Anytime next week, provided we let the teacher know in time. This is what she said.” This is what the teacher said, Gogu mocked the child to himself. There goes my chance to escape.
What excuse can you find for an entire week?! Yes, this was something serious. “Let me think about it and I’ll tell you when I can come so that you can let your teacher know. And I will also think about what I’m going to say, so as not to embarrass you. Ok?” The child’s eyes glittered, he smiled with satisfaction, kissed his dad and, clearly relieved, uttered above his shoulder while exiting the room: “Yes, dad, very well. I’m counting on you.” Firefighter, doctor, policeman. For the first time Gogu thought he had a dull job. No, I don’t, he came back to his senses. ‘Cause I like what I do. It’s just that I don’t have a uniform, he added with a bit of sorrow. But it is not the coat that makes the man. And then, I really am a kind of superman sometimes… He found himself smiling at the thought of some tights and a red cape hung over his shoulders, but he banished the hilarious image to concentrate on the speech.



*** Gogu was sweating and his hands were cold. He, who used to unblinkingly deliver presentations for the management or for the client, was fearful at the thought of talking to some kids, his son’s classmates. He realized he wanted to make an impression on them more than on any potential client. All eyes were fastened on Gogu. He looked around for his son, saw him and smiled at him. Then, he focused on him and began: “My son told me that until now you have heard about the jobs of a mechanic, a firefighter, a doctor and a policeman. Is that true?” “And the pharmacist!” added a little girl from the back of the classroom. Another one with a uniform, Gogu couldn’t help thinking; you couldn’t find an engineer… But he knew exactly what he was going to tell the kids, so he no longer hesitated and went on, taking out of his bag a thick rubbery raincoat, which was quite shabby and smelled of salt water: “I am a project manager.” He put the raincoat on, made a theatrical pause, laughed to himself and continued: “And I’ve got the raincoat of a sea captain. Because this is our role - that of a captain, except that it is on land. We coordinate the team that successfully brings the ship to its destination. From the moment we take charge of the ship, we decide whether we are going to use the engine or the sails, and we coordinate the team to keep the compass course, to plan the trip, to use the navigation tools, to hoist or lower the sails. Everyone in the team knows their role, but the captain is the one who helps them synchronize, who interprets the weather forecast, monitors the position on the GPS, decides if they anchor for the night or continue on their way. In nice weather or under a storm, my job is to keep the team safe and to make sure their work is not affected by external factors. 
A little fair-haired girl with pigtails, sitting in the first row, suddenly put her hand up, but didn’t wait for Gogu’s approval: “Sir, but who is in your team?” “Anyone can be in my team: mechanics, firefighters, doctors, policemen… even pharmacists.” “And the raincoat, what do you need it for?”

Simona Bonghez, Ph.D. Speaker, trainer and consultant in project management, Owner of Colors in Projects

