
No. 21 • March 2014



A JavaScript Logging Library • HipHop Virtual Machine • AOP using .NET stack • The Developers of Mobile Applications and the Personal Data • Discipline in Agile projects • Tick Tock on Beanstalkd Message Queues • PM in Agile • Interview with Csaba Suket, Director of Technology @ Betfair România

How to make the machine write the boring code • Our Fight against Technical Debt • Agility over Agile • Autoencoders • Phases and processes in the structure of a project

6 Cluj Startup Weekend 2014 Marius Mornea

8 Interview with Csaba Suket Director of Technology @ Betfair România Ovidiu Mățan

10 ClujIT Cluster: Interdisciplinary Innovation and advanced IT solutions Paulina Mitrea

13 Our Fight against Technical Debt Septimiu Mitu and Daniel Ionescu

15 PM in Agile Ciprian Ciplea

19 Agility over Agile Alexandru Bolboaca and Adrian Bolboacă

21 Phases and processes in the structure of a project Augustin Onaciu

23 Tick Tock on Beanstalkd Message Queues Tudor Mărghidanu

26 Back to the Future: HTTP 2.0 Rareș Rusu

29 A JavaScript Logging Library for Productive Programmers Bogdan Cornianu

32 How to make the machine write the boring code Dénes Botond

35 The HipHop Virtual Machine Radu Murzea

39 Autoencoders Roland Szabo

41 AOP using .NET stack Radu Vunvulea

44 Discipline in Agile projects Mihai Buhai

46 The Developers of Mobile Applications and the Personal Data Claudia Jelea

48 The root of all evil in software development Cluj Business Analysts

52 Impact Training Monica Soare

54 MWC 2014: Smartphones, Tablets, Phablets and Smartwatches Diana Ciorbă

56 Gogu and the justification of action Simona Bonghez, Ph.D.



Ovidiu Măţan, PMP Editor-in-chief Today Software Magazine

Number 21 of Today Software Magazine has project management as its general theme. This theme was not randomly chosen, but picked to be in keeping with ...even mammoths can be Agile, an event entirely dedicated to the project management domain. It is now at its fifth edition and it will take place in Cluj. Today Software Magazine has manifested its support and involvement in the organization of this event, acknowledging its undeniable efficiency in conveying and promoting the basic principles of this complex universe which is project management.

The ampleness of this domain often generates the risk of overly personal interpretations, which lead in the undesired direction of failure. Project management is often understood as a way to promote technical leaders who have distinguished themselves by their results. Unfortunately, through this endeavour of turning PM into a promotion title, one may fall into the trap of ignoring the managerial side. There is the risk that the person invested with the new role might not prove the same abilities and higher-level skills in the domain of management, too. Beyond the ideal formula of the technical expert doubled by a good manager, which the project manager aspires to, it is important to be given the opportunity to highlight and develop the skills of a technical leader as well as those of a manager. The correct strategy in such cases is the existence of two separate directions of career evolution.

At the end of 2013, the Project Management Institute published the awards offered for the best projects. The winner was the Adelaide Desalination Project from Australia. Launched in 2008 to reduce the risk of running out of water in the South of Australia, the project was built to cover 25% of the water demand. Afterwards, the team was required to double the quantity of delivered water. Under these conditions, the project management team succeeded in completing the project 19 days before the deadline, having exceeded the budget by only 1%.
Another award was given to the construction of the biggest highway in Utah, USA. The initial value of the project was 1 billion dollars, and they managed to complete it saving 260 million dollars, two years earlier and delivering 10 extra miles. By these two projects I wanted to point out the advantages of quality management and the real value that it can add. From this perspective, I recommend a PMP or a PRINCE2 certification to all project managers, in terms of standardizing the processes and the project structure. Of course, we will discuss all these in detail during the event ...even mammoths can be Agile.

Here is a short presentation of the articles in this issue. Startup Weekend Cluj is a recent event, whose presentation we wanted to offer you from the perspective of the person who built the winning team, Antonia Onaca. Still in the area of events, we mention the article on the Barcelona Mobile World Congress, which presents the main tablet and phone launches that took place. We continue with the interview given by Csaba Suket for our magazine. He talks about the main challenges that emerge from being the biggest online gambling developer. The following articles: Our Fight against Technical Debt, PM in Agile, Agility over Agile, and Phases and Processes in the Structure of a Project, are a first series of articles dedicated to the project management theme. Among the articles with a technical theme, all distinguished by interesting subjects, we offer you a few titles such as: Tick Tock on Beanstalkd Message Queues, Back to the Future: HTTP 2.0, and A JavaScript Logging Library for Productive Programmers. The HipHop Virtual Machine presents in detail the evolution of the virtual machine used by Facebook, and Autoencoders continues the series of articles in the area of neural networks and artificial intelligence.

Enjoy your reading!
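As a back-of-the-envelope check of the figures quoted for the two award-winning projects, cost variance is a simple ratio. The helper below is purely illustrative (it is not part of any PMI standard or of the article), using only the numbers stated above:

```python
def cost_variance_pct(budget, actual):
    """Percentage deviation of actual cost from budget (positive = overrun)."""
    return (actual - budget) / budget * 100.0

# Utah highway: $1B planned, $260M saved, i.e. delivered 26% under budget.
utah = cost_variance_pct(1_000_000_000, 1_000_000_000 - 260_000_000)
print(utah)  # -26.0
```

The Adelaide project's "1% over budget" corresponds to a variance of +1.0 under the same formula.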

Ovidiu Măţan

Founder of Today Software Magazine


no. 21/March |

Editorial Staff
Editor-in-chief: Ovidiu Mățan
Editor (startups & interviews): Marius Mornea
Graphic designer: Dan Hădărău
Copyright/Proofreader: Emilia Toma
Translator: Roxana Elena
Reviewers: Tavi Bolog, Adrian Lupei
Accountant: Delia Coman

Made by Today Software Solutions SRL
str. Plopilor, nr. 75/77, Cluj-Napoca, Cluj, Romania
ISSN 2285 – 3502
ISSN-L 2284 – 8207

Copyright Today Software Magazine. Any total or partial reproduction of these trademarks or logos, alone or integrated with other elements, without the express permission of the publisher, is prohibited and engages the responsibility of the user as defined by the Intellectual Property Code.

Authors list
Alexandru Bolboaca, Agile Coach and Trainer, with a focus on technical practices @ Mozaic Works
Claudia Jelea, Attorney @ IP Boutique
Radu Vunvulea, Senior Software Engineer @ iQuest
Roland Szabo, Junior Python Developer @ 3 Pillar Global
Diana Ciorba, Marketing Manager @ Codespring
Monica Soare, Manager @ Artwin
Cluj Business Analysts: Mădălina Crișan (Business Analyst), Monica Petraru (Product Manager), Cătălin Anghel (Business Analyst)
Bogdan Cornianu, Java developer @ 3Pillar Global
Marius Mornea, Project manager @ Epistemio
Rareș Rusu, Software Engineer @ Betfair România
Mihai Buhai, Delivery Manager @ Yonder
Septimiu Mitu, Development Lead @ Endava
Tudor Mărghidanu, Software Architect @ Yardi România
Radu Murzea, PHP Developer @ Pentalog
Simona Bonghez, Ph.D., Speaker, trainer and consultant in project management, Owner of Colors in Projects
Augustin Onaciu, Project Manager @ Fortech
Paulina Mitrea, Coordinator of the Innovation group @ ClujIT Cluster
Dénes Botond, Software engineer @ Accenture Romania
Daniel Ionescu, Scrum Master @ Endava
Ciprian Ciplea, Project manager @ ISDC



Cluj Startup Weekend 2014


We were once again glad to be a part of StartupWeekend Cluj and we would like to congratulate the team of organizers for the clear signs of maturity. All throughout the event I had a clear feeling that everything was on schedule, and you could feel it in the attitude of the attendees. Compared to last year, when there was a clear leap in participant count, this year the same number of attendees managed to pitch 49 ideas, almost as many as the combined amount of the previous editions (2012 + 2013: 59 pitches). This proves that the audience has matured as well, and that most of the attendees came to the event with a very precise agenda. Beyond the discreet and efficient orchestration, I would also like to underline two important merits of the organizing team. First, the audience expectations were so well set that participants came fully informed and ready to work, validating the pre-event awareness campaign efforts and evoking the success of the previous editions in creating startups and consolidating the local startup ecosystem. Second, the punctuality: especially considering the 49 pitches on Friday which, I can't imagine how, didn't upset the schedule at all, proving the full extent of the maturity and experience of the organizers. To better understand the experience of a participant, we invite you to explore the following insights from the winning team, Engagement Management, formed by: Dragoș Andronic, Emil Vădana, Ionuț Radu, Călin Vingan, Marius Mocian, Horațiu Dumbrean, Oana Vezentan and, responding to our questions below, Antonia Onaca.


When did you get the idea and when did you decide to use it at StartupWeekend?
The engagement topic has been around in my current work for a while now, and I think it's something on the radar of anyone who works for at least a minute with another person. The main issue that most organizations raise, especially in the knowledge economy, is that performance management methods don't work and sometimes do more harm than good. Frequently one faces the question: how to reward cool people to keep on being cool (since most perks don't work), and how to get the others to join in. According to organizational psychology research (a massive research field) the answer is: employee engagement. The premise is that employees want to do a good job; another premise is that people know best where they can perform and shine; frequent exploration requires embracing failure (the Innovator's Dilemma); people will be more involved in and responsible for things they decided to act on; in the knowledge economy we can't define the expected outcome


up front (like in waterfall), but rather continuously create it; people expect their leaders to reward effort, not just the results (sometimes we do a lot of exploring, with little to show, but not without a lot of work); thus the leaders are responsible for channeling the efforts and helping each team member grow. To answer your question, the idea has teased me for many years as an existing need, but presented itself as a solution for organizational development (consultancy) in the last 6 months, and as an actual product only while pondering what to do at SWCluj. The organizers have presented this event (at least to me) as an opportunity to do good and relevant things. They've set very high standards for the attendees and I wanted to meet them.

How did you prepare for the event?
Before the event I had no idea how an actual product would look. I knew the market was in need of solutions to increase employee engagement, therefore I researched the concept, how it is currently being approached and which methods, validated by existing organizational solutions, influence it. I was also aware, due to my consultancy work, that any implementation of the concept would require no overhead: because it's hard to move people away from their workspace, it should be integrated into their email client; most organizational initiatives die out due to steep learning curves. I've also tried talking to people in different organizations to see how satisfied they are with their current solutions. I've found out that most of them use variations on traditional performance management processes, but they don't really find value in them and consider them difficult to implement. Most of them have given considerable thought to employee engagement and are already trying to influence it with different organizational solutions. Before the event I had millions of ideas, concepts and pieces of information that, once at the event, aligned and transformed into a realistic vision.

Taking into account the importance of the team, how many members did you attract and what was your strategy?
We were 8 cool team members. We were all acquainted from different contexts and I think the existence of a previous relationship was a determining factor. I think each of us was at some point upset with the existing performance management systems and thus saw the value behind our idea (as possible beneficiaries). Furthermore, there was a certain amount of respect and fondness circling around between us prior to the event, and it certainly helped. We knew the event would be a cool experience and we each sought that in the two days.

How many will keep working on the project?
We liked each other very much and we worked together exceptionally (both fun and results), each one of us engaged (to validate the tool before building it).
After SWCluj each will go back to our previous projects, to fulfill our standing engagements, but we settled to meet again after a few days of rest to celebrate and figure out a way for the future. For this we will probably write a new TSM article.

What was the best advice received at SWCluj?
One thing I really liked about SWCluj was the fact that both the coaches and the jury didn't give advice and expect compliance. It was a really nice experience to see them try to understand what we want, what we're trying to solve and how. The coaches were extraordinary in asking us the best questions to get us thinking, to help us push our ideas further, and really challenge or nourish them (constructive criticism). I think simple advice would have lacked impact. They actually did a great job at teaching us how to think. Which is a really good strategy, since a piece of advice you get once, and most often forget, but when someone activates a thought process, it sticks even after the event, when you really need it and there are no mentors around to help.

What are the biggest accomplishments of the weekend?
First place would be the validation of the idea. Each member of the team has prior entrepreneurial experience and we were aware that it is the most important thing. Second comes the access to real people's experience. I know we each need to make our own mistakes, but being able to learn from the real-life experience of others is actually priceless, even if only to make room for new mistakes. Unfortunately, life is too short to be able to make all the possible mistakes and learn from them. The third accomplishment was the fun of it. We really had a great time. And the credit goes to the organizing team, who was really wise in the way it created our experience. Because it was an experience and not an event, and it set every one of the participants on steroids. Based on my knowledge, it would have taken us at least one year to learn all of the things we learned during the weekend. This is what the organizers did very well: they condensed a year's worth of entrepreneurial life into 24 hours.

What prizes did you win?
So many prizes, so relevant for a startup. They are already listed on the website, and mostly consist of access to contexts in which both the idea and the work on it can be significantly accelerated, matched with tools to make our product become a reality. The prizes are well thought out to get you started in turning your idea into a product that would improve the world, at least a little. But besides the ones on the list, there were many others: contacts that want to and will support you in building your product, a lot of encouragement, essential in the inception phase, when all you've got is emotional kerosene, and the fact that you become a part of a bigger group that helps you face the challenges brought by new beginnings.

Any advice for future participants and a recipe/ingredient of success?
There's one piece of advice that comes from entrepreneurial experience, but found its place at SWCluj: don't rely on intuition, but rather make sure that there's a real and documented need. We often think our solution is really useful, because we relate to it, but we need to check whose problem we solve and whose life we make better, and even if that person wants that particular problem solved. Furthermore, read the existing related research and see if your assumptions are real. There are incredible amounts of already relevant research done and easily retrieved through or Even if the IT industry is innovative and disruptive, the problems it solves are human problems, and in these matters there's already a lot of useful know-how.

Another piece of advice is to have sincere fun at the event. It's incredible how many cool things are born out of fun and happiness (zenQ actually built a nice product on this concept). When you're happy, as they mentioned in their pitch, you're more productive, you think better, find cooler solutions, collaborate better, and the whole experience is a reward in itself.

Marius Mornea Project manager @ Epistemio



Interview with Csaba Suket Director of Technology @ Betfair România


Part of the British Betfair Group, Betfair Romania is the company's largest Development Centre. Based in Cluj, it employs over 250 people skilled in a wide range of programming languages, business analysis, information security and programme management. The Cluj centre's work is focused on 5 main streams: Platform Development, e-Commerce, Gaming, Product and Enterprise Data Services. The innovative technology at the core of Betfair products allows the processing of over seven million transactions a day, meaning more transactions than all of Europe's stock exchanges combined; 99.9% of these transactions are processed in under a second. Considering these challenges, Csaba-Attila Suket, Director of Technology at Betfair Romania, has answered a few questions for us.

Hello, Csaba. Could you please tell us a few words about yourself?
Hi, Ovidiu. In short, I am 33 years old. I studied Computer Science in Cluj, at the Babes-Bolyai University. I started at Betfair in 2008 and had different roles, beginning with Engineering Manager and progressing to the Director of Technology role. Given that I started my career as a database developer, I feel closer to Oracle and the data space, but at the same time I am a fan of SOA, Cloud, distributed systems and new languages such as Go, and of Agile/Lean; Betfair Romania is amongst the pioneers of Scrum in Cluj. If I were to name one guiding principle for both my career and day to day life, it would be: "think simple and positive, believe and achieve". My hobbies are hockey and Eastern philosophies.

Csaba Suket

At the beginning, we mentioned the processing of over seven million transactions a day. What technologies is the Betfair platform based on, if we are referring to the server and the database components?
During the last few years, Betfair's Technology component has gone through several technological and organizational transformations in order to reach a Delivery Centric model. At the moment, our teams provide end-to-end delivery for most of the Betfair platform, from the specification stage, design, implementation and testing, to release and the operational process. From an architectural point of view we believe in SOA: our middle tier is based on Java technologies; we are using Oracle besides other NoSQL systems for data storage. All this is exposed through our web applications using JavaScript, HTML, CSS and templating (FreeMarker). For these systems we are using a variety of internal frameworks based on Jetty and Spring (Core, Data Access, AOP, Batch), and JMS where necessary. For build and release we are using different tools such as Maven, Jenkins, Sonar, Nexus and Chef. On the most important flows, we make use of caching systems and load balancing. The quality of our applications is very important to us; to be able to deal with such a load we are using Mockito, TestNG, JUnit and Selenium. Last but not least, the implementation of the DevOps concept provides us with increased autonomy on the delivery side and ensures a prompt reaction to operational emergencies.

What is the development strategy from the technological point of view?
At the moment, we are following several strategic directions, some of which are Cloud, new technologies and programming languages, and architectural evolution based on efficiency and ease in operating our products and components.

How do you see the evolution of Betfair Romania during the next year?
During the next year, our goal is to continue to invest in the quality of the products and the projects we deliver. Betfair Romania is a solid and powerful business from all points of view; we are well positioned as an employer due to the technologies, opportunities and benefits we offer. In the medium term, we will focus on increasing efficiency by continuing to offer opportunities for development and improvement at all levels for our employees. At the same time, we will keep focusing on the organizational culture of our company.

For a more complete view, can you tell us what was the main achievement of the last year, from a technical point of view?
It was a wonderful year, full of accomplishments, when we literally raced at a very high speed and our teams constantly delivered at a pace of about 40 releases into production every month. This allowed us to improve, starting from the architectural level, down to processes (operational, Scrum, development and testing, etc.). Moreover, in the last year we focused on creating opportunities for technological innovation for our employees; some of the business ideas that had great impact came from them. All in all, I would say that our main achievement is, by far, the value added to the business through the newly developed applications and capabilities.

We have heard about the Betfair University. Can you tell us a few words about it and whom it addresses?
Betfair University is an internal program for our employees, carried out by Camelia Hanga and Andreea Misarăș, which provides professional and personal development opportunities. It is basically an umbrella concept for all learning activities, ranging from courses and workshops to certifications, mentorship, round tables and various learning events. All programs are personalized according to the individual needs and those of the team. The trainer and student roles often change; each employee can contribute with his/her expertise. The range of learning activities is very wide: from technical courses to soft skills, management school, coaching sessions, programming competitions and Olympiads. Betfair University is a key element of the organizational culture, as we give our employees ownership of their personal and professional development.

If you were to write a technical article, what would its title be?
I think it would be: Capacity and Scalability on an E-Commerce Platform.

Ovidiu Măţan Editor-in-chief Today Software Magazine



ClujIT Cluster: Interdisciplinary Innovation and advanced IT solutions, for an avant-garde urban community


It is already well-known that the mission assumed by Cluj IT Cluster consists of generating and sustaining a relevant interaction between the software industry, the academic environment and the public institutions, in order to further develop Cluj as one of the leaders in the IT domain. This leading position was won and sustained until now mainly through outsourcing services, but, in order to ensure sustainable growth, a diversification of activities will be necessary, through the development of proprietary IT products, including those generated in the context of innovative start-ups.

Paulina Mitrea, Coordinator of the Innovation group @ ClujIT Cluster



A consistent list of firm objectives targeted in this direction, already integrating some very concrete activities, aims at the following aspects:
· the creation of the necessary premises in order to increase the competitiveness of the companies from the IT&C area
· the identification and promotion of the initiatives that generate innovative products and services
· the acceleration of the collaboration, in the domain of scientific research, between the academic environment and the companies
· the generation and attraction of funding for research, development and innovation projects, as well as the creation of mechanisms for a collaborative approach to the important innovative projects
· the creation and promotion of the local IT&C industry brand
· the consolidation of the innovative potential of the IT domain through specific training sessions
· the generation of interactions and contexts that can sustain the development, at the level of the cluster's member companies, of innovative IT products, aimed to position them at the top level on the existing markets and to open new opportunities for them on the market.

This list of really ambitious objectives confirms its viability through interaction-based behaviour, which already has some concrete effects in the area of competitiveness. We are talking here, for exemplification, about the educational framework whose purpose is the awareness and consolidation of know-how concerning competitive advantage, offered in the context of the "Competitiveness and Innovation" program affiliated to Harvard Business School and delivered through the DECID department of the Technical University of Cluj-Napoca, as well as about the support for branding activities and trademark registration at OSIM (the State Office for Inventions and Trademarks). On the other hand, in order to find ways of financing research and innovation, consortium building for project proposals was started in the context of the Horizon 2020 framework program of the European Commission.

By virtue of a belief assumed through the statute of the ClujIT Cluster association, the most important objective is, however, that of offering innovative solutions for the community, based on collaborative know-how contributions and advanced, even avant-garde, competencies. They originate from all the environments able to provide knowledge and expertise, so well represented in our town through their potential in the IT domain, successfully proven by the local IT companies in conjunction with the innovative capabilities offered by the four universities which are members of this cluster.

Thus, it is about a huge potential, and the scale of the projects being developed in this major direction matches this dimension, since we are talking here about the project "Innovative Development through Computerization of the Local Urban Ecosystem", also known as "Cluj-Napoca: Next Generation Brained City", a project approved for financing under POSCCE/Op.1.3.3, as well as about the major project "Cluj Innovation City", considered to be the biggest urban project of our city, consolidated by a strong public-private partnership that unites the local authorities, the academic environment and the business environment represented through the ClujIT Cluster. These very important projects are aimed to generate a living area based on the innovative concept of an urban community, ecological and completely computerized, established under the paradigm of the "networked ecological city". According to this concept, our urban environment has a chance to become a harmonious and eco-efficient environment, in which the specific components and levels are harmonized through integrative and exhaustive computerization, this computerized system playing the role of a "brain" which uses data collected through sensor networks ("smart sensor networks"), but also from the human operators in all the contexts specific to the community (business, cultural, educational, social, medical, etc.), in order to manage and harmonize in an intelligent manner the resources and components of the community at each of these levels.

It is absolutely clear that the energies involved in the practical realization of this concept are and will be very important! And the signal given by the fact that there already exists the support of the local, governmental and European authorities is obviously a positive one!



IT Communities


In March, we will have the launch events for no. 21 of TSM in Cluj and Timisoara. We invite you to join us, so that we can browse through the magazine together and you can attend the presentations of the published articles. Also, we invite you to take part in Innovation Days in Cluj and in ...even mammoths can be Agile.

Transylvania Java User Group
Community dedicated to Java technology. Website: Since: 15.05.2008 / Members: 563 / Events: 43

TSM community
Community built around Today Software Magazine. Website: Since: 06.02.2012 / Members: 1241 / Events: 11

Romanian Testing Community
Community dedicated to testers. Website: Since: 10.05.2011 / Members: 721 / Events: 2

GeekMeet România
Community dedicated to web technology. Website: Since: 10.06.2006 / Members: 573 / Events: 17

Cluj.rb
Community dedicated to Ruby technology. Website: Since: 25.08.2010 / Members: 170 / Events: 40

The Cluj Napoca Agile Software Meetup Group
Community dedicated to the Agile methodology. Website: Since: 04.10.2010 / Members: 396 / Events: 55

Cluj Semantic WEB Meetup
Community dedicated to semantic technology. Website: Since: 08.05.2010 / Members: 152 / Events: 23

Romanian Association for Better Software
Community dedicated to senior IT people. Website: Since: 10.02.2011 / Members: 235 / Events: 14

Testing camp
Project which wants to bring together as many testers and QA people as possible. Website: Since: 15.01.2012 / Members: 285 / Events: 27



Calendar
March 10-14 (Cluj), 17-18 (Timișoara), 20-12 (Cluj): Microsoft TechWave
March 15 (Cluj): Coderetreat
March 18 (Târgu Mureș): TEDx Târgu Mureș
March 19 (Cluj): Launch of issue 21 of Today Software Magazine
March 20-21 (Cluj): Cluj Innovation Days (TSM recommendation)
March 24 (Timișoara): Launch of issue 20 of Today Software Magazine
March 24 (Iași): Open Source Iași
March 29 (Cluj): Windows Azure BootCamp in Cluj-Napoca
March 31 (Cluj): Mobile Monday Cluj #6
April 4-5 (Cluj): ...even mammoths can be Agile (TSM recommendation)
April 4-6 (Timișoara): Timișoara Startup Weekend


Our Fight against Technical Debt


When implementing a new functionality you have two options: blue or red, quick and dirty or smart and clean. If you choose the first option, you create a debt that has to be paid at some point. If you go with the second option, it takes longer to implement new features, but it makes change easier in the future.
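The trade-off can be caricatured with a compound-interest model: an unpaid shortcut charges "interest" every sprint, so the cheap hack eventually costs more than the clean solution would have up front. The rates and effort figures below are invented purely for illustration, not measurements from the project:

```python
def carried_debt_cost(principal: float, rate: float, sprints: int) -> float:
    """Effort wasted on a workaround grows like compound interest:
    each sprint, changing the hacked code gets harder by `rate`."""
    return principal * (1 + rate) ** sprints

# Hypothetical numbers: the hack costs 2 days now, the clean fix 5 days.
quick_total = 2 + carried_debt_cost(0.5, 0.20, 15)  # hack + accumulated interest
clean_total = 5.0                                   # paid once, no interest
```

With these made-up parameters the hack overtakes the clean option well before sprint 15; the point is only the shape of the curve, not the numbers.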

Septimiu Mitu Development Lead @ Endava

The graphics were adapted from the PSPO course.


Daniel Ionescu Scrum Master @ Endava

Our project started from scratch, so in the beginning all code was new. As the project progressed and sprints came and went, we were sometimes hard-pressed for time, usually at the end of the sprint, having overestimated our ability to deliver. At this point we started to care a bit less about sound engineering practice and we just got the job done. The client architect would provide input very late and, when he did, we saw that some of our code was not very close to his guidelines. Sometimes we patched things up to make them work and get the thing released. We love the concept of emergent architecture, whereby the 'big picture' of how the system should work emerges with the system, during its creation, and people add and change the system components as it grows out of the requirements. This is a very nice way of fighting uncertainty: we don't know the final requirements of the system, so we are building and designing it as we find out more. By adding a component here and a component there, our application was starting to resemble Frankenstein's monster. It did all it needed to do, that is for sure, but it was becoming harder and harder to understand what each

piece did and if it still had a purpose. We took another quality hit when some of our specialized people were on leave - for example Alex, our only strong BPM developer. When he was away, the other guys writing BPM would take some slightly uninformed decisions which would decrease the code quality. People call this the ‘bus factor’ - how many people would need to be run over by a bus in order to cause the team to stop working properly. Our ‘bus factor’ for certain things was one. To mitigate this issue we tried to pair Alex with Stefan and Cosmin who started to learn BPM.
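The 'bus factor' can be made concrete with a small calculation: given which team member covers which area, it is the size of the smallest group whose absence leaves some area with no owner. A minimal sketch in Python; the skill matrix below is invented for illustration and only mirrors the situation described in the article:

```python
from itertools import combinations

def bus_factor(skills):
    """Smallest number of absentees that leaves some skill area with no owner."""
    people = list(skills)
    areas = set().union(*skills.values())
    for k in range(1, len(people) + 1):
        for absent in combinations(people, k):
            remaining = [p for p in people if p not in absent]
            covered = set().union(*(skills[p] for p in remaining)) if remaining else set()
            if covered != areas:
                return k  # losing these k people halts some kind of work
    return len(people)

# Invented skill matrix: only Alex covers BPM.
team = {
    "Alex":   {"BPM", "Java"},
    "Stefan": {"Java", "UI"},
    "Cosmin": {"UI"},
}
print(bus_factor(team))  # 1: Alex alone covers BPM

# After pairing: Stefan and Cosmin learn BPM, so the bus factor rises.
team["Stefan"].add("BPM")
team["Cosmin"].add("BPM")
print(bus_factor(team))  # 2
```

Before the pairing, losing Alex alone stops all BPM work; after it, no single absence does.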

We need to change

As we continued sprinting, the project codebase became larger and larger. Whatever we had pushed under the rug was starting to surface. People were unable to write code until they 'fixed' something or 'made it work again'. What used to be a quick fix now became quicksand. We started to hear a recurring beat during the daily Scrum: my task took twice as long because I had to dig, fix and refactor the system to make it stable enough to work on. Needless to say, people started to get frustrated. Developers felt they were being held back and were getting angry because they were fixing stuff more than writing new code, the project leadership was getting alarmed by the lower team velocity, and at the same time everyone ended up working overtime to try to get as much as possible done in the sprint.

We were using both a physical board and Jira, because our client, including the Product Owner, was not collocated with the team. The internal communication revolved around the physical board, the heart of our team. The communication with the PO and the outside world, as well as reporting, was done using Jira. Over time, Jira became more and more difficult to follow because of unclosed tasks, changed user stories and unplanned tasks. From Jira it became impossible to tell the real state of the project. At a certain point, the problem became so painful that we were no longer able to work.

Make it visible

We needed to identify the expanse of the technical debt and to display it - on the board, in Jira, on Confluence, the tools we used to share information. The tough part was discussing with the client, admitting that we had a problem and getting their support. Some of the actions we started were a full architecture review with the client architect and team code reviews. We also extended the use of Sonar, our code quality tool, and of the Jenkins dashboards that let us know the state of the Continuous Integration system. Our colleague Madalin from Endava Cluj even created a dashboard that gives stakeholders a one-page view of the most significant quality metrics for all teams working on the same account. It shows the number and types of bugs and the trends in their number, the unit test code coverage and how the automated build system is faring, as you can see in the picture below.

Pay it back

There are three steps in this process, according to the PSPO course:
1. Stop creating debt
2. Make a small payment
3. Repeat from 2

What we did, after acknowledging the size of the debt problem and putting up related tasks on the physical board, was to start working on paying back some of it every sprint. At some point we realized there was so much technical debt that we needed to take a sprint off building new functionality and just pay it back. We have stopped creating technical debt.
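A one-page dashboard like the one described above essentially reduces raw per-team metrics to a few numbers and trends. A hedged sketch of that aggregation step; the metric names and figures are invented, and a real version would pull them from the SonarQube and Jenkins HTTP APIs rather than from literals:

```python
def summarize(teams):
    """Collapse per-team quality metrics into sortable one-page dashboard rows."""
    rows = []
    for name, m in teams.items():
        trend = m["bugs"][-1] - m["bugs"][0]  # change in open bugs over the period
        rows.append({
            "team": name,
            "open_bugs": m["bugs"][-1],
            "bug_trend": ("+%d" % trend) if trend > 0 else str(trend),
            "coverage_pct": m["coverage"],
            "build": "GREEN" if m["build_ok"] else "RED",
        })
    # Worst first, so problem teams sit at the top of the page.
    return sorted(rows, key=lambda r: r["open_bugs"], reverse=True)

# Invented sample: open bug counts over the last three sprints, per team.
teams = {
    "payments": {"bugs": [40, 35, 28], "coverage": 71.5, "build_ok": True},
    "frontend": {"bugs": [12, 18, 25], "coverage": 54.0, "build_ok": False},
}
for row in summarize(teams):
    print(row)
```

The point of the trend column is the same as in the article: a team whose bug count is falling is paying debt back, one whose count is rising is accumulating it.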


PM in Agile


Generally speaking, you cannot separate projects from project management. It sounds like a 100% rational process, but it is truly an art to handle the iron triangle of Quality, Time and Cost.

Ciprian Ciplea Project manager @ ISDC

Historically speaking, project management began its evolution in industrial and construction projects, while in the latest decades it has also been applied to the IT domain, in order to materialize the vision of courageous entrepreneurs and investors: IT as a new "commodity" alongside already existing ones such as oil, energy or, more recently, the Internet. Moving more specifically towards web application development, I would like to use this opportunity to highlight the insights I have gained as a project manager (PM) over project lifecycles with my current employer, ISDC, a European IT software and solutions company with expertise for enterprise clients in the fields of innovation, intelligence and integration. In what follows I will pass through the most important aspects that deserve the PM's special attention in an agile project: from a short history of the evolution of development methods to the agile project start-up, identifying stakeholders, the fundament phase, defining functional and non-functional requirements, change management, high-level planning, communication, risk management, progress monitoring, QA, management of visits, demo/UAT with the client, and concluding with the retrospective.

Client Satisfaction

From the moment you are assigned the role of project manager, you carry a great responsibility: client satisfaction. A client can be an entire organization with a series of decision makers involved, or a small group of persons. I strongly believe that client satisfaction is the essential vector in controlling the iron triangle and in developing a long business relationship with that client. Everything starts even before the project start-up. The pre-sales and sales process is often long and full of challenges. Once the parameters of the collaboration with the client are set and the contract is signed, the PM takes over these parameters and starts refining them.

“To be or not to be”… Agile

Agile or non-agile, a project needs to be managed in such a way that it reaches its targets under the conditions agreed with the customer, and it also delivers the product desired by the customer at the right moment and within budget, complying with a series of complex technical and functional parameters that define the product's quality. Over time, specialists, engineers and business people have tried several working formulas to successfully complete IT projects and collaborate. The well-known waterfall development method was used a lot in early IT projects, cascading the phases of requirements development, analysis, design and development, and finalizing with integration testing. Several intermediary iterative methods followed, breaking the monolithic waterfall model into smaller waterfall iterations or increments. Finally, the development model that imposed itself globally was the agile one, given its flexibility and shorter time-to-market. When mentioning agile, I will refer in what follows to Scrum as the development framework, but there are certainly other methods too, such as DSDM or Extreme Programming.

Stakeholder Identification

In most projects, I have worked with clients in complex organizational setups. This makes it difficult, sometimes impossible, for all the main stakeholders of the project to join the kick-off meetings at start-up. Therefore, an important target for a PM is to know all the partners interested directly or indirectly in the project and, in addition, to find out their stakes in the project or in the product(s) it delivers. You want to avoid the unpleasant situation where a stakeholder is identified too late, in the final phase of the project, which could ruin your plans and the final acceptance of the developed IT system.

Fundament Phase

ISDC offers its clients a fundament phase as part of the project, before the actual start of the development sprints. In this phase we can identify and communicate directly with the involved stakeholders. The benefits of this early step are tremendous: the working teams (client and supplier) get to know each other and align on the way of working, plus other important project aspects. For sure, this phase needs a clear plan and agenda, so that time is spent efficiently, covering the most important topics for the start of the development.

Expectations Gap

During the alignment of the supplier PM with the client, a gap of expectations on both sides might come to light even from the sales phase. The PM needs to pay attention to the identification and clarification of these differences and pursue common agreements with the client. These agreements refer to planning aspects as well, such as defining project phases and milestones, the number of development sprints, team size and even the way of working. In practice, at project start-up you need to cover a full range of topics in the client discussions: from functional and non-functional requirements, which influence the high-level architecture and the expected quality attributes, to the way in which software development and testing is done within your project. There is still the case when a list of open points remains. In this situation I recommend that the PM collects them and monitors them further until completion.

Non Functional Requirements (NFR)

There are cases when you have to come back to certain aspects several times, when the project is a complex, long run. One situation I have encountered was the discussion and clarification of the NFRs (after project start-up), such as the performance, maintainability, robustness and security of the developed software system. You have to be careful with these, since they can have a big impact on the high-level system architecture - and that is something your architect should tell you. Changes to the NFRs during the development of the software system, be it a web application or not, might impact planning and the project budget, since they have a great potential of generating re-work or refactoring. For a fixed-price project, this will give you the chance to exercise your change management skills to convince the client to pay for the additional changes.

Roles and Governance

Defining, clarifying and assigning the project roles from the start-up is essential, starting from the governance at Project Board level down to the Scrum team roles: Product Owner (PO), Proxy Product Owner (PPO), Scrum Master (SM) or Team Leader (TL), Architect, PM etc. There are certainly a lot of role setups that work, depending on the way of working: for example, mixed development teams (supplier and client), supplier-only teams, or collaboration with sub-contractors and consultants from the client side. Some roles can be aggregated: one person can be both SM and TL, sometimes an SM is also the PPO etc. In general, what needs to be taken into consideration when aggregating roles is to avoid conflicts of interest among the roles and, of course, the internal organization of the software supplier.

Scope and Change Management

Another lesson learned is that there are customers who know exactly what they want, but there are many who cannot indicate the details and need to be guided. In the end, as a software services provider, one of our challenges is to guide the client to the best solution that develops and improves their business. Here, the definition of functional requirements comes into the picture, whether it is done by the customer's business people together with specialized consultants or by the software provider in collaboration with the client. Since the transition from waterfall to agile, detailed requirements definition is done in steps, with the development team involved in pre-grooming or grooming sessions before each sprint. Rarely do we have complete functional specifications, and even then, at implementation time they are already obsolete. Therefore, a preliminary per-sprint prioritization of the major functional requirements (epics) from the product backlog is desirable. This needs to be done together with the client. As a PM, you need to ensure that the prioritization is realistic and approachable from the technical and functional perspective, after consulting your team, including the SM and the architect. For a fixed-price project, refining the major requirements (epics) in more depth (detailing them into user stories) in advance might prove necessary. We need it in order to establish a clear delimitation of the project scope. This delimitation is one of the big challenges of a fixed-price project. In order to guard the project scope, it is recommended to set up change management, which consists of defining a process (change management flow) for reporting and approving change requests and identifying the actors in this process with their responsibilities (client, team, PO, PM, Project Board, etc.). The Change Request (CR) definition should be crystal clear for the development team.
The role of the PM in this circumstance is essential; therefore a good collaboration with the SM and constant alignment is mandatory. Any request from the client that triggers a potential change needs to be notified by the team to the PM.

High-level planning, Assumptions, Constraints and Risks

A high-level project planning takes into account the budget, the effort needed from each role in the team per sprint/week, a breakdown in time of the budget (hours/money) and a sequencing of activities and milestones marking the completion of work stages. Planning should take into account the assumptions from which the development team starts; these may be technical, functional, etc., and their diversity is surprising. They come bundled with the project constraints that you find everywhere: the budget in most cases, delivery time, quality attributes, the availability of both the client and your team members. And, last but not least, planning is related to the risks identified at the beginning of the project. The high-level planning has to be validated internally in the supplier organization and with the client, even from project start-up, with the remark that it holds in the context of the list of assumptions, constraints and risks. For further changes that appear in the assumptions list, the PM has to determine their impact on the baseline planning.

Communication and Stakeholder Management

Communication is one of the key success factors of a project and the PM is the one who orchestrates it. Often communication is neglected and then miscommunication impedes the smooth running of the project; nevertheless, there are some things that can prevent such problems if they are set in place:
• how and to whom progress should be reported, how often (daily, weekly, monthly) and in what format/channel (document, email, collaboration platform, meeting, Skype call etc.), be it an internal stakeholder of the provider or one from the client side;
• the PM informing the team periodically, and the reverse; a weekly information meeting with the team might help;
• alignment of the internal stakeholders within the software supplier (PM, SM/TL, Architect, Management, Sales).

I also have several recommendations for things that should be avoided:
• interruption of the daily stand-up by the PM with topics that can be discussed later and are not in the interest of all team members; in fact, it is not always necessary for the PM to be present in these meetings;
• direct task assignment by the PM to team members without prior consulting and informing of the SM/TL;
• arranging meetings with the whole team when you do not need all its members;
• neglecting problems and risks disclosed in daily stand-ups or later in discussions with team members.

Identify and "defuse" the risk

Indeed, this is about risk management. Consider risks as something with destructive capacity for your project. Certainly there are also opportunities, but here I mean just the negative risks. Wouldn't you want to treat them carefully? "Defusing" these risks should be among the PM's main concerns. This is something I do permanently in the projects I am involved in: reducing their likelihood and impact through various actions. It is part of a PM's daily activity. Along with the team and the SM, the PM has to identify and find ways to reduce the damage in case risks occur. Furthermore, transparency is needed in the team for these risks to be put on the table by its members. After that, we need the perseverance of the PM to document, monitor and mitigate them. There are even cases in which some risks should be put on the table of the Project Board. Risks should be collected permanently by the PM/SM/Architect, in all circumstances. Do not confine this to just asking team members during daily stand-ups. Occasionally participate as an observer in technical meetings, sprint planning and grooming sessions; in this way you will identify new risks. Talk regularly with your team members, so that along with the daily impediments you also gather the risks the team and the project are exposed to.

Monitoring progress

Measuring progress is a key factor in determining the success of a project and in applying the right corrective actions. Currently, there is a multitude of software applications that provide development teams with storage of functional requirements, acceptance criteria, activity status and initial estimates, time spent on activities, a connection with the source code, code review remarks and more. In recent projects I have worked on, we used JIRA (Atlassian) or TFS (Microsoft). Using the burn-down charts of these applications, you can identify daily, during a sprint, whether progress matches the initial sprint planning. In case of deviations, the PM together with the SM can take corrective actions. Periodically, once a week for example, you as a PM can check the budget status (hours/money) as well. In the case of a fixed-price project, it is useful to have a description of the product(s) your project has to deliver, with a breakdown into functionalities - this is called a PBS (Product Breakdown Structure) - in which you can fill in the percentage of implementation of each functionality; you could also keep an effort estimate to completion for what is left to be implemented.
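The burn-down check and the PBS effort-to-complete estimate described above are simple arithmetic. A hedged sketch; the sprint length, daily figures and feature names below are invented for illustration:

```python
def burndown(total_hours, sprint_days, done_per_day):
    """Remaining effort after each elapsed day, next to the ideal straight line."""
    remaining, points = total_hours, []
    for day, done in enumerate(done_per_day, start=1):
        remaining -= done
        ideal = round(total_hours * (1 - day / sprint_days), 1)
        points.append((day, remaining, ideal))
    return points

def effort_to_complete(pbs):
    """PBS entries: feature -> (estimated hours, percent implemented)."""
    return {f: est * (100 - pct) / 100 for f, (est, pct) in pbs.items()}

# An 80-hour, 10-day sprint; the first five days are in (figures invented).
for day, left, ideal in burndown(80, 10, [6, 10, 4, 8, 12]):
    print(day, left, ideal)

print(effort_to_complete({"search": (40, 75), "export": (24, 0)}))
```

A day whose remaining hours sit above the ideal line is exactly the deviation on which the PM and SM would take corrective action.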



Most of the time, you can derive the PBS from the product backlog.

Quality Assurance

Delivery times, effort and cost are not everything; the developed software system must have certain quality attributes desired by the client. In ISDC I had the opportunity to ask for expert opinions on the functional and architectural quality and on the development and testing processes. This expert input gave me an objective indication of the health of the software system developed and of the processes used within the project. The default "built-in" mechanisms for source code review and functional test execution, found in the best practices of many development teams, offer you the certainty that no serious technical or functional error escapes the quality checks. In addition, I would also recommend the following actions during the life-cycle of a project:
• an architectural review done by a technical lead or software architect;
• a review of the release procedures and an audit of the configuration management;
• a load performance test of the software system;
• a security test (white/black box).

Issue Management

There are impediments and problems that cannot be removed by the SM/TL, so they get to the PM's table. In this case, as a PM, it is necessary that you identify alternatives and means to solve them: you communicate directly with the customer, resolve resource issues or, if you cannot solve the problem yourself, you use a powerful and influential instrument, the Project Board, before which you can bring the problem with the proposed alternatives. But do not surprise the board members: prepare an agenda in good time and communicate it, fix a date when all members meet and clarify what you expect from them.

Client Availability and Common Planning

One of the major problems in many projects is the lack of availability of key persons on the customer's side. To tackle this, I propose that you organize a joint planning session with the persons concerned from the client side or their manager. Basically, such a reconciliation of the planning, made by the development team together with the key partners from the client, will lead to better results and commitment from both parties (supplier and customer).

Visit Management

Another common cause of bottlenecks and unsolved problems is the lack or scarcity of in-person meetings between the development team (or its representatives: PM, SM, Architect, PPO) and the client (or their representatives: the peer PM at the customer side, PO, Key Users, Test Manager etc.). The PM is the one able to propose to the client, as part of the communication plan, a schedule of mutual visits to remove barriers and foster communication between the members of both teams.

Demo and User Acceptance Testing

Whenever you can, plan a demo with the client or an acceptance period: a User Acceptance Testing (UAT) on the client site, or invite the client to your site. It is vital for the success of UAT to be close to the customer. Plan visits to the client to demo the latest features developed or to begin a period of UAT. You may encounter resistance due to the travel costs, but argue by presenting the potential benefits.


Retrospective

At least at the end of each work phase (a new version, for example) consisting of a series of sprints, it is useful to have a moment of retrospective with the team and the client, to analyse project parameters such as: estimate deviations, re-work level, quality of the user stories, code and test script quality. You may want to do this in separate sessions, but it is better to identify with your team the things that worked, which you wish to keep, and the things you should avoid repeating in the future.


Conclusion

From my perspective, a PM in an agile project should be dynamic, flexible and receptive to the short feedback loop from the client and the team, while the PM's focus remains on customer satisfaction and on the quality of the system developed within budget and on time. These targets are achieved through sustained communication and stakeholder management, periodic presence on the client site, constant evaluation of the progress together with the customer, deciding on the implementation of CRs, and identifying and reducing risk impact, combined with effective QA.


Agility over Agile


ere’s a common situation nowadays: a large company learns about Agile and decides to adopt “agile processes”. The leadership team creates a transition plan that includes training, coaching and consultancy. They monitor the advance by asking: how many teams are agile? Because when we’ll be over 80%, we’re done transitioning and we’re agile. Right?

Alexandru Bolboaca Agile Coach and Trainer, with a focus on technical practices @Mozaic Works

Wrong. How agile the company is doesn’t matter. The only thing that matters is its agility: how fast can the company change when the context changes. Agility is a property of the organization, not of a team. Improving agility usually includes using agile and lean practices at team level, but it’s not the full story.

Who needs increased agility?

Adrian Bolboaca Programmer. Organizational and Technical Trainer and Coach @Mozaic Works

If we look at it this way, then it's obvious that not all companies need agility. Specifically, a company doesn't need agility if all the following criteria apply:
• it has a strong business model that brings great profit
• it doesn't want to expand into other markets or business models
• its clients want a long release cycle (1-2 years). Hospitals are an example of clients that don't want to update their software more than twice a year. Accountants are also known to be conservative users who would prefer fewer updates rather than more.

While agile practices can help structure the work in such a context, the business value they create will be limited. Certainly, not all hospitals or accountants fit this model, therefore any business needs to decide for itself. If we turn this argument around, we end up with the reasons for increasing a company's agility:
• when the market forces it to deploy improved quality faster
• when the market has changed and the business needs to adapt
• when the business needs to enter a new market
• when the business wants to grow fast

Notice that we're talking about increasing agility. Any business has some agility; the problem is how to increase it to respond to external forces that push it to change.

Measuring agility. A utopia?

Can we measure the agility of a company? Measuring it directly is obviously highly risky and very expensive, because we would need to simulate external forces on it and see how fast it changes. We can, however, look at past events and discuss hypothetical situations. For example: "what would you need to do to reduce the release cycle from one year to 6 months?". If the answer is "that would be impossible" or "that would be extremely difficult", then agility is obviously not very high. Another example: "how fast could you release a completely new product from an idea you have today?". If the answer is "one year" or "6 months", you can definitely do better, because 3 months is a usual benchmark in this regard. At this time, there is no definitive collection of questions that you can ask to measure your agility. The interesting questions are highly dependent on the context of the business and on its objectives. This is where the experience an agile coach has with various organizations comes in useful.

Principles and Practices to Support Increased Agility

Once the business has decided what to improve, it's time to define an experiment and focus on principles and practices. Principles are the foundation of any transition, while practices define how one does certain things. For example, the agile principle "the most efficient and effective method of conveying information to and within a development team is face-to-face conversation" leads to the practice of having collocated teams and of organizing meetings such as the Daily Scrum, Planning or the Retrospective where the whole team is physically present whenever possible. The principle


of fast and often feedback translates to the practices of:
• getting feedback from customers on features
• developers writing unit tests, so that they get fast feedback on the correctness of their changes
• continuous integration, to allow fast feedback on committed changes
• fearless feedback between team members, to improve team collaboration.

Principles are the backbone of the transition: whenever the purpose of a practice is not well understood, the principles help decide whether to stop doing it or to improve the way it's done. Some practices are very important for agility. For example, truly self-organized teams, which can make decisions inside boundaries set by managers, allow a much faster response to change than centralized decision making. While their workings are often non-intuitive and seem chaotic, they are proven to work and are part of a new brand of management that deals with complex systems, most commonly known as Management 3.0. Occasionally, practical reasons prevent teams from following a certain principle; that's when the practices should be adapted to compensate. For example, teams that are forced to work in a distributed way should compensate by using a non-stop video and audio connection on a big screen, so that it almost feels like collocation. Traveling between sites so that the team members get to know each other better also strengthens collaboration. If the business is serious about increasing its agility, it won't stop at organizational practices such as adopting Scrum for all the teams. Changes in other areas are typically required. Let's look at some of them.

Release Cycle

Reducing the release cycle from one year to one month or less is not possible, according to the best knowledge of the industry, without changing the technical practices that are being used. Agility at this level requires flexible designs, automated tests, executable specifications, continuous integration and daily refactoring. The previous article, "Agility implies Craftsmanship", explained in more detail why. The article Building Changeability in Design - http:// - explores the role of software design when changeability is an important characteristic.

Feedback

The principle of fast feedback tends to permeate other departments as well, most notably HR. If bi-yearly evaluation was the norm, agile-minded developers will demand faster feedback on their performance. Practices such as monthly one-on-one meetings with direct managers, monthly 360-degree feedback or continuous feedback between team members can help with this new need.

Management 3.0

The role of management switches towards less direct control and more leadership. Some of the decisions typically made by managers are delegated to the self-organized teams, while the manager becomes a coach and support for the team members. This doesn't happen overnight and requires a transition period; visualizing the areas of responsibility and the delegation levels for the teams helps both management and the teams understand their new roles and the road ahead. Once it does happen, it clears the minds and the time of the managers, allowing them to focus more on strategy.

Business Agility

Probably the hardest change to accept is on the business side. Agility is measured at this level in terms of how fast the business can enter a new market or change its business model. The lean practices and principles play an important role in this kind of change. The first step is to identify the value stream of the new business model, meaning what people will pay for and what the steps to create this value are. If that's unknown, it's time to define some hypotheses and validate them through experiments, in the Lean Startup way (see the previous article about Lean Startup). Once the value stream is clear, it's time to:
• visualize it
• measure the cycle time: the time from when a new item starts being developed until it's done
• measure the lead time: the time from when the user asked for something until it's paid for
• continuously improve the value stream by removing bottlenecks and waste (anything that doesn't help the development of the valuable item).

Continuous Improvement

The continuous improvement part is the most difficult, since it often requires tough business and management decisions. Coaching plays a very important role in maintaining the rhythm of improvements; they are often invisible from inside a team, due to familiarity with the process.

Conclusion

Agile or Lean don't matter by themselves. Agility matters. Agility is the property of a company defined as the speed with which it changes when it has to. Companies typically need to change when the market forces them to, when they decide to enter a new market or when they want to grow fast. To improve agility, the business objectives should first be clearly defined. The set of principles and practices from agile and lean are then adopted in the company while keeping the business objectives in mind. Team-level practices such as those defined by Scrum or XP are only one side of the agility improvement. Managers need to switch from a traditional role to a leadership role and strategic thinking. HR departments should adapt evaluation to allow early and often feedback, even through direct feedback between team members. Business leaders need to start looking at the business in terms of value streams, bottlenecks and removing waste, getting away from the traditional view of compartmentalized departments. Improved agility can be obtained without changing all the levels of the organization. Remember, though, that you can always do better.

no. 21/March, 2014 |


Phases and processes in the structure of a project


In most software projects, except perhaps those of small size, project managers use well known methodologies. These can be represented by dedicated systems, such as PMBOK (Project Management Body of Knowledge) or PRINCE2, but they may also be replaced by methodologies specific to a particular organization. Although these approaches have a number of differences in orientation and use specific terminology, all have a few key points in common: the projects are delivered in stages, and these stages involve the use of some common project management processes.

Augustin Onaciu
Project Manager @ Fortech

In this context, the stages or phases of a project are crucial for a project manager. By organizing things in stages, the project manager ensures that the services or products delivered at the end of each phase are as intended and, at the same time, that the project team members are ready for the next phase of the project. Next, I will summarize the general phases of a project, as well as some practical aspects gathered from experience: from the projects developed and from the relations with team members, clients and other stakeholders.

Establishment of the project strategy

In this first phase of the project lifecycle, business requirements are defined and proposals regarding the approach and the methodologies to be used in the project are made. Essentially, the purpose of this stage is to obtain approval for the business strategy that validates the intended approach. Furthermore, it is recommended for the project team to review the business requirements at the end of each stage of the project, to ensure they are still valid and current.

Analysis and preparation

Consistent interaction with the client and/or shareholders, alongside the collaboration with team members, is a key aspect of this stage, which includes activities such as:
• "Decomposition" of project components into a Work Breakdown Structure (WBS);
• Recruitment of a project team or extension of an existing one (if applicable);
• Establishment of a project plan and intermediate stages. Involving the team members in detailing the plans for the intermediate stages gives them a sense of responsibility and ownership for these phases, ensuring a more dedicated attitude;
• Establishment of a collaboration program with the support teams (IT, operations, etc.).

Architecture and design

The aspects involved in this phase relate to defining and creating the items to be delivered, having the project strategy and the business requirements as a starting point. At this stage, depending on the size of the project, it is also important to have the contribution of a business analyst, who can work with the client to establish the design elements and the details related to the architecture. If changes are needed at process level, the use of a Flow Chart or Swim Lane Diagram is indicated, to create a graphical representation of the process.

At this point, all efforts should be focused on analyzing and considering potential risks, before starting the actual development. Problems detected in time are almost always easier to approach in the design stage than after the start of the development. A complete and well-documented design stage provides, to some degree, security regarding the compliance of the services or products to be delivered. Just the same, an incomplete design phase often leads to failure regarding objectives and customer expectations.

In the case of projects with identified risks of a technical nature, alongside other elements generating uncertainty, it is recommended to consider a feasibility analysis stage, to prove the validity of the product by developing a simplified prototype (demo).

Development and testing

Once the project has undergone a complete analysis and has been designed in sufficient detail, the project team can start the development of the project components. The detailing of the various processes and the potential approaches to these phases are not the subject of this article, being a topic to be elaborated separately.

Preparation and validation

The objective of this stage is the preparation for the product launch. This phase can involve:
• preparation of user guides;
• provision of a support plan;
• transfer of the data to client systems;
• identification of the elements that will ensure the efficiency of the project after launch;
• validation of project objectives.

Support and Feedback

Providing support during the transition of the project from the project team to the client team is the focus of this stage. In many cases, for various reasons, the project team may be reallocated to new projects too quickly once the project has been delivered; thus the awareness of the benefits or potential problems arising after delivery, for reasons not necessarily related to the project team, diminishes. Monitoring the project benefits is very important for team morale and can help promote the project or establish future action points, to ensure the success of future initiatives.

Closing the project

Although this stage is not among the most anticipated or desired phases of the project, it should be carried out with utmost responsibility, in order not to interfere with other initiatives that might reflect negatively on the organization. During this phase, the following are required:
• completion and storage of project documentation;
• an analysis after the project launch, so that the project team can use the gathered experience in the future;
• reallocation of team members to projects in which they can contribute and use the gathered knowledge and experience.

During all these stages, a number of processes specific to project management can be identified:
• Management of the stages – it must be ensured that the conditions for the completion of each phase are met. Understanding the elements to be delivered at the end of each phase is essential.
• Planning – planning is needed for the entire project, from the beginning, and then detailed planning for each phase. It must be ensured that the project has the necessary resources, methodologies and tools to support each phase, so that the project can be delivered on time, on budget and in compliance with the agreed quality standards.
• Control – keeping the scope and costs under control, and proper management of time, risks and benefits are essential. The creation of reports containing information relevant to the status and progress of the project is a common practice.
• Team management – a project manager is also responsible for the management of the project team. Projects normally require the formation of a team of people who have different skills and knowledge, and the project manager must be able to define the composition of the team, keeping a balance and also covering potential training needs.
• Communication – in every phase of the project, it is essential to establish the person or persons responsible for the communication with team members, management and/or shareholders. Inadequate and inefficient communication is a common problem in many projects.
• Integration – many projects are not independent, but are parts of more complex systems that represent important parts of a business. The interaction between the current project and the rest of the components or existing initiatives should be carefully considered.

Therefore, the careful approach of all these phases of the project lifecycle is very important for the project's success and, consequently, for the quality of the product offered to the customer (internal or external), as well as for the satisfaction and evolution of the teams.


Tick Tock on Beanstalkd Message Queues


Time is generally a pretty restrictive dimension; all the more so in IT, where every product, regardless of its stage of development, is subjected to this measurement. Moreover, IT developers have divided time into different categories, and the resources allocated to each project are mainly targeted at increasing time efficiency in the development of the product. In this article, I will only talk about the time allocated to the execution of an application during a given session.

Tudor Mărghidanu
Software Architect @ Yardi România

I'm sure many of you are familiar with the notion of message queues, especially if you've had the opportunity to work with applications that perform asynchronous operations. These message queues offer a series of undeniable advantages, such as:
• Decoupling – the separation of the application's logic;
• Scalability – more clients can process data at the same time;
• Redundancy – errors are not lost and you can start re-processing them.

There are several services that help implement such message queues, but in this article I will only talk about Beanstalkd.

Beanstalkd

Beanstalkd is a service with a generic interface, which was developed in order to cut down on the latency between the processes of an application that requires a longer execution time. Thanks to its generic interface, this service represents a major scalability factor within the application that needs to be developed. Beanstalkd doesn't impose any implementation limit (language or marshalling), because it uses PUSH sockets for communication and has a very simple protocol. I will explain some of the more important terms in the Beanstalkd terminology, so that you get a better view of what the rest of the article will focus on:
• Job: the message itself, serialized according to personal criteria or according to the user library that is being used.
• Tube: the namespace that is used for the queue. Beanstalkd accepts several queues at the same time.
• Producer: the process that deals with queueing the messages on the tubes.
• Consumer: the processes that use the messages which were queued on one or more tubes.
• Operations: the producer or the consumer can perform the following operations on the jobs posted on the tubes: put, reserve, delete, release, bury.

Problem

My belief is that a good learning method is one that offers examples, so I thought of a problem for this article:
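Beanstalkd's wire protocol is simple enough to compose by hand. As a language-neutral sketch (the article's own examples use Perl's Beanstalk::Client; the tube name and payload below are invented), the producer-side put and use commands look like this:

```python
def put_command(data: bytes, priority: int = 10000, delay: int = 0, ttr: int = 60) -> bytes:
    """Build a beanstalkd 'put' command: put <pri> <delay> <ttr> <bytes>\r\n<data>\r\n."""
    return b"put %d %d %d %d\r\n%s\r\n" % (priority, delay, ttr, len(data), data)

def use_command(tube: str) -> bytes:
    """Select the tube that subsequent 'put' commands will target."""
    return b"use %s\r\n" % tube.encode()

# A producer session against a hypothetical 'raw_videos' tube:
print(use_command("raw_videos"))   # -> b'use raw_videos\r\n'
print(put_command(b'{"id": 42}'))  # -> b'put 10000 0 60 10\r\n{"id": 42}\r\n'
```

Everything is plain ASCII over a TCP socket, which is why clients exist for practically every language.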


"Build a web application where the users may upload video files in various formats, so that they are available for display on one of the application's pages."

The importance of this statement comes from the fact that it is vague enough to leave room for the scalability of the problem, which in turn triggers the scalability of the solution; but its beauty lies, in fact, in Beanstalkd's simplicity.

The figure illustrates the data flow within the application and the way in which the web application should interact with the users, by using a layer of shared storage. Think about a situation where users upload a set of video files on a given page, videos that enter a pre-defined process that fulfills two major functions: the first one deals with storing the file in a pre-defined persistence layer (distributed file system or database); the second one prepares and writes a message that contains information pointing to the reference in the persistence layer. From this point on, the operation becomes an asynchronous and distributed one; if the ratio between the number of consumers and the frequency of the input data was correctly determined, the files that were uploaded should be processed in a short time.

package MyApp::Globals;

# ... more static properties ...

use JSON::XS;
use Beanstalk::Client;

class_has 'message_queue' => (
    is      => 'ro',
    isa     => 'Beanstalk::Client',
    default => sub {
        return Beanstalk::Client->new( {
            # NOTE: this usually should come from a configuration file...
            server  => 'localhost',
            # Making sure we serialize/deserialize via JSON.
            encoder => sub { encode_json( shift() ); },
            decoder => sub { decode_json( shift() ); },
        } );
    }
);

package MyApp::Web::Controllers::Videos;

# ...

sub upload {
    my $self = shift();

    # Retrieving the uploaded video.
    my $video = $self->req()->upload( 'video' );

    # Additional content and headers validation ...

    # Storing the video in the persistence layer ...
    my $object = MyApp::Globals->context()->dfs()->raw_videos( {
        filename => $video->filename(),
        headers  => $video->headers()->to_hash(),
        data     => $video->slurp(),
        # ... additional user data
    } );

    # Making sure we use the right tube for sending the data.
    MyApp::Globals->message_queue()->use( 'raw_videos' );

    # Storing the data in the queue...
    MyApp::Globals->message_queue()->put( {
        priority => 10000,
        data     => $object->pack(),   # serialization occurs automatically ...
    } );
}

Once the client is implemented, its execution can be scaled both vertically (more system processes on the same machine) and horizontally (more system processes on different machines), using the same initial rule.


The consumers work as fast as they can, requesting messages from Beanstalkd as they process the data, and we change the status of each message as we go along. In this way, we can track the number of times the processing ran correctly and also the number of errors we encountered. If we run into an error, we can change the status of the messages we marked as failed once the problem is solved. Another important aspect is that the parallelization of the consumers can be achieved through system processes, which leads to considerably easier management and to the elimination of resource locking and memory leaks.

# Getting messages only from these tubes ...
MyApp::Globals->message_queue()->watch_only( 'raw_videos' );

while( 1 ) {
    # Retrieving a job from the message queue and marking it as reserved...
    my $job = MyApp::Globals->message_queue()->reserve();

    eval {
        my $data = $job->args();   # automatic data deserialization ...
        # Doing the magic on the data here ...
    };

    # In case of an error we signal it in the back-end and bury the job.
    if( my $error = $@ ) {
        $logger->log_error( $error );
        $job->bury();
    }
    else {
        # If everything is ok we simply delete the job from the tube!
        $job->delete();
    }
}

On a more personal note, I have always liked simple and elegant solutions that involve a minimum set of rules and a simple terminology; Beanstalkd is a perfect example of this. But it is also important to note that the introduction of this service represents, to a certain degree, an integration effort, and no one should try to re-invent the wheel at this point in the development of the application. Another vital aspect is that using a distributed system in this manner allows for both a compression/dilation of time and a very clear fragmentation of the execution process. Therefore, a process which, running sequentially, could take a few weeks to complete may be reduced to a few days or even a few hours, depending on the duration of the basic process.

Pros
• Speed
• Persistence
• It doesn't require any serialization model

Cons
• The distributed mode is only supported in the client
• Lack of a security model



Back to the Future: HTTP 2.0


Let's have a quick look at the history and development of the HTTP protocol (Hypertext Transfer Protocol), in order to better understand the modifications proposed for the 2.0 version.

The need for the evolution of the HTTP protocol

Rareș Rusu
Software Engineer @ Betfair România

HTTP is one of the protocols that have nourished the spectacular evolution of the Internet: it allows the clients to communicate with the servers, which is the base of what the Internet is today. It was initially designed as a simple protocol to ensure the transfer of a file from a server to a client (the 0.9 version, proposed in 1991). Due to the undeniable success of the protocol, billions of devices are able to communicate these days using HTTP (the current version 1.1). The extraordinary diversity of content and of the applications available today, together with the users’ requirements for quick interactions push HTTP 1.1 beyond the limits imagined by those who designed it. Consequently, in order to ensure the next leap in the performance of the Internet, a new version of the protocol is required, to solve the current limitations and to allow a new class of applications, of a greater performance.

Latency versus bandwidth

Latency and bandwidth are the two characteristics that dictate the performance of data traffic on the network:
• Latency (one way / round trip) – the time from the source sending a packet to the destination receiving it (one way), and back (round trip).
• Bandwidth – the maximum capacity of a communication channel.

As an analogy with the plumbing of a house, we can think of bandwidth as the diameter of a water pipe: a larger diameter allows more water to pass through. On the other hand, when the pipe is empty, no matter its diameter, the water will reach its destination only after travelling the entire length of the pipe (the latency).

It is intuitive to judge the performance of a connection by its bandwidth (a 10 Mbps connection is better than a 5 Mbps one), but bandwidth is not the only factor responsible for performance: in fact, because web applications use several short-duration connections, latency influences performance more than bandwidth does. The conclusion of these observations is that any improvement in the latency of communication has a direct effect on the speed of web applications. If, by improving the protocols, we could reduce the communication necessary between the two ends of the connection, then we would be able to transfer the same data in a shorter time. This is one of the objectives of HTTP 2.0.

Picture 1. The evolution of the loading time of a page (milliseconds) depending on the bandwidth (Megabit/s), by Mike Belshe – More Bandwidth Doesn't Matter (much)

Picture 2. The evolution of the loading time of a page (milliseconds) depending on the latency, by Mike Belshe – More Bandwidth Doesn't Matter (much)

A short incursion into TCP

In order to understand the limitations of version 1.1, it is helpful to have a quick look at the TCP protocol (Transmission Control Protocol), which ensures the transport of data between client and server:
• on establishing a connection, it requires an exchange of three messages (three-way handshake) between the client and the server before any data packet is sent. Consequently, the latency of the connection is directly reflected in the speed of the data transfer.
• TCP is optimized for long-duration connections and for the transfer of a large amount of data on the same connection: after establishing a connection, a server will gradually increase the number of packets sent towards the client as it receives the confirmation of their delivery (the slow-start algorithm). Thus, the bandwidth will not be completely used immediately after the connection has been established. By contrast, most web applications initiate many short-duration connections to transfer content (according to HTTP Archive, a web page is made up, on average, of 90 resources – HTML content, Javascript, images etc.).

Even though HTTP does not strictly require TCP as a transport protocol, one of the goals of HTTP 2.0 is to amend the standard in order to take advantage of these particularities of the transport level, in view of substantially improving the speed perceived by the users of web applications.

The limitations of HTTP 1.1

One of the aims of HTTP 1.1 was to improve the performance of HTTP. Unfortunately, even though the standard specifies features such as the processing of parallel requests (request pipelining), practice has invalidated their implementation due to the impossibility of correct usage; at the moment, most browsers deactivate this option by default. Consequently, HTTP 1.1 imposes a strict ordering of the requests sent to a server: a client initiates a request towards the server and must wait for the answer before initiating another request on the same connection. Thus, a larger response may block a connection without allowing the processing of other requests in the meantime. Moreover, the server does not have the possibility to act according to the priorities of the client's requests.

The developers of web applications have found solutions to work around these limitations, which are now considered recommended practices for the performance of web applications:
• the majority of browsers open up to six simultaneous connections to the same domain, as an alternative to the actual impossibility of requesting several resources in parallel on the same connection; we have already mentioned that a page is made up, on average, of 90 resources. Web developers have pushed this facility further and distribute content across different domains (domain sharding), in order to force the parallel downloading of as many resources as possible.
• files of the same type (Javascript, CSS – Cascading Style Sheets, images) are concatenated into a single resource (resource bundling, image sprites), to avoid the overhead imposed by HTTP when downloading many small resources; some files are included directly in the page source, so as to completely avoid a new HTTP request.

Although these methods are considered "good development practices", they derive from the current limitations of the HTTP standard and generate other problems: the usage of several connections and several domains to serve a single page leads to network congestion, useless additional procedures (DNS lookups, initiations of TCP connections) and additional load on the servers and on the network intermediaries (several sockets busy answering several requests); the concatenation of similar files obstructs their efficient caching on the client and works against the modularity of applications. HTTP 2.0 addresses these limitations.
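The dominance of latency can be made concrete with a back-of-the-envelope model, echoing Mike Belshe's observation cited in the figures. All numbers below are invented for illustration, and the model deliberately ignores slow start and any parallelism inside a single connection:

```python
def page_load_time(resources=90, connections=6, rtt=0.1,
                   size_kb=15, bandwidth_mbps=10):
    """Toy model: each of the connections serves its share of the
    resources sequentially (one round trip per request), plus the raw
    transfer time of all the bytes over the shared bandwidth."""
    rounds = -(-resources // connections)                  # ceil division
    transfer_s = resources * size_kb * 8 / 1000 / bandwidth_mbps
    return rounds * rtt + transfer_s

base = page_load_time()
more_bandwidth = page_load_time(bandwidth_mbps=20)  # double the bandwidth
less_latency = page_load_time(rtt=0.05)             # halve the latency

# Halving latency shaves more off the load time than doubling bandwidth.
assert less_latency < more_bandwidth < base
```

With these (hypothetical) figures, the round trips alone cost more than the actual byte transfer, which is exactly why a protocol that reduces the number of round trips pays off more than a fatter pipe.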

HTTP 2.0: design and goals

The effort to improve the HTTP standard is tremendous. Taking into account the current wide spread of the protocol, the intention is to bring obvious improvements regarding the above mentioned issues, not to rewrite or substantially change the current specifications. The main goals of the new version are:
• to enhance the speed of loading pages, as compared to the 1.1 version;
• to use request pipelining, but through a single TCP connection;
• to keep the semantics existing in the 1.1 version in relation to the methods, answer codes and headers;
• to define the interaction with the 1.1 version.

Request pipelining in HTTP 2.0

The major change brought by the 2.0 version is the way the content of an HTTP request is conveyed between the server and the client. The content is binary, with the purpose of allowing several parallel requests and answers over the same TCP connection. The following notions are useful in order to better understand how the pipelining of requests and answers is actually done:
• Stream – a bidirectional exchange of messages within a connection. Each stream has an identifier and a priority.
• Frame – the basic communication unit in HTTP 2.0, containing a header area, which identifies the stream it belongs to, and a data area.
• Message – a sequence of frames which forms a message in HTTP (a request or an answer).

Within a connection there can be an unlimited number of bidirectional streams. The communication within a stream is carried out through messages, which are made of frames. The frames can be delivered in any order and they will be reassembled by the client. This mechanism of decomposing and recomposing the messages is similar to the one existing at the level of the TCP protocol. This is the most important change of HTTP 2.0, since it allows web developers to:
• initiate several parallel requests and process several parallel answers;
• use a single connection for these requests and answers;
• reduce the duration of loading a page, due to the reduction of latency;
• eliminate from web applications the alterations specific to the 1.1 version, done in view of improving performance.

Server push

One of the obvious limitations in HTTP 1.1 is the impossibility for the server to send multiple answers to a single request of a client. In the case of a web page, the server knows that, besides the HTML content, the client will also need Javascript resources, images, etc. Why not completely eliminate the client's need to request these resources and give the server the possibility to send them as additional answers to the client's initial request? This is the motivation of the feature called server push. The feature is similar to the HTTP 1.1 practice of including the content of some resources directly in the page sent to the client (inlining). However, the great advantage of the server push method is that it gives the client the possibility to cache the received resources, thus avoiding further calls.

Header compression

In HTTP 1.1, each request made by the client contains all the headers pertaining to the domain of the server. In practice, this adds between 500 and 800 bytes to each request. The improvement brought by HTTP 2.0 is that of no longer conveying the headers that do not change (counting on the fact that there is only one connection open with the server, so the server can assume that a new request will have the same headers as the preceding one, unless we state otherwise). Furthermore, the entire information contained in the headers is compressed, to render it more efficient.
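The stream/frame/message layering described above can be sketched in a few lines. This is a simplified illustration of the reassembly idea, not the real HTTP 2.0 binary framing format, and the payloads are invented:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Frame:
    stream_id: int   # identifies the stream the frame belongs to
    data: bytes      # payload fragment

def reassemble(frames):
    """Group frames by stream id and concatenate their payloads,
    regardless of the order in which the frames arrived."""
    messages = defaultdict(bytes)
    for frame in frames:
        messages[frame.stream_id] += frame.data
    return dict(messages)

# Frames from two streams interleaved on one connection:
wire = [Frame(1, b"GET /in"), Frame(3, b"GET /sty"),
        Frame(1, b"dex.html"), Frame(3, b"le.css")]
assert reassemble(wire) == {1: b"GET /index.html", 3: b"GET /style.css"}
```

Because each frame carries its stream identifier, a slow answer on one stream never blocks the frames of another stream, which is precisely what HTTP 1.1 could not do on a single connection.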

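The idea of not re-sending unchanged headers can be sketched as a simple delta encoding. This is a toy illustration of the principle only, not the actual compression scheme standardized for HTTP 2.0, and the header values are invented:

```python
def header_delta(previous, current):
    """Return only the headers that changed (or appeared) since the
    previous request on the same connection; unchanged ones are implied."""
    return {name: value for name, value in current.items()
            if previous.get(name) != value}

first = {"host": "", "user-agent": "Browser/1.0",
         "accept": "text/html", "cookie": "session=abc123"}
second = dict(first, accept="image/png")   # only one header differs

assert header_delta(first, second) == {"accept": "image/png"}
```

On a long-lived connection, most requests repeat the same host, user agent and cookies, so sending only the delta removes most of the 500-800 bytes of per-request overhead mentioned above.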
A brief look into the future

HTTP 2.0 is still a standard in the making. Most of its basic ideas were taken from the SPDY protocol initiated by Google. SPDY continues to exist alongside the standardization effort of HTTP 2.0, in order to offer a ground for trying out and validating experimental ideas. According to the timetable announced at the moment, we expect the HTTP 2.0 specification to be final in the autumn of 2014, followed by available implementations. Building on the extraordinary success of HTTP, the 2.0 version tries to amend the current limitations and to offer mechanisms by which the development of the Internet can be further sustained.

References
1. Ilya Grigorik – High Performance Browser Networking [ books/1230000000545/index.html]
2. HTTP Archive [ index.php]
3. Mike Belshe – More Bandwidth Doesn't Matter (much) [ viewer?a=v&pid=sites&srcid=Y2hyb21pdW0ub3J nfGRldnxneDoxMzcyOWI1N2I4YzI3NzE2]


A JavaScript Logging Library for Productive Programmers


The most widely used method of logging events for debugging in JavaScript is calling "console.log(message)", which will show the message in the developer console. To log warnings or errors, developers can also use "console.warn(message)" or "console.error(message)".

Bogdan Cornianu Java developer @ 3Pillar Global

The "console" object is a host object, which means that it is provided by the JavaScript runtime environment. Because it is a host object, there is no specification describing its behavior and, as a result, the browser implementations differ, or the object can even be missing, as in Internet Explorer 7. If most browsers already implement logging functionality out of the box, why should there be a library for the same purpose? In order to extend the functionality offered by the built-in logging mechanism with features such as searching, filtering and formatting of messages.

I'm working on a web based project with the server side written in Java and the client side written in JavaScript. The client for whom we develop this web application couldn't provide us with the server logs in a timely manner. In order to solve the incoming defects as fast as possible, we implemented on the server side a mechanism which, in the case of an exception, would collect information about all the tables involved; this information, along with the Java stack trace, was archived and sent to the browser, so it could be downloaded on the fly by the client. He would then send this archive back to us, so that we could investigate and fix the issue.

On the front-end side, when the client submitted an issue which we couldn't reproduce, we had to connect remotely to their desktops, install Firebug and start watching its console. In order to improve this process and make issue reporting easier, we thought of replicating, on the client side, the mechanism we had implemented on the server side. This was the birth of "Logger", a JavaScript logging library. What makes it stand apart from the rest of the logging libraries is its ability to save the events (logs) to the developer console and to the browser's local storage, but also to export them to a text file.

A few words about the usage of the library. Including it is very easy and requires only one line of code:

<script type="text/javascript" src="Logger.js"></script>

There are two ways of logging events:

1. Using Logger.log >Logger.log(Logger.level.Error, “error message”, Logger.location.All); [ERROR]> error message | no. 21/March, 2014


programming A JavaScript Logging Library for Productive Programmers Because we used “All” as location, the message will also be saved in the local storage.

When clicking the button we will get the following messages:

>localStorage; Storage {1393948583410_ERROR: “error message”, length: 1} >

2.Using specific methods for each event: Logger.error(message, location), Logger.warn(message, location),, location) >Logger.error(„second error message”, Logger.location. All); [ERROR]> second error message Logger.warn(„a warning message”, Logger.location.All); [WARN]> a warning message„an information message”, Logger.location. All); [INFO]> an information message

Pressing the “OK” button will show the browser’s save dialog or save the file directly, depending on the browser settings. The filename is made up of the current date and time.

Each event is saved in the browser’s local storage as a key-value pair. The key is made up of the “timestamp_level”, and the value contains the event message: >Logger.getEvents(Logger.level.All); [ERROR]> error message [ERROR]> second error message [WARN]> a warning message [INFO]> an information message

To list all of the events ,we can use “Logger.getEvents(level)”: I will continue with presenting the functionality which, in my opinion, will make programmers more productive: exporting all events to a text file. By exporting to a text file, programmers can have faster access to the events which occurred during the application›s run time. Exporting the events can be done by calling “Logger. exportLog()” or by setting a callback function in the event of an error occurring in the application: ”Logger.onError(exportLog, suppressErrorAlerts, errorCallback)”

If we open up the file we will see the logged events and also the JavaScript error. [ERROR]> error message [ERROR]> second error message [WARN]> a warning message [INFO]> an information message [ERROR]> Uncaught Error: eroare. la linia 19 in http://localhost:8080/logger/main.html

We can see the error occurred in file “main.html” at line 19, which is where we created the button whose click event throws a JavaScript error: <button onclick=”throw new Error(‚eroare.’)”>Buton </button>

If “exportLog” is set on true then the error which occurred and also all of the events present in the local storage will be exported to a text file. If ”suppressErrorAlerts” is set on true then the error will not be written also to the browser’s console. ”errorCallback” is the callback function which will get called when the error occurs. The following code sequence will display a dialog message in the event of a JavaScript error.

Logger.onError(true, true, function(errorMsg, url, lineNumber) {
    var errorMessage = errorMsg + ' at line ' + lineNumber + ' in ' + url,
        response = '';
    Logger.error(errorMessage, Logger.location.LocalStorage);
    response = confirm("An error occurred. Would you like to export the log events to a text file?");
    if (response === true) {
        Logger.exportLog();
    }
    return true; // suppress errors on the console
});

The library uses the "Revealing Module" pattern to expose the functions which can be called by developers. The easiest way to write data from the local storage to a file would have been to use a third-party JavaScript library called "FileSaver.js", which uses the FileSystem API or, when this is unavailable, provides an alternative implementation. To avoid using any third-party libraries, I found a solution using a Blob object:

saveToDisk: function(content, filename) {
    var a = document.createElement('a'),
        blob = new Blob([content], {'type': 'application/octet-stream'});
    a.href = window.URL.createObjectURL(blob);
    a.download = filename;
    a.click();
}

I created an anchor element and attached to its href attribute a URL object created from a Blob. A Blob is an immutable object which contains raw data. The name of the file, made up of the current date and time, is assigned to the download attribute. The click() function is then called on the anchor element, which will show the browser's save dialog or save the file directly to disk, according to the browser settings.

We can test if this code works by throwing an error when clicking a button:

<button onclick="throw new Error('eroare.')">Buton</button>

In order to intercept the JavaScript errors, I created a variable called initialWindowErrorHandler which holds a reference to the window.onerror event handler. Initially, window.onerror will write all error messages to the console. If the errorCallback parameter of the onError() function is not undefined, then the function which the errorCallback parameter points to will get called on each JavaScript error.

At any moment we can revert to the original window.onerror behavior by calling Logger.resetWindowErrorHandler().

This is the initial version of the Logger library, without any advanced features. In the future, it could be improved by adding the possibility to set a maximum number of recorded events, the ability to delete events by type, filtering the exported events by type, a graphical interface to browse through the existing records and generate statistics about the stability of the application in a given time frame, and the ability to customize the generated filename according to the programmer's needs.

References:

Mozilla Developer Network
1. Create Object URL: URL.createObjectURL
2. Blob
3. File System API
4. Window.onerror: GlobalEventHandlers.onerror

GitHub
5. FileSaver.js

JSFiddle
6. Save to disk example
7. Revealing Module Pattern




How to make the machine write the boring code


Have you ever had to write code for a lot of highly similar things, differing only in a few details? Have you ever wondered, while copy-pasting code from one class to another, and another fifty more, why you are doing this repetitive, error-prone and boring task instead of a machine? Machines are just fine doing this kind of work, so why aren't they doing it? In this article we will talk about how one can write a program which generates code based on an xml input file and flexible templates.

Code generation sounds like a topic for NASA engineers, not something for humble programmers in Cluj working in outsourcing. Imagine our surprise when, on a hot summer afternoon, our client announced that our next task was a code generator: one which can generate code based on an xml input file and on flexible templates, for multiple platforms and programming languages.

To understand the problem, let's look at an example. Imagine you have to develop a database application for a warehouse which stores various kinds of electrical devices. This is a classical "skin-for-database" type of application, with a lot of forms and, of course, a lot of DAL (Database Access Layer) classes. If this warehouse is small and they have only four different kinds of products, then writing these DAL classes is no big deal. But if it is big, storing all kinds of devices, ranging from smartphones to monitors and from routers to DVD players, and you have to create a table for each category, then this task may soon get very time consuming.

In the case of these many similar classes that differ only in small details, writing them is not the only problem. A small modification or a new feature means modifying all the DAL classes, which is very prone to copy-paste errors. In a scenario like this, having a code generator that generates all this code ensures that modifications and new features yield changes only to a few template files, and that refactoring the code is easy and fast.

A well-written code generator can be reused in many projects, making it a very useful tool in the future.

The architecture of the code generator

A code generator is composed of two major parts: the part which reads and parses the input and a part which parses the templates and generates the output. The input parser component passes the parsed data to the output writer in a well-defined format, thus any of the two parts may be changed as long as they adhere to the defined interfaces.
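As a sketch of this split, the two parts could meet at an interface like the following. All names here are hypothetical, invented for illustration; they are not taken from the generator described in the article.

```cpp
#include <string>
#include <vector>

// Hypothetical sketch of the two-part split: the input parser and the
// output writer meet only at the agreed data format, so either side can
// be replaced independently.
struct ParsedField {
    std::string name;
    std::string type;
};

struct ParsedCategory {
    std::string name;
    std::vector<ParsedField> fields;
};

// Part 1: reads and parses the input into the agreed format.
class InputParser {
public:
    virtual ~InputParser() = default;
    virtual std::vector<ParsedCategory> parse(const std::string& path) = 0;
};

// Part 2: fills templates from the parsed data and produces the output.
class OutputWriter {
public:
    virtual ~OutputWriter() = default;
    virtual std::string generate(const std::vector<ParsedCategory>& data) = 0;
};

// A trivial parser stub showing the handoff format.
class FixedParser : public InputParser {
public:
    std::vector<ParsedCategory> parse(const std::string&) override {
        return {ParsedCategory{"smartphone", {ParsedField{"vendor", "string"}}}};
    }
};
```

With this split, an xml parser, a JSON parser or the stub above are interchangeable from the writer's point of view.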



Reading the input

The input has to contain all the necessary information for the completion of the templates, in a structured, machine-parsable format. In this article, an xml format will be discussed. The input has to contain all the data that is intended to be used in the templates. Since in this case the templates will generate C++ classes with SQL snippets in them, the input has to contain all the necessary information about the categories that will be mapped to classes (and tables), and about all their fields. It is advisable to preserve the structure of the data as it is in the xml file, because it eases the writing of the templates. An xml input file for this particular case could look like this:

<?xml version="1.0"?>
<categories>
    <category name="smartphone">
        <field name="vendor" type="string" cpp-type="std::string" sql-type="VARCHAR(30)"/>
        <field name="os" type="string" cpp-type="std::string" sql-type="VARCHAR(10)"/>
        ...
    </category>
    <category name="monitor">
        <field name="vendor" type="string" cpp-type="std::string" sql-type="VARCHAR(30)"/>
        <field name="diagonal" type="number" cpp-type="int" sql-type="INTEGER"/>
        ...
    </category>
    ...
</categories>

This xml file in fact contains an array of categories, each category containing an array of its fields. Each field has a name, a type, and platform-specific types. Storing the platform-specific type in each field tag may not be expedient even with just two platforms. It may be much better to store just a platform-independent general type name in each field tag, like "string" or "number", and to store the type mapping in a different part of the xml; but for the sake of simplicity, in this example it is stored in the field tag. There is not much to say about parsing the xml file: there are several libraries and parsers readily available, so there is no need to write one.

The intermediate format

The parsed input data has to be stored in an intermediate format before and while being made available to the template engine. The details of this format are specific to the problem that needs to be solved; they depend on how general the code generator is, among other things. It should, however, comply with some criteria: • it should make it easy to look up categories, • it should make it easy to look up the fields of a category.
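One possible shape for such an intermediate format, mirroring the structure of the xml and satisfying the two lookup criteria, could look like this. The names (Field, Category, Model, the helper functions) are invented for illustration, not taken from the actual generator.

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical intermediate format: a list of categories, each holding
// its fields, with helpers for the two lookup criteria above.
struct Field {
    std::string name;
    std::string type;                                  // "string", "number", ...
    std::map<std::string, std::string> platformTypes;  // e.g. "cpp-type" -> "int"
};

struct Category {
    std::string name;
    std::vector<Field> fields;

    // criterion 2: easy lookup of a category's fields
    const Field* findField(const std::string& fieldName) const {
        for (const Field& f : fields)
            if (f.name == fieldName) return &f;
        return nullptr;
    }
};

struct Model {
    std::vector<Category> categories;

    // criterion 1: easy lookup of categories
    const Category* findCategory(const std::string& categoryName) const {
        for (const Category& c : categories)
            if (c.name == categoryName) return &c;
        return nullptr;
    }
};
```

Linear search is enough for a handful of categories; a map-based index would be the natural next step if the input grows.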

Generating the output

This is by far the trickiest part of the code generator: it is the most important and the most complex. If one wants to write a code generator which is flexible and reusable, then hardcoding parts of the result code, or any knowledge of the target language or platform, is a dead end. The best solution is templates.

Parsing the Templates

Templates are, in fact, partly written (in this case) C++ classes, with some sort of instruction on how to fill in the missing parts. The missing parts are the bits of code which depend on the input: the attributes of the categories and their fields. The most obvious example is the name of the class, which depends on the name of the category. In the template, this will appear as a placeholder which will be replaced by the actual name of the category during template processing. This will be called "variable substitution" throughout the article.

But in order to generate all the output, variable substitution alone won't be enough. For example, to generate the code that declares all the members corresponding to the category fields, some kind of loop is needed to iterate through the fields and generate the appropriate member declarations. Also, if one has to make decisions based on some values in the input, which is very probable, then some sort of boolean expression evaluation and branching is necessary. This is starting to look like a programming language, with variables, if-branches and loops. Writing a robust parser which is easily modifiable in case a new construct is added is no easy task. But before we dive into the details of a possible template parser implementation, let us see an example of a possible template syntax:

class <$category.name$>
{
public:
    <$category.name$>();
    ~<$category.name$>();
<@foreach <$field$> in <$category.fields$> @>
    <$field.cpp-type$> get<$field.name$>();
    void set<$field.name$>(const <$field.cpp-type$> &value);
<@endforeach@>
private:
<@foreach <$field$> in <$category.fields$> @>
    <$field.cpp-type$> <$field.name$>;
<@endforeach@>
}

This is a possible template for the class that represents a category. It contains variable substitutions and two loops. Variables are marked with the "<$" and "$>" tokens and the instructions with "<@" and "@>". These delimiter tokens make the template more human readable, although this might seem absurd at first. When this template gets parsed and completed, all variables get substituted by their values and everything that is in a loop gets evaluated in every iteration.

Although at first it looks like a plausible option to write a parser by hand, it is not a good idea, for several reasons. First, although this example is not very complicated, a real-life template might have many more features and thus a much more complicated syntax, with many more keywords and tokens. Writing a parser from scratch for a mini programming language is a time and energy consuming task with a lot of pitfalls that are not obvious. Second, there is no need to reinvent the wheel. There are well tested, proven parser generators that generate a standalone parser from a set of rules. The most well-known parser generator is the lex–yacc pair. Lex is a lexical analyzer generator, and yacc is a parser generator. Both lex and yacc have open source variants; for our project we used the GNU variants of these tools, flex and bison. Processing the templates and generating the output is done in three steps, as shown in the article's flow chart. I'm not going to go into every detail of each step; I will only talk briefly about each one and try to give a good picture of what is done in it.
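To make the "variable substitution" idea concrete, here is a deliberately naive sketch that replaces <$...$> placeholders from a lookup table using a regular expression. This is an illustration only: the real generator does not use regex tricks, it parses the template properly, as described in the surrounding sections.

```cpp
#include <map>
#include <regex>
#include <string>

// Naive variable substitution: replaces every <$name$> placeholder with
// its value from a lookup table; unknown placeholders are left as-is.
// Hypothetical sketch only; the article's generator uses flex/bison.
std::string substitute(const std::string& tmpl,
                       const std::map<std::string, std::string>& vars) {
    std::regex placeholder(R"(<\$([A-Za-z0-9_.\-]+)\$>)");
    std::string result;
    std::size_t last = 0;
    auto begin = std::sregex_iterator(tmpl.begin(), tmpl.end(), placeholder);
    auto end = std::sregex_iterator();
    for (auto it = begin; it != end; ++it) {
        result += tmpl.substr(last, it->position() - last);   // text before match
        auto found = vars.find((*it)[1].str());               // look up the name
        result += (found != vars.end()) ? found->second : it->str();
        last = it->position() + it->length();
    }
    result += tmpl.substr(last);                              // trailing text
    return result;
}
```

This handles only substitution; the loops and branches described above are exactly why a proper parser is needed instead.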

Breaking the input into tokens

In the first step the input is broken into tokens. This is what the C preprocessor does (among other things). What this means is that every group of characters that has some importance gets tagged, and unimportant characters are ignored. For example, the character group "<$" gets the tag "VAR_B" and together (the tag and its value) they form a token. A line of C++ code with no template keywords in it gets tagged as "text". Unimportant character groups are, for example, comments or, in some cases, whitespace. Tokenizing the input is what the lexical analyzer (in other words, the tokenizer) generated by flex does. Flex generates the tokenizer based on an input file. The input file contains the rules that describe what kinds of tokens this tokenizer recognizes and what they look like. The rules are composed of a regular expression and a piece of code that gets executed when the regular expression matches a piece of input. The tokenizer reads the input one character at a time and tries to match it against all the defined rules. As soon as a match is found, the longest possible match is selected and the appropriate piece of code is executed. The input to the tokenizer is the template itself and the output is the tokens. Flex generates a function, yylex(), which for each call returns the next


token in the template.
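Conceptually, the generated tokenizer behaves like the hand-rolled sketch below (illustration only; in the real project this code is produced by flex from declarative rules, and the token names here merely echo the "VAR_B" example above).

```cpp
#include <string>
#include <utility>
#include <vector>

// Token tags roughly matching the article's example: VAR_B for "<$",
// counterparts for the other delimiters, TEXT for everything else.
enum TokenTag { VAR_B, VAR_E, INSTR_B, INSTR_E, TEXT };
using Token = std::pair<TokenTag, std::string>;

// Greedy scan: at each position, try to match a known delimiter first,
// otherwise accumulate plain text. A flex rule set expresses the same
// thing declaratively as regular expressions with attached actions.
std::vector<Token> tokenize(const std::string& in) {
    static const std::vector<Token> delims = {
        {VAR_B, "<$"}, {VAR_E, "$>"}, {INSTR_B, "<@"}, {INSTR_E, "@>"}};
    std::vector<Token> out;
    std::string text;
    for (std::size_t i = 0; i < in.size();) {
        bool matched = false;
        for (const Token& d : delims) {
            if (in.compare(i, d.second.size(), d.second) == 0) {
                if (!text.empty()) { out.push_back({TEXT, text}); text.clear(); }
                out.push_back(d);
                i += d.second.size();
                matched = true;
                break;
            }
        }
        if (!matched) text += in[i++];  // ordinary character: part of a TEXT run
    }
    if (!text.empty()) out.push_back({TEXT, text});
    return out;
}
```

For the input `class <$name$>;` this yields five tokens: a TEXT run, VAR_B, the variable name as TEXT, VAR_E and a final TEXT run.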

Validating the grammar and building the abstract syntax tree

In this step the sequence of input tokens is validated against the grammatical rules that describe the template language. This is done by the parser generated by bison. Bison generates the parser from an input file which contains the grammatical rules that describe the language. The grammatical rules have to describe a LALR(1) context-free grammar. This grammar, although somewhat constrained in that it cannot handle ambiguities, is suitable for most languages, even for fairly complex ones such as Java.

LALR(1) languages are composed of terminal and nonterminal symbols. Terminal symbols are the ones that can be matched one-to-one to the input tokens. For example, in this template language the token <$ is a terminal, an atomic building block of the language. Nonterminal symbols are composed of multiple terminal and nonterminal symbols. The definition of a nonterminal symbol is called a rule of the grammar. A rule is a list of all the possible combinations of terminal and nonterminal symbols that form this nonterminal. For example, if we would like to define a nonterminal for the name of a variable, it would look something like this:

variable_name: word OR word.word

The variable_name nonterminal symbol is a composite building block of the language. It can be part of the definition list of other nonterminals to make up even more complex nonterminals, like a foreach loop. There is a special nonterminal that represents the whole language. This nonterminal is the topmost-level nonterminal of the language. All the valid combinations of the other building blocks of the language must be part of its definition. If an input is valid, that means that gradually, substituting the smaller building blocks with the larger ones, this master nonterminal can be substituted with all the input at the end.

The generated parser is a finite stack machine. The parser puts the received input tokens (terminals) onto a stack. This operation is called shift. Whenever appropriate, it substitutes them with nonterminals. This is called reduce. This is the same procedure as the one described above. If all goes well, then all the input is reduced to the master nonterminal and the language is valid.

During the validation of the grammar, the abstract syntax tree is also built. This is the most crucial part of the parsing procedure. The abstract syntax tree represents the whole template file in a tree form, where the important parts of the template are represented as tree nodes. To have an idea of how this tree looks, let's take a look at an example syntax tree. In this (overly simple) example tree there is some text, a foreach with some text in it, and a variable. How does this tree get built? Bison makes it possible to specify custom code that will be executed whenever a reduce happens. Thus it is possible to react to every event of the template parsing and create tree nodes when a new construct of the template is assembled. In our example, the tree would have a node for text, foreach and variable.

Bison generates the function yyparse(). This function calls yylex (the function generated by flex) internally to obtain the input. On successful parsing it will return 0, and another value on error. The user can specify a custom input parameter, such as a pointer to the root node of the syntax tree. Thus, after the parsing is done, the built abstract syntax tree representing the whole template can be retrieved.

Executing the instructions in the syntax tree

The user program has to know exactly what to do for each node to generate the output required by the specifications. For example, a variable node gets substituted by the looked-up value of the variable. Each node has to contain enough information for the user program to generate the expected output. In the aforementioned example, the variable node has to contain the name of the variable. To generate the output for the entire template, the abstract syntax tree has to be walked with the depth-first method.

A possible result for the given template and just one category could be:

class Smartphone
{
public:
    Smartphone();
    ~Smartphone();
    std::string getvendor();
    void setvendor(const std::string &value);
    std::string getos();
    void setos(const std::string &value);
private:
    std::string vendor;
    std::string os;
}
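The depth-first walk over the syntax tree can be sketched as follows. The node kinds mirror the example tree (text, variable, foreach); the names and the simplified variable resolution are invented for this sketch and are not the real generator's types.

```cpp
#include <memory>
#include <string>
#include <vector>

// Hypothetical AST node: plain text, a variable substitution, or a
// foreach whose children form the loop body.
struct Node {
    enum Kind { Text, Variable, Foreach } kind;
    std::string value;                           // text content or variable name
    std::vector<std::unique_ptr<Node>> children; // body of a foreach

    Node(Kind k, std::string v) : kind(k), value(std::move(v)) {}
};

// Depth-first evaluation. 'fields' stands in for the parsed input data;
// inside a foreach, the variable "field.name" resolves to the current field.
std::string emit(const Node& n, const std::vector<std::string>& fields,
                 const std::string& current = "") {
    switch (n.kind) {
    case Node::Text:
        return n.value;                                    // copied verbatim
    case Node::Variable:
        return n.value == "field.name" ? current : n.value; // substituted
    case Node::Foreach: {
        std::string out;
        for (const std::string& f : fields)                // one pass per field
            for (const auto& child : n.children)
                out += emit(*child, fields, f);
        return out;
    }
    }
    return "";
}
```

A foreach node whose body is `std::string <$field.name$>;` would expand, for the fields vendor and os, into the two member declarations seen in the generated class above.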


Writing a code generator is not a trivial task; it requires a large amount of effort, but in many cases this invested effort pays off with interest. Imagine that instead of writing and maintaining several dozens of files, you have to write and maintain only a few. I think it is not bold to say that a code generator can save you many workdays. I have addressed the main issues that we met while developing the code generator, with as many examples as possible. In our case, the decision to invest time in learning flex and bison proved to be very fruitful. The fact that they are well tested, proven tools meant that we didn't have to test and debug a custom parser, and we could focus on the definition of the structure of our language at a much higher level. Flex and bison were flexible enough to suit all our needs, and they are well documented too. Working with a parser generator has another advantage: in case of even large structural changes, or new features, only a few files and a minimal amount of code have to be modified. These advantages should sound familiar. Generating code for a code generator is awesome.

Dénes Botond Software engineer @ Accenture Romania



The HipHop Virtual Machine


These days, when it comes to huge popular websites, performance issues often come up, along with the question: "How can we support such a huge userbase at a reasonable price?". Facebook, one of the largest websites on the planet, seems to have a pretty good answer to this: the HipHop Virtual Machine. What is it, how was it born, how does it work and why is it better than the alternatives? I will try to answer all these questions in the following pages. The information presented here is scattered among many documents, interviews, blogs, wiki pages etc. In this article, I try to provide a big-picture view of this product. Keep in mind that some features and functionalities will only be touched upon, not covered in depth, and that others will be omitted entirely. If you want detailed information about a feature of HHVM, please read the project's documentation or its dedicated blog.

1. What is the HipHop Virtual Machine?

HipHop Virtual Machine (or HHVM) is a PHP execution engine. It was created by Facebook in a (successful) attempt to reduce the load on its webservers, which received more and more traffic as the number of users increased.

2. History

The history of HHVM starts in the summer of 2007 when Facebook started developing HPHPc. HPHPc worked in the following way: • it built an abstract syntax tree, which represented the logical structure of the PHP code • based on that tree, the PHP code was translated to C++ code • using g++, the C++ code was compiled to a binary file • that executable was then uploaded to the webservers where it was executed

In its glory days, HPHPc managed to outperform Zend PHP (the regular PHP platform) by up to 500%. These impressive results convinced Facebook's engineers that HPHPc was worth keeping around. They decided to give it a brother: HPHPi (HipHop interpreted). HPHPi is the developer-friendly version of HPHPc and, besides eliminating the compilation step, it offers various tools for programmers, among them a code debugger called HPHPd, static code analysis, performance profiling and many more. The two products (HPHPc and HPHPi) were developed and maintained in parallel in order to keep them compatible. The development of HPHPc took 2 years and, at the end of 2009, it ran on about 90% of Facebook's production servers. Performance was excellent, the load on the servers dropped significantly (50% in some cases) and everybody was happy. So, in February 2010, HPHPc's source code was open-sourced and published on GitHub under the PHP License. But Facebook's engineers realized that superior performance wasn't going to guarantee HPHPc's success in the long run. Here's why: • static compilation took a long time and was cumbersome • the resulting binary was well over 1 GB in size, which made deployment more difficult (new code had to be pushed to production daily) • developing both HPHPc and HPHPi

and keeping them compatible was getting more and more difficult, especially because the two used different syntax trees. So, at the beginning of 2010 (right after HPHPc became open source), Facebook put together a team that had to come up with a viable alternative to HPHPc, one that could be maintained for a long time. A stack-based virtual machine with a JIT compiler seemed to be the answer; the incarnation of this solution gave birth to the HipHop Virtual Machine (HHVM). At first, HHVM replaced only HPHPi and was used only for development, while HPHPc remained on the production servers. But, at the end of 2012, HHVM exceeded the performance of HPHPc and therefore, in February 2013, all of Facebook's production servers were switched to HHVM.

II. Architecture

HHVM's general architecture is made up of two webservers, a translator, a JIT compiler and a garbage collector. HHVM doesn't run on every operating system. More specifically:
• a lot of Linux flavours are supported, especially Ubuntu and CentOS
• on Mac OS X, HHVM can only run in interpreted mode (no JIT)
• there is no support for Windows


HHVM will only run on 64-bit operating systems. According to HHVM's developers, support for 32-bit operating systems will never be added.

1. How it works

The big picture view on how HHVM works is:
• based on the PHP code, an abstract syntax tree (AST) is built (the implementation of this step was reused from HPHPc)
• using the AST, the PHP code is translated to HHBC (HipHop bytecode)
• the bytecode is stored in a cache, so that the previous steps won't have to be repeated at each request
• if the JIT compiler is turned on, the bytecode is passed to it. The JIT compiler will then turn it into machine code, which will then be executed
• if the JIT compiler is turned off, then HHVM will run in interpreted mode and execute the bytecode directly. This will make it run more slowly, but it will still be faster than Zend PHP

Details about the steps listed above can be found in the following sections.

2. Caching the bytecode

HHVM keeps the bytecode (HHBC) in a cache that is implemented as an SQLite database. When HHVM receives a request, it needs to determine which PHP file to execute. After the file has been identified, it will check the cache to see if it has the bytecode of that file and if that bytecode is up-to-date. If the bytecode exists and is up-to-date, it will be executed. A bytecode that was executed at least once will be kept in RAM as well. If it doesn't exist, or if the file has been changed since the last time the bytecode was generated, then the PHP file will be recompiled, optimized and its new bytecode put in the cache. This procedure is basically identical to the way APC (Apache PHP cache) operates.

This behaviour also implies that, at a file's first execution, there's a significant warm-up period. The good news is that HHVM keeps the cache on disk, which means that, unlike APC's cache, it will survive if HHVM or the physical webserver is restarted. So the warm-up period won't have to be repeated.

Even so, the warm-up period can be completely bypassed. This can be done by performing a so-called pre-analysis, which means that the cache can be pre-generated before starting HHVM. After installing HHVM, there will be an extra executable with the help of which the bytecode for the entire source code can be generated and added to the cache. This way, when HHVM starts, the cache is already full and ready to go.

Keep in mind that the cache keys also contain HHVM's build ID. So, if you upgrade or downgrade HHVM, the contents of the cache will be lost and will have to be re-generated.

3. The RepoAuthoritative mode

An interesting way to run HHVM is the RepoAuthoritative mode. As I said in the section about the cache, HHVM will check at each request if the PHP file changed since its last compilation. This translates to disk IO operations which, as we all know, are computationally expensive. They only take a fraction of a second, but it's a fraction of a second that we don't really have when we try to serve thousands of requests per minute.

When the RepoAuthoritative mode is activated, HHVM won't check the PHP file anymore; instead, it will directly retrieve the bytecode from the cache. The name of this mode comes from the fact that, in HHVM's terminology, the cache is called a "repo" and this "repo" becomes the authoritative source of the code. The RepoAuthoritative mode can be activated by adding the following instruction in HHVM's configuration file:

Repo {
  Authoritative = true
}

Some precaution must be taken with this mode because, if a file's bytecode is missing from the cache, the client will get an HTTP 404 error, even though the PHP file is right there and could be executed without a problem. The RepoAuthoritative mode is recommended for production servers for two reasons:
• eliminating most disk IO operations improves performance
• there is good certainty that the PHP files won't change

4. The JIT compiler

The bytecode is an intermediate form of code that is not tied to a specific CPU



or platform; it is by definition portable. When executed, this bytecode is transformed into machine code in a process called Just-In-Time (JIT) compilation. The compiler may transform only a piece of code, a function, a class or even an entire file. Generally, there are three characteristics of JIT compilers that are important for their efficiency:
• the generated machine code is cached in RAM in order to avoid compiling it over and over again
• the JIT compiler looks for so-called "hot spots": loops in the code where the program spends most of its time. The most aggressive optimization is applied to these hot spots
• the generated machine code is usually targeted at the platform on which the JIT compiler runs at that moment. For example, if it detects that the CPU supports the SSE3 instruction set, then the compiler will use this feature. Because of this, JIT compilers can sometimes achieve greater performance than static/classic compilers.

HHVM's JIT compiler is its crown jewel, the module responsible for all its success. While the JIT compiler in Java's virtual machine uses methods as its basic compilation block, HHVM's JIT compiler uses so-called tracelets. A tracelet is usually a loop because, according to research, most programs spend most of their time in a loop whose iterations are very similar and, therefore, have identical execution paths. A tracelet is made up of three parts:
• type guard(s): prevent execution on input data of an incorrect type (e.g. an integer is expected, but a boolean is provided)
• body: the actual instructions
• linkage to subsequent tracelets

Each tracelet has great freedom as far as execution is concerned, but it needs to leave the virtual machine in a consistent state when the execution is finished. A tracelet has only one execution path: instruction after instruction after instruction. There are no branches or control flow. This makes tracelets easy to optimize.
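The three parts of a tracelet can be pictured with a small sketch. This is an invented illustration, not HHVM's actual internal representation: the guard checks input types before the straight-line body runs, and the linkage points to the tracelets that may execute afterwards.

```cpp
#include <functional>
#include <vector>

// Hypothetical picture of a tracelet's three parts (not HHVM code).
struct Tracelet {
    std::function<bool()> typeGuard;   // e.g. "is this input really an integer?"
    std::function<void()> body;        // one branch-free instruction sequence
    std::vector<const Tracelet*> next; // linkage to subsequent tracelets

    // Run the body only if the guard accepts the input types; a failed
    // guard means falling back to another translation of the same code.
    bool run() const {
        if (!typeGuard()) return false;
        body();
        return true;
    }
};
```

The key property the sketch captures is that the body only ever runs under the types the guard verified, which is what lets the compiler optimize it aggressively.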

5. Garbage collector

In most modern languages, the programmer doesn't have to do any memory management. Gone are the days when you had to deal with pointer arithmetic. The way a virtual machine (including HHVM) does memory management is called garbage collection. Garbage collectors are split into two main categories:
• based on refcounting: for each object, there is a count that constantly keeps track of how many references point to it. When the number drops to zero for an object, it is deleted from memory
• based on tracing: periodically during execution, the garbage collector scans each object and, for each one, determines if it's reachable. Unreachable objects are then deleted from memory

Most garbage collectors are some sort of hybrid of the two approaches mentioned above, but one is always dominant. Garbage collectors based on tracing are more efficient, have higher throughput and are easier to implement. This type of garbage collector was intended for HHVM but, as PHP requires a refcounting-based garbage collector, Facebook's engineers were forced to temporarily drop the idea. PHP needs a refcounting garbage collector because:
• classes can have a destructor. It must be called when the object becomes garbage, not when it's collected. If a tracing-based garbage collector were used, there would be no way to know when the object became garbage
• the array copy mechanism requires keeping an up-to-date number of references to such data types

HHVM's engineers really want to switch to a tracing-based garbage collector. They even tried it at some point but, because of the restrictions mentioned above, the code got very complicated and somewhat slower, so they dropped it. There is still a chance that this plan will be carried to completion in the following years. One last point: HHVM has no cycle collector (object A references object B, which references object A). There is one present in the source code, but it's inactive.
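To make the refcounting approach concrete, here is a minimal sketch (an invented class, not HHVM code) showing the property PHP semantics depend on: the destructor runs at the exact moment the last reference disappears, not at some later collection pass.

```cpp
// Minimal refcounting illustration (not HHVM code). The object counts
// its references; when the count drops to zero, the destructor fires
// immediately, which is the guarantee PHP destructors rely on.
class Value {
public:
    explicit Value(bool* destroyed) : destroyed_(destroyed) {}
    void incRef() { ++count_; }
    void decRef() {
        if (--count_ == 0) delete this;   // destructor runs right now
    }
private:
    ~Value() { *destroyed_ = true; }      // analogue of a PHP __destruct
    int count_ = 0;
    bool* destroyed_;                     // lets callers observe destruction
};
```

A tracing collector would only discover such an object as garbage on a later scan, which is exactly why PHP's destructor semantics force the refcounting scheme on HHVM.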

6. How to write HHVM-friendly code

HHVM works best when it knows a lot of static detail about the code before running it. Given that most of Facebook's codebase is written in an object-oriented style, HHVM will deal best with that kind of code. More specifically, it's advisable to avoid:
• dynamic access of functions or

variables: $function_name(), $a = $$x +1 • functions that access or modify global variables: compact(), get_defined_vars(), extract(), get_object_vars() etc. • dynamic access of an object’s fields: echo $obj->myvar (where myvar is not declared in the class). If you need that field, declare it in the class. Otherwise, accessing it would be slower because it would mean doing hash table lookups • functions with the same name, even if they’re in different files or classes and will never interact. If their name is unique, then HHVM will be able to determine much easier which parameters are passed by value and which by reference, what return type the function has etc. This last point is only applicable when running in RepoAuthoritative mode. If possible, provide: • type hinting for function parameters • the return type of a function. This can be done by making it obvious: return ($x == 0); Also, it’s good to know that code in global scope will never be passed to the JIT compiler. This is because global variables can be changed from anywhere else. An example taken from HHVM’s blog: class B { public function __toString() { $GLOBALS[‘a’] = ‘I am a string now’; } } $a = 5; $b = new B(); echo $b;

The variables $a and $b are in global scope. When echo $b is called, the __toString method of class B is called. This changes $a's type from integer to string. If $a and $b were JIT-ed, the compiler would become very confused about the type and content of $a. Therefore, it's best to put all the code in classes and functions.

III. Features 1. The administration server As I said in the chapter about architecture, HHVM has two webservers. One of them is the webserver that serves regular HTTP traffic through port 80. The second one is called AdminServer and it provides

access to some administrative operations. To turn AdminServer on, just add the following lines to HHVM's configuration file:

AdminServer {
    Port = 9191
    ThreadCount = 1
    Password = mypasswordhaha
}

The AdminServer can then be accessed at the following URL: http://localhost:9191/check-health?auth=mypasswordhaha

The “check-health” option above is just one of many options supported by AdminServer. Other options allow viewing statistics about traffic, queries, memcache, CPU load, number of active threads and many more. You can even shut down the main webserver from here.

2. FastCGI
One of the most awaited features is support for FastCGI. This was added in version 2.3.0 of HHVM (December 2013). FastCGI is a communication protocol supported by the most popular webservers, like Apache or nginx. Support for this protocol means that there's no need to use HHVM's webserver anymore; instead you can use, for example, Apache as the webserver and let HHVM do what it does best: execute PHP code at lightning speed.
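To make the setup above concrete: assuming HHVM is listening for FastCGI connections on port 9000 (the values below, including the port, are illustrative and depend on your installation), a minimal nginx configuration sketch could look like this:

```nginx
location ~ \.php$ {
    # Forward all PHP requests to the HHVM FastCGI daemon
    fastcgi_pass   127.0.0.1:9000;
    fastcgi_index  index.php;
    fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include        fastcgi_params;
}
```

With this in place, nginx serves static files itself and hands every .php request over to HHVM.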

This feature is crucial because it will ensure HHVM’s popularity.

3. Extensions
HHVM has experimental support for extensions originally written for Zend PHP. This support is accomplished using a so-called PHP Extension Compatibility Layer. The goal of this layer is to provide access to the same API and macros as PHP does; otherwise, the extensions won't work. For a PHP extension to work, it needs to be recompiled using HHVM's implementation of that particular API. Also, it must be compiled as C++, not C. This will usually result in some small but necessary changes to the extension's source code. HHVM also supports 3rd-party extensions, just like Zend PHP. They can be written in PHP or C++. The extension is then added to HHVM's source code and the entire HHVM has to be recompiled. The new HHVM binary will contain
| no. 21/March, 2014


the new extension, and its functionality can then be accessed in code that runs on HHVM. HHVM already includes the most popular extensions, like MySQL, PDO, cURL, PHAR, XML, SimpleXML, JSON, memcache and many more. For now, the MySQLi extension is not included.

In the real world, a number of websites are known to use HHVM.

IV. Parity

Many of the sites that use HHVM will send an X-Powered-By: HPHP header as a response.

HHVM seems to be a solid, robust and fast product. But none of this would matter if HHVM weren't able to run real world code. HHVM's engineers measure this ability by running on it the unit-tests of the 20 most popular PHP frameworks and applications, like Symfony, phpBB, PHPUnit, Magento, CodeIgniter, phpMyAdmin and many more.

(image from HHVM's blog, December 2013)

The numbers next to each application represent the percentage of unit-tests that pass for that application. HHVM's engineers constantly improve these percentages with every release and hope to reach 100% on all of them within a year.

V. Conclusion
In conclusion, HHVM is a revolutionary, robust, fast product that constantly evolves and, thanks to its FastCGI support, is spreading rapidly. Many concepts and solutions used in HPHPc and HHVM have been reintroduced into Zend PHP. This is the reason why there are such big performance improvements from PHP 5.2 to PHP 5.5.

Radu Murzea
PHP Developer @ Pentalog





In the previous issue, I presented Restricted Boltzmann Machines, which were introduced by Geoffrey Hinton, a professor at the University of Toronto, in 2006, as a method for speeding up neural network training. In 2007, Yoshua Bengio, professor at the University of Montreal, presented an alternative to RBMs: autoencoders.

Autoencoders are neural networks that learn to compress and process their input data. After the processing that they do, the most relevant features of the input data are extracted, and they can be used to solve our machine learning problem, such as recognizing objects in images more easily. Usually autoencoders have at least 3 layers:
• an input layer (if we work with images, this will correspond to the pixels in an image)
• one or more hidden layers, that do the actual processing
• an output layer, set to be equal to the input one.

Fig.1 - An autoencoder with 3 layers, and with 6, 3 and 6 neurons on each layer
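The forward pass of a small 6-3-6 autoencoder like the one in the figure can be sketched in a few lines of plain Python (the weights here are random and untrained, the activations are tanh, and all names are illustrative):

```python
import math
import random

random.seed(0)

n_in, n_hid = 6, 3
# Encoder and decoder weights (W, W') and biases (b_h, b_y), randomly initialized
W   = [[random.uniform(-0.5, 0.5) for _ in range(n_in)]  for _ in range(n_hid)]
Wp  = [[random.uniform(-0.5, 0.5) for _ in range(n_hid)] for _ in range(n_in)]
b_h = [0.0] * n_hid
b_y = [0.0] * n_in

def encode(x):
    # h = s_f(W x + b_h), with s_f = tanh
    return [math.tanh(sum(W[j][i] * x[i] for i in range(n_in)) + b_h[j])
            for j in range(n_hid)]

def decode(h):
    # y = s_g(W' h + b_y), with s_g = tanh
    return [math.tanh(sum(Wp[i][j] * h[j] for j in range(n_hid)) + b_y[i])
            for i in range(n_in)]

x = [0.1, 0.4, -0.2, 0.3, 0.0, -0.5]
h = encode(x)   # 3-value compressed representation
y = decode(h)   # 6-value reconstruction
error = sum((xi - yi) ** 2 for xi, yi in zip(x, y))  # L(x, y) = ||x - y||^2

assert len(h) == 3 and len(y) == 6 and error >= 0.0
```

Training would then adjust W, W' and the biases to drive this reconstruction error down over the whole dataset.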

The values of the output layer in an autoencoder are set to be equal to the values of the input layer (y = x). The simplest function that does this "transformation" is the identity function, f(x) = x. For example, if we have an input layer with 100 neurons, but on the middle layer we have only 50 neurons, then the neural network will have to compress the input data, because it can't memorize 100 uncompressed values with only 50 neurons. If our input data is completely random, this is not possible and there will be errors in reproducing the input data. However, if our data has some structure, for example it represents the pixels of a 10x10 image, then we can get a more compressed representation of the pixels. If a pixel in an image is green, the 8 surrounding pixels are usually also greenish. If there are pixels of the same color on a circle, instead of remembering the position of each pixel (for a circle of radius 4 this would mean about 25 values), it is enough to know that at position x, y we have a circle of radius r, which can be stored with only 3 values. Of course, such a level of compression won't be achieved using only one layer. But if we stack several autoencoders, as we did with Restricted Boltzmann Machines to get Deep Belief Networks, we will get more and more compact features.

If we treat each layer as a function that receives a vector as input and returns a processed one, then in a 3 layer autoencoder we have 2 functions (the first layer, the visible one, just sends its inputs to the next layer). The first function does the encoding of the input:

h = f(x) = sf(Wx + bh)

where sf is a nonlinear activation function used by the hidden layer, W represents the weights of the connections between the visible layer and the hidden one, while bh is the bias unit of the hidden layer. The second function decodes the data:

y = g(h) = sg(W'h + by)

where the constants have a similar meaning, but this time they are between the hidden layer and the output layer. The combination of the two functions should be the identity function, but if we take only the encoding function, we can use it to process our data and get a higher level representation. To quantify the error we make with regard to the identity function, we can use the L2-norm of the difference:

L(x, y) = ||x - y||2

The parameters of the autoencoder are chosen so as to minimize this value.

The model presented above is a classical autoencoder, where the learning of relevant features is done by compressing the data, because the hidden layer has fewer neurons than the input one. There are other types of autoencoders, some of them even with more neurons on the hidden layer than on the input layer, but which avoid memorization using other methods.

Sparse autoencoders impose the constraint that each neuron should be activated as rarely as possible. The hidden layer neurons are activated when the features they represent are present in the input data. If each neuron is rarely activated, then each one represents a distinct feature. In practice this is done by adding a penalty term for the activations of the neurons on the input data:

β Σj KL(ρ || ρj)

where β controls the amount of penalty for an activation, ρj is the average activation of neuron j, and ρ is the sparsity parameter, which represents how often we want each neuron to be activated. Usually it has a value below 0.1.

Denoising autoencoders have another approach. While training the network, some of the input data is corrupted, either by adding gaussian noise, or by applying a binary mask that sets some values to 0 and leaves the rest unchanged. Because the result of the output layer is still the original, correct values, the neural network must learn to reproduce the correct values from the corrupt ones. To do this it must learn the correlations between the values of the input data and use them to figure out how to correct the distortions. The training process and the cost function are unchanged.

Another variant of autoencoders are contractive autoencoders. These try to learn efficient features by penalizing the sensibility of the network towards its inputs. Sensibility represents how much the output changes when the input changes a little. The smaller the sensibility, the more similar the features that similar inputs will give. For example, let's imagine the task of recognizing handwritten digits. Some people draw the 0 in an elongated way, others round it out, but the differences are small, of only a couple of pixels. We would want our network to learn the same features for all 0s. Penalizing the sensibility is done with the Frobenius norm of the Jacobian of the function between the input layer and the hidden one:

||Jf(x)||F2 = Σij (∂hj(x)/∂xi)2

All these models can be combined, of course. We can impose both sparsity constraints and corrupt the input data. Which of these techniques is better depends a lot on the nature of the data, so you must experiment with various kinds of autoencoders to see which gives you the best results. There are other types of autoencoders, such as Transforming Autoencoders or Saturating Autoencoders, but I won't give more details about them now; instead, I will show you how to use autoencoders with Pylearn2, a Python library developed by the LISA research group from the University of Montreal. This library was used to develop several state of the art results, especially related to image classification. In Pylearn2, deep learning models can be configured using YAML files, which are then executed by the runner from the library, which does the training and then saves our neural network into a pickle file.

!obj:pylearn2.train.Train {
    dataset: !pkl: "cifar10_preprocessed_train.pkl",
    model: !obj:pylearn2.models.autoencoder.ContractiveAutoencoder {
        nvis : 192,
        nhid : 400,
        irange : 0.05,
        act_enc: 'tanh',
        act_dec: 'tanh',
    },
    algorithm: !obj:pylearn2.training_algorithms.sgd.SGD {
        learning_rate : 1e-3,
        batch_size : 500,
        monitoring_batches : 100,
        monitoring_dataset : !pkl: "cifar10_preprocessed_train.pkl",
        cost : !obj:pylearn2.costs.autoencoder.MeanSquaredReconstructionError {},
        termination_criterion : !obj:pylearn2.termination_criteria.MonitorBased {
            prop_decrease : 0.001,
            N : 10,
        },
    },
    extensions : [!obj:pylearn2.training_algorithms.sgd.MonitorBasedLRAdjuster {}],
    save_freq : 1
}

Fig. 2 - Filters learned by a Contractive Autoencoder on CIFAR-10

Autoencoders represent a way of taking a dataset and transforming it in such a way as to obtain a more compressed and more discriminative data set. Using the latter we can then solve our problem in an easier manner, be it a regression, clustering or classification problem.

In the next article, I will start presenting the improvements used in Deep Learning with regard to layers, and we will see several kinds of layer types.
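The sparsity penalty discussed above can be computed directly. A small sketch, assuming the commonly used Kullback-Leibler form of the penalty, β Σj KL(ρ || ρj) (all names are illustrative):

```python
import math

def kl(rho, rho_j):
    # KL divergence between a Bernoulli(rho) and a Bernoulli(rho_j) distribution
    return (rho * math.log(rho / rho_j)
            + (1 - rho) * math.log((1 - rho) / (1 - rho_j)))

def sparsity_penalty(avg_activations, rho=0.05, beta=3.0):
    # beta * sum_j KL(rho || rho_j): grows as neurons fire more often than rho
    return beta * sum(kl(rho, rho_j) for rho_j in avg_activations)

# A neuron that fires exactly as often as requested contributes nothing...
assert sparsity_penalty([0.05]) == 0.0
# ...while neurons that fire too often are penalized more heavily.
assert sparsity_penalty([0.5, 0.6]) > sparsity_penalty([0.1, 0.1])
```

During training this term is simply added to the reconstruction cost, so the optimizer trades reconstruction accuracy against rarely-firing hidden units.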



Roland Szabo
Junior Python Developer @ 3 Pillar Global



AOP using .NET stack


In the following lines we will talk about AOP and how we can implement our own AOP stack using core .NET features. First of all, let us see what AOP means. The acronym comes from Aspect Oriented Programming and it is another programming paradigm, with the main goal of increasing the modularity of an application. AOP tries to achieve this goal by allowing the separation of crosscutting concerns. Each part of the application is broken down into distinct units based on functionality (concerns). Of course, even if we don't use this paradigm, we will try to have a clear separation of functionalities. But in AOP all the functionalities are separated, even those that in OOP we accept to be crosscutting. A good example for this case is audit and logging. Normally, if we are using OOP to develop an application that needs logging or audit, we will have, in one form or another, different calls to our logging mechanism in our code. In OOP this can be accepted, because this is the only way to write logs, to do profiling and so on. When we are using AOP, the implementation of the logging or audit system will need to be in a separate module. Not only this, but we will need a way to write the logging information without writing, in the other modules, the code that makes the call to the logging system. There are different options that can be used in AOP to resolve this problem. It depends on what kind of technology you are using, what kind of stack you want to use and so on. We could use attributes (annotations) on methods and classes that would activate logging or audit. Another approach is to configure proxies that would be used by the system as the real implementation of a functionality. This proxy would be able to write the logs and make the call to the base implementation.
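The rest of the article works in .NET, but the core idea of pulling the logging concern out of the business code is language-neutral. As a minimal illustration (all names here are hypothetical), a Python decorator can play the role of the advice wrapped around a business method:

```python
import functools

log = []  # the logging concern lives here, not in the business code

def logged(func):
    # "Advice" that runs around every call to the decorated "join point"
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        log.append("enter %s" % func.__name__)
        result = func(*args, **kwargs)
        log.append("exit %s" % func.__name__)
        return result
    return wrapper

@logged
def transfer(amount):
    # Pure business logic: no logging calls in sight
    return amount * 2

assert transfer(21) == 42
assert log == ["enter transfer", "exit transfer"]
```

The business function never mentions the log; the crosscutting concern is attached from the outside, which is exactly what the .NET proxies below do at a larger scale.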

Interception
Almost all the frameworks that are available for AOP are built around interception. Using interception, developers can specify what methods need to be intercepted and what should happen in this case. Through interception we can even add new fields, properties or methods, or even change the current implementation.

How is it implemented?

Usually there are two ways offered by the different frameworks that provide AOP support: at runtime and at compile time. When the framework offers AOP support at runtime, it means that it will create different proxies on the fly that redirect our calls. This gives us a lot of flexibility, as we are able to change the behavior of our application at runtime. The other approach is at compile time. This type of framework is usually integrated with our IDE and development environment. At compile time, it will change our code and insert the calls to our different features. In the end, we will end up with the same code as if we had written the calls in our methods ourselves, but the code that we need to maintain is simpler, cleaner and with all the concerns clearly separated.

Of course, all this comes with a cost that can normally be felt in two places especially – time and debugging. When the AOP framework uses proxies at runtime, the performance of our application can decrease. This is happening because there is a man in the middle that intercepts our calls and takes some actions. At this level, reflection is usually used, and all of us know that reflection costs a lot from the processor's perspective. Having code that changes at compile time means that we can have some problems when we need to debug our code and find an issue. This is happening because at runtime you don't end up with only your code; you end up with your original code from the base concern, plus the code from the second concern that you needed in the base concern, plus the AOP framework code that makes the redirection. As we can see in the above diagram, only the first and the last steps are part of the normal flow. The rest of them are added by the AOP framework.

Base components of AOP

Join Points
A join point is represented by the point in the code where we need to run our advice. This is a point in the code where you can make a call to another feature in a very simple way. In general, frameworks that support AOP use methods, classes and properties as their main join points. For example, before and/or during the call of a method you can execute your custom code. The same thing can happen for classes and properties. In general, you will discover that the concepts of AOP are very simple, but pretty hard to implement while keeping a clean and simple code.

Pointcuts
A pointcut defines a way to specify a join point in your system. It gives the developer the possibility to specify and identify a join point in the system where we want to make a specific call. This can be accomplished in different ways, from attributes (annotations) to different configurations (in files, in code and many more).

Advice
Advice refers to the code that we want to execute when a join point is reached. Basically, this is the code that is called around a join point. For example, before a method is called, we want to write some information to the trace.

Aspect
An aspect is formed from two different items – the pointcut and the advice. The combination of these two items forms an aspect. In general, when we want to use AOP, we have a location where we want to execute the code and the custom code itself that we want to be run.

.NET Stacks

There are different ways to implement AOP in .NET. On the market, you will find a lot of frameworks that offer this support, a part of them free.

Unity
Unity provides support for AOP in some parts of the application. In general, Unity offers support for the most common scenarios, like exception handling, logging, security or data access.

PostSharp
This is one of the best known frameworks in the .NET world for AOP. It is not a free framework, but it is full of useful features and 100% integrated with the development environment.

Aspect.NET
This is a free tool that can be found on Codeplex. It offers the base features needed to develop an application using the AOP paradigm.

Enterprise Library
This library also offers support for AOP, with different capabilities like authorization, exception handling, validation, logging and performance counters. It is pretty similar to the features that are offered with Unity.

AspectSharp
This is another stack similar to Aspect.NET. In both cases, you should know that you have access only to the base features of the AOP paradigm.

Castle Project – Dynamic Proxy
This is another stack similar to Aspect.NET. In both cases, you should know that you have access only to the base features of the AOP paradigm.

From my perspective, the tool that offers all the features that are needed when you want to use AOP is PostSharp. But, based on your needs, you should always try to identify the tool that satisfies them.

What does .NET offer?

The good news is that we can use AOP without any kind of tool. It may be more complicated, but what you can accomplish using the .NET API is pretty interesting. In the following part of the article we will see how we can use the RealProxy class to intercept method calls and inject custom behavior. The RealProxy class can be found in the core .NET stack. The most important method of RealProxy is "Invoke". This method is called each time a method from your specific class is called. From it you can access the method name and parameters and call your real method or a fake one. Before saying "Wow, it's so cool!" you should know that this will work only when you also use interfaces.

In the following example we will see how we can implement a custom profiling mechanism using the RealProxy class. The first step is to create a custom attribute, which accepts a custom message that will be written when we write the duration time to the trace.

public class DurationProfillingAttribute : Attribute
{
    public DurationProfillingAttribute(string message)
    {
        Message = message;
    }

    public DurationProfillingAttribute()
    {
        Message = string.Empty;
    }

    public string Message { get; set; }
}

Next we need a generic class that extends RealProxy and calculates the duration of the call. In the Invoke method we will need to use a Stopwatch that calculates how long a call takes. At this level we can check whether a specific method is decorated with our attribute.

public class DurationProfilingDynamicProxy<T> : RealProxy
{
    private readonly T _decorated;

    public DurationProfilingDynamicProxy(T decorated)
        : base(typeof(T))
    {
        _decorated = decorated;
    }

    public override IMessage Invoke(IMessage msg)
    {
        IMethodCallMessage methodCall = (IMethodCallMessage)msg;
        MethodInfo methodInfo = methodCall.MethodBase as MethodInfo;
        DurationProfillingAttribute profillingAttribute =
            (DurationProfillingAttribute)methodInfo
                .GetCustomAttributes(typeof(DurationProfillingAttribute))
                .FirstOrDefault();

        // The method doesn't need to be measured.
        if (profillingAttribute == null)
        {
            return NormalInvoke(methodInfo, methodCall);
        }

        return ProfiledInvoke(methodInfo, methodCall, profillingAttribute.Message);
    }

    private IMessage ProfiledInvoke(MethodInfo methodInfo, IMethodCallMessage methodCall, string profiledMessage)
    {
        Stopwatch stopWatch = null;
        try
        {
            stopWatch = Stopwatch.StartNew();
            var result = InvokeMethod(methodInfo, methodCall);
            stopWatch.Stop();

            WriteMessage(profiledMessage, methodInfo.DeclaringType.FullName,
                methodInfo.Name, stopWatch.Elapsed);

            return new ReturnMessage(result, null, 0, methodCall.LogicalCallContext, methodCall);
        }
        catch (Exception e)
        {
            if (stopWatch != null && stopWatch.IsRunning)
            {
                stopWatch.Stop();
            }
            return new ReturnMessage(e, methodCall);
        }
    }

    private IMessage NormalInvoke(MethodInfo methodInfo, IMethodCallMessage methodCall)
    {
        try
        {
            var result = InvokeMethod(methodInfo, methodCall);
            return new ReturnMessage(result, null, 0, methodCall.LogicalCallContext, methodCall);
        }
        catch (Exception e)
        {
            return new ReturnMessage(e, methodCall);
        }
    }

    private object InvokeMethod(MethodInfo methodInfo, IMethodCallMessage methodCall)
    {
        object result = methodInfo.Invoke(_decorated, methodCall.InArgs);
        return result;
    }

    private void WriteMessage(string message, string className, string methodName, TimeSpan elapsedTime)
    {
        Trace.WriteLine(string.Format("Duration Profiling: '{0}' for '{1}.{2}' Duration: '{3}'",
            message, className, methodName, elapsedTime));
    }
}

We could have another approach here, calculating the duration for all the methods of the class. You can find below the classes used to test the implementation. Using the "GetTransparentProxy" method we can obtain a reference to our interface.

public interface IFoo
{
    [DurationProfilling("Some text")]
    DateTime GetCurrentTime();

    [DurationProfilling]
    string Concat(string a, string b);

    [DurationProfilling("After 2 seconds")]
    void LongRunning();

    string NoProfiling();
}

public class Foo : IFoo
{
    public DateTime GetCurrentTime()
    {
        return DateTime.UtcNow;
    }

    public string Concat(string a, string b)
    {
        return a + b;
    }

    public void LongRunning()
    {
        Thread.Sleep(TimeSpan.FromSeconds(2));
    }

    public string NoProfiling()
    {
        return "NoProfiling";
    }
}

class Program
{
    static void Main(string[] args)
    {
        DurationProfilingDynamicProxy<IFoo> fooDurationProfiling =
            new DurationProfilingDynamicProxy<IFoo>(new Foo());
        IFoo foo = (IFoo)fooDurationProfiling.GetTransparentProxy();

        foo.GetCurrentTime();
        foo.Concat("A", "B");
        foo.LongRunning();
        foo.NoProfiling();
    }
}

In conclusion, we can say that AOP can make our life easier. This doesn't mean that we will now use it in all our projects. AOP should be used only where it makes sense, and it is usually very useful when we are working on a complicated project with a lot of modules and functionality. In the next article we will discover how we can use Unity to implement AOP.

Radu Vunvulea
Senior Software Engineer @ iQuest



Discipline in Agile projects


At a time when market pressure and the need for increased competitiveness in all economic areas are becoming increasingly stringent, the IT industry has its own particular place and plays a very important role. It can make the difference between the companies which are going to be sustainable and successful in the coming decade, by investing in innovation, research and process efficiency, and those which are settling for what they are doing at the moment, thus risking disappearing from the market. Against this very dynamic background, in which the only certainty is continuous change and adaptation to new market trends, the challenge for the IT field is to carry out its activities in the most efficient manner, with deliverables reaching the market as fast as possible and with the best quality, thus maximizing the shareholders' investment.

88% of software companies opt for Agile

Obviously, from the point of view of the full life cycle of a software solution delivery, an agile approach may seem the best choice in most situations that aim at the benefits described above. A recent study (by Agile Journal) shows that over 88% of the assessed software companies (many with over 10,000 employees) are already using or intend to use agile practices for the projects they are implementing. What usually happens in most of the companies is that they begin their agile experience by adopting the Scrum-related practices – which describe a very useful strategy for managing the activity of a development team. Unfortunately, Scrum represents but a small part of the delivery of a final software solution. What happens in most cases is that the teams start looking left and right, sometimes by their own efforts, sometimes with the help of consultancy companies, in order to fill the gaps that Scrum ignores by default, the ideas searched for being mostly connected to the building aspect. It may happen that the multitude of


existing sources and the terminology used create more confusion rather than leading to the expected results. More than that, not even the IT professionals know exactly where to look for advice useful for their situation and which problems should be handled with priority. For example, Scrum talks about the existence of a Product / Scrum backlog, and about how these are handled during a sprint. Nevertheless, the question is: where does this Product backlog come from? Does it appear out of the blue in the project? Of course not; it is the result of a session in which the initial requirements are identified, requirements which represent one of the main objectives that we must reach in the initial phase of a project.

Agile, but in a structured manner

The solution we are implementing at Yonder is one focused on an agile approach, disciplined by offering procedures / advice adapted for each client / project in turn. Discipline consists, on one hand, in a sequential and iterative approach of the project (where one aims at reaching certain objectives throughout the entire lifecycle of the solution delivery) and, on the other hand, in offering concrete advice for reaching each objective (according to the project context). This is only possible by adopting a number of strategies – Scrum, XP, Agile Modeling, RUP, Lean/Kanban, DevOps – in a structured manner, and it avoids a prescriptive application of well-known recipes – because there is always the risk that these might not match the project context; instead, we focus on reaching certain objectives.

The basic idea is pretty simple: we have to deal with an approach based on concrete phases (Inception, Construction, Transition), and within each phase there is a series of well-defined objectives which must be reached. For instance, one of the objectives of the Inception phase is identifying the initial (functional) purpose of the application. The structured approach is based on the description of several aspects or processes which enable the fulfilment of this objective. For instance, in the case above, the aspects / processes that must be considered are the following:
• level of detail of the initial requirements (requirements envisioning, big requirements up front)
• visualization strategies (usage modeling, domain modeling, process modeling)
• modelling strategies (formal, informal, interviews)
• work elements management strategies (scrum product backlog, work item pool, work item list, formal change management)
• manner of approaching the non-functional requirements (acceptance criteria, explicit list, technical stories).

According to the context we are in, we will choose for each process the version that best suits the current needs of the project – if the project is one where we need to provide support as well, we can choose a work item pool as the requirements management strategy instead of a scrum product backlog. In order to make sure that these objectives are fulfilled, we introduced milestones all throughout the duration of the project. A real-life example of such a milestone is the ending of the Inception phase, where we check for consensus of all parties regarding the initial purpose of the application.

One of the greatest challenges of this approach is, on the one hand, to make available as many procedures as possible connected to the manner in which certain objectives may be reached and, on the other hand, to avoid becoming very prescriptive. The existing methodologies at the moment are either very detailed when it comes to the processes they involve (see for instance the IBM RUP case), or describe in a sketchy manner important activities from the project's lifecycle (see Scrum for the part of identifying the initial requirements or the installation of the solution into production).

Increasing client satisfaction

By applying these principles in all the projects we are developing at Yonder, we have managed to increase our clients' satisfaction level, the increased productivity being one of the aspects that made this possible. This objective-oriented policy increases by a great margin the degree of transparency in our relationship with the clients, expectations being set and monitored accordingly throughout the duration of the project. Through the ongoing validation of the "business case" and of the initial conditions that generated the project – multiple milestones in the Construction phase – we make sure of its viability and we make available to the decision makers all the information required, so that the decisions being made are based first and foremost on real numbers and facts and less on assumptions. Another important aspect is connected to quality increase (which at first sight seems to be a rather subjective component), achieved by adapting the best practices from the agile methodologies (XP, TDD) in order to meet certain objectives and to migrate them towards a measurable area. Given the current trends of working with large, geographically distributed teams, active in complex fields on shared technical solutions, another important aspect is scalability. Using this objectives-based approach, within which team independence is encouraged, we maintain an optimum level of governance. In this case, the aim – beyond the deliverables connected to the software development process – is offering a complete solution in which the needs of the client are addressed according to their specific situation. Thus, we can lay the basis of a model that can scale easily by using the adequate tools, the success methods and

practices proven by other projects.


This disciplined and structured approach to Agile projects that we have adopted at Yonder, especially after obtaining the CMMI level 3 certification, has helped us to increase our client numbers, due to positive results and examples, service professionalization and the increased predictability obtained by applying and continuously improving this model. And, in order to spare those who are interested from going again through a process of re-inventing the wheel and re-discovering all of these practices empirically, the way it happens most of the time, a very good starting point for the journey towards the implementation of an Agile framework is Scott Ambler and Mark Lines's book "Disciplined Agile Delivery".

Mihai Buhai Delivery Manager @ Yonder | no. 21/March, 2014



The Developers of Mobile Applications and the Personal Data. Any Connection?


Do we have to deal with personal data in our activity? What does this involve? These are legitimate questions for a mobile applications developer, since collecting such data has become an inherent phenomenon of the digital world and a more and more controversial topic along with the evolution of mobile applications, due to the various situations that can arise. Moreover, the subject is of even greater interest since, in the future, a tightening of the sanctions for non-compliance with the personal data legislation is foreshadowed. Therefore, no matter whether he is interested in a good reputation in front of the user, who is more and more frightened by the perspective of having his personal life invaded, or whether he wants to protect himself against potential legal sanctions, the developer has to observe the laws in force. They are quite numerous and dense, and this article does not aim to present them all in detail. Instead, starting from the hypothesis below, we wish to present a few ground rules that are easy to keep in mind, which you can take into consideration in order to minimize the possible risks.


A. is a company from Romania and it has just finished the development process of a mobile application. It is an application created according to the internal specifications of A., with the intention of being commercially exploited under the company's own brand, not an application ordered by an external client. Before uploading the application on the relevant online platforms (Google Play, App Store, etc.) so as to make it available to the users, A. finds out that it should take one more aspect into consideration: through the application, certain data regarding the users will be collected on its server and, sometimes, transferred to partners abroad. But the company knows neither whether these represent personal data, nor whether they imply compliance with certain legal obligations.


What personal data can be collected by the mobile applications?

According to the European ePrivacy Directive1 (a directive also transposed into the Romanian legislation), any electronic terminal equipment (phones, tablets, laptops, etc.) and any information stored on it are part of the private area of the user and are protected according to the European Convention for the Protection of Human Rights and Fundamental Freedoms. This information can be considered private no matter whether it regards a natural person that is identified (for instance, by name) or identifiable (one that can be identified directly or indirectly). It may be connected to the owner of the electronic device or to any other natural person (for instance, the contact data of one's friends, from the phone contact list).

Here are a few examples: location data, geolocation, name of the user, contacts from the phone book, e-mail, pictures and videos, date of birth, identifiers such as the Unique Device Identifier (IMEI number, etc.), phone number, the registry of calls, messages or searches on the Internet, information regarding payments made online, biometrical data such as facial recognition, etc.

Sometimes it is possible that among the collected data there is some of a special kind – the sensitive personal data, such as: the sexual preferences of the users, their racial/ethnical origin or political affiliation, etc. These require special carefulness (especially if they are collected in order to be used in behavioral targeted advertising, analytics, etc.).

Some useful pieces of advice
• If the developer is the one in charge of the data collected through the application, then he can be considered a personal data operator – meaning the person who establishes the purpose and means of processing the data – and he will have to comply with the specific legal obligations, including the registration at the competent authority.
• Establish a clear internal mechanism regarding the processing of personal data, before beginning to work on the development of the application and on writing code lines. This procedure is called Privacy by Design2 (PBD) and it is a concept which facilitates a result of a higher quality. As an example, you can find here and here a guide book drawn up by the authorities in Great Britain3 and Australia4 in order to meet the mobile application developers half way and to promote Privacy by Design. Most of these principles can also be applied by the Romanian developers.
• You can carry out an internal impact research in which you can tackle issues such as: (i) what personal data you need and why, (ii) how you collect, use, store and transfer them, (iii) how you obtain the user's agreement for you to collect his data (including for the case in which you alter the purpose you use them for), (iv) if you reveal them to third parties, (v) possible risks and ways to avoid/reduce them, etc.
• Try to keep the data collection to a minimum level, only for the established and legitimate purposes (for example, collect only the data necessary for the application to run). Studies5 reveal that users tend to prefer and remain loyal to mobile applications having a transparent and minimal policy regarding the volume of collected data.
• At the moment of installing the application, you will have to obtain the agreement of the user not only to download the application on their phone or tablet, but also to process the data collected by the application.
• Set a policy regarding the processing of personal data (Privacy Policy) and make it available to the users (for example, through a link) before they download and install the application and before you collect their data. You can use graphics and color in order to make the information more accessible.
• The Privacy Policy should indicate, among others, the types of collected data, the purpose they will be used for (and by whom), the users' rights, and the contact of the application developer. These essential conditions must be met so that we can state that you have offered the necessary information to the users and that they have knowingly and freely agreed to it. The free agreement means that you give the users the possibility to accept or deny the processing of their data. Therefore, in order to complete the installation of the application, you should also make available a "Deny/Cancel" (data processing) button, not only the "Yes, I agree" button.
• Maintain the security of the collected data – make all the necessary efforts (including technical ones) in order to make sure the database is not in danger of being hacked and illicitly copied. In case of illicit usage, you can be brought to trial by the users, who can claim damages.
• In some situations and depending on the type of application, it is not only the developer who processes these data, but also the distributors of the application, the advertising and analytics providers, the third party libraries, etc. It is helpful to explain to the users the manner in which their data will be used by them and why, etc.

Conclusion
As developers, it is to your advantage to implement proper privacy policies for the mobile applications you create and release on the market. Privacy by Design is a more and more popular concept and it can offer a technical solution to a legal problem. More and more, the applications which take personal data protection seriously gain the trust of their users, succeeding in making a difference through transparency.

1 Directive 2002/58/CE
3 documents/library/Data_Protection/Detailed_specialist_guides/privacy-in-mobile-apps-dp-guidance.pdf
4 privacy-guides/guide-for-mobile-app-developers/privacy-by-design
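Two of the recommendations above – collect only the data the application actually needs, and treat "Deny/Cancel" as a real option – can be sketched in a few lines of code. This is only an illustrative sketch with made-up field and function names, not the API of any real mobile platform:

```python
# Illustrative Privacy-by-Design sketch (hypothetical names, no real SDK):
# collect only the declared-necessary fields, and process nothing without
# the user's explicit consent.

REQUIRED_FIELDS = {"email"}  # the only data this example app needs to run


def minimize(profile):
    """Data minimization: drop every field not declared as necessary."""
    return {key: value for key, value in profile.items() if key in REQUIRED_FIELDS}


def start_processing(profile, user_consented):
    """Return the data to process, or None if the user pressed Deny/Cancel."""
    if not user_consented:  # "Deny" must be as easy to choose as "Yes, I agree"
        return None
    return minimize(profile)


raw = {"email": "ana@example.com", "imei": "356938035643809"}
print(start_processing(raw, user_consented=True))   # {'email': 'ana@example.com'}
print(start_processing(raw, user_consented=False))  # None
```

A real application would of course surface this choice in the platform's own consent UI; the point here is only the shape of the logic.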

About the authors: Claudia Jelea is a lawyer specialized in issues involving the online environment, electronic trade and IT&C, brands, copyright and personal data privacy. She is a member of the Bucharest Bar and of the Patent Chamber (brands). LinkedIn & Twitter: claudiajelea. Catalin Constantinescu is a fourth-year student at the Faculty of Law, Bucharest University, interested in the interference between law and IT.

Claudia Jelea, Attorney @ IP Boutique



The root of all evil in software development


Supposedly the answer is bad requirements. It is risky to take a radical stand and to isolate the determining cause to a single factor, but poorly conducted elicitation is surely one of them. From my experience, during elicitation, even with the best interest at heart, there is neither discrimination nor consideration for the terms requirements / solution design. This results in a cumbersome bundle of mixed information that doesn't add up into a cohesive, unitary whole. There's incontestable value in and need for design, but in the early rounds of elicitation, focus ought to be placed on the underlying cause of the requirement, to help us generate the suitable options for our stakeholders. The risks of not doing this segregation between the two terms are multiple, ranging from:
• implementing a specific solution for a general problem that we were unable to express,
• loss of stakeholder interest, involvement and trust,
• 'false positive' endorsements, or worse,
• further re-work and costs and, eventually, project failure.

Below are arguments from a business analyst's perspective to support this viewpoint. We'll start with the magical definition of elicitation, then we'll dive into its purpose and object, explore how it relates to interpersonal relationships, and conclude with a heavenly match between interview and interface analysis.

A magical definition

BABOK (Business Analysis Body of Knowledge) defines eliciting as: to draw forth or bring out (something latent or potential); to call forth or draw out (as information or a response). According to IREB (International Requirements Engineering Board), elicitation techniques fulfill the purpose of finding out the conscious, unconscious and subconscious requirements of stakeholders. Words like 'latent', 'potential', 'unconscious', 'subconscious' link back to the Latin definition of the word, which is to "draw out by trickery and magic". There's something to the magical character of elicitation. In my opinion, it ties back to the fact that clients don't have a rigorous or formal view of the domain. Hence, it cannot be expected of them to be completely aware of the domain–problem relationship. We, as business analysts, model the business, that is, we create an abstract and simplified view of the world, with all the advantages and perils that come along with it. It is nonetheless true and valuable that modelling guides elicitation: it helps formulate questions, helps uncover problems, and highlights inconsistencies, conflicting requirements and disagreement between stakeholders. If you consider a poorly expressed problem, stripped of details, pushed forward into solution analysis – you may have found some justification for the difficulties caused in projects by poor requirements.

The object of elicitation

It is both the stakeholder requirements and the solution requirements. The definition of a requirement according to BABOK is: "(1) A condition or capability needed by a stakeholder to solve a problem or achieve an objective; (…)". There are many more definitions, but words like 'problem', 'opportunity' or 'constraint' appear recurrently. Design, on the other hand, is a collection of decisions about how to implement a set of requirements. Surfacing what is of value to us is easier



said than done, because, as it turns out, we are terrible at estimating what is of value to us, and to what extent. Moreover, we are wired to think in terms of solutions rather than problems, simply because of the ease of visualization. This is why there are difficulties in finding the fine boundary between problem statements and problem definition on the one hand, and solution definition on the other hand. Building upon the previous argument, requirements are statements which translate or express a need and its associated constraints and conditions, such that we can now differentiate between stakeholder requirements and solution requirements. The Business Analysis Core Concept Model gives a clear definition of change, need, solution, value, stakeholder and context, and maps these concepts as follows:
• Requirements: Need, Value, Stakeholder;
• Design: Solution, Need, Context.
There's no fine-grained line between the two, as they share the need. What is certain, though, is that when we set out for elicitation, it should be clear to us what the object of the elicitation is. It is in the engagement phase of a project that one should lay down the rules regarding the granularity that constitutes design details, upon which the stakeholder is happy to have no steering or decision authority.

Empty window frame

A trivial example – whereby I choose to set aside the triviality, and the way this situation is handled in almost every case – an example I am going to use as a

hyperbole to make a point. Consider an empty window frame. You'd be tempted to say: we'll buy an insulating glass window and the issue is solved. Yet your problems could be:
• esthetics, comfort or health related – wind and rain get into your home, ruining furniture and your floors, or
• safety related – you are exposed to burglars.

Let's think about the context. What if your broken glass pertains to a mental health facility, or to a countryside field? Is the solution we inferred still suitable? In the first alternative, you would certainly be interested in the safety aspect and also in the privacy aspect. So you may need reinforced, insulated, mirrored glass. For the broken window in the field, you may only be interested in having a cover against rain – in which case a thick foil of plastic could do the job just fine.

Human Characteristics in Elicitation

You can leverage many human tendencies to your gain during elicitation, but not if you have impaired expectations, desire for self-expression, or exploit the interviewee's performance anxiety. Feelings like the desire to be recognized and appreciated, the predisposition to curiosity and gossip, the instinct to complain, and, last but not least, occupational hazards – advising, teaching, correcting, challenging – are all very important levers to use during elicitation. You can try one of the obvious conversation starters: expression of mutual interest, simple flattery, a quid pro quo exploit, or, with the necessary caveat, for the experienced or daring: oblique references (comments made indirectly, in either a positive or negative light, which generate either defense or criticism; useful to cross-check understanding). Also, without the partner noticing, the business analyst can insinuate during his speech a provocative statement or take an opposing stand, which will result in knowledgeable people wanting to instruct and correct. Observe the target's visual response and, first and foremost, minimize the ego threat. Nobody likes to be pointed at under the spotlight, so for a higher return in the long run, act more like an 'accomplice' who understands the challenges of having to know the right answers and avoid the 'go fetch' attitude.

A match made in heaven: interview – interface analysis

One of the techniques made available by BABOK is the interview, which is defined as a conversation that compels people to voluntarily tell you things without you asking. Even though it is a planned conversation, and it sounds straightforward, the fact that you need to build on what you learn during the interview, plus human tendencies, makes it harder than it meets the eye. People say 'fail to plan = plan to fail', and this is why it is recommended to start by formulating the relevant questions (open, hypothetical) and to consider your relationship with the target, namely the attitude towards info sharing. Having done this, during the interview, you can and should re-word questions to motivate sharing and validate your understanding. There are key words that can be used to keep our eye on the ball and prevent important knowledge from slipping through your fingers: WHY – leads to deeper motivations; WHAT – leads to facts; HOW – leads to a discussion about process, not structure; COULD – encourages maximum flexibility. Other helpful questions are: What, precisely, is the problem to be solved? When does the problem occur? What generates the problem? What situations – are they new or old? How is the problem handled now? Why does the problem exist? The so-called 'putative' questions go one step further by asking about a situation in a way that tests your model view of the domain. In my opinion, this would equate to replacing the variables in a formula with numbers. The most important activity you do in elicitation is not to talk, but to listen, or else you run the risk of unintentionally leading the interviewee. When asking questions, first

and foremost, we need to remember the opportunities to be addressed, the problems to be solved and the constraints that we need to comply with. Like most people, you may be under the impression that this is more or less just a smoke screen and that it doesn't help write code, so consider you have successfully put into words the problem to be solved, the opportunity to be addressed, the constraint to comply with. You have also drawn a context diagram (and you now know whether you're filling in a window frame on the countryside field or in a mental health institution). Next, in terms of steps, comes something that fleshes out the details required for development: interfaces. Interface analysis serves the purpose of identifying connections between solutions and/or solution components and defines how they will interact: interfaces to external applications; interfaces to external hardware devices. For defining interfaces it is important to specify the type of interface and its purpose. Then we state the inputs and outputs; define the validation rules that govern inputs and outputs; identify the events that trigger interactions. There is a heated debate on the definition of requirements that helps get some closure on this topic. This debate goes round in circles only to come to the conclusion that the WHAT for one group can be the HOW for the next group, as we move from business executives to executants. There's no single recipe for preventing chaos regarding requirements, but some helpful ideas are:
• placing due importance on the problem statement,
• envisioning and declaring the objective and object of your elicitation activity,
• giving a couple of rounds of gathering the requirements,
• gradually diving deeper and deeper into solution design.
All this, once we've understood the problem/opportunity you want to


solve, your options to address it... and the 'business layer' your work is aimed at.

Requirements Elicitation – Best practices

Dear colleagues, as you have already found out, the process of requirements elicitation is concerned with collecting information and specifications from different stakeholders. Before the requirements are formally entered into a requirements list, the analyst or requirements engineer needs to subject them to careful inquiry in order to ensure that they are well formed, this being the basis for any further elicitation action. Secondly, it is of extreme significance that you, as business analysts, possess a toolkit of techniques, with the purpose of fitting your method when gathering requirements. Moreover, we, as business analysts, want to deliver in the end a business analysis approach of valuable quality. This means better designed processes, which will result in a better customer experience; if they do not, the question may be asked: why use them? The possible upgraded customer satisfaction will be difficult to assess, however, unless there is evidence which suggests specific customer dissatisfaction with things as they are now. The question that should be emphasized: how can we obtain a satisfied customer in the end? As most of you can remark, a possible and feasible answer is quite simple: by applying a wide variety of best practices known in the area of business analysis.

The success or failure of a system development effort depends on the quality of the requirements. The quality of the requirements is greatly influenced by the techniques employed during requirements elicitation, because elicitation is all about learning the needs of users and communicating those needs to system builders.

Collaborative sessions – the standard or default approach to eliciting requirements. Best practice:
• Essential when a large, diverse, and autonomous set of stakeholders exists.
• Emphasizes creativity to aid envisioning innovative systems.
• The use of technology facilitates conducting the group session with stakeholders distributed over a wide geographical area.

Models (data flow diagrams, state charts, UML) – historically used as the elicitation technique, currently a means to:
• Facilitate communication
• Uncover missing information
• Organize information gathered from other elicitation techniques
• Uncover inconsistencies.
Best practice:
• Time-ordered sequences of events to capture a typical interaction between the system and the systems or people it interfaces with.
• Create the DFD on a whiteboard as the result of collaboration, because the purpose of any model is to help the thought process, not serve as final customer documentation.
• Build DFDs bottom up, based on events as defined by essential systems analysis.
• Use data dictionaries whenever there are multiple, diverse, and autonomous stakeholders.

Dear colleagues, software projects actually require pragmatic approaches for applying the corresponding technique or method, approaches that are doable in real-life situations and constraints. We, as analysts, need practical guidance in applying best practices in the requirements elicitation phase.

Requirements Elicitation – Challenges and Future Steps

Requirements elicitation is a complex process involving many activities, with a variety of available techniques, approaches and tools for performing them. The relative strengths and weaknesses of these determine when each is appropriate, depending on the context and situation. Which techniques and approaches should be used for a given requirements elicitation activity? As a relevant example, the use of several requirements elicitation techniques in projects with high specification volatility is possible and applicable. Interviews, prototyping techniques or workshops are combined in order to collect the wishes from those projects, as they are extremely complicated and involve randomly varying requirements. If we deal with a geographically distributed software development process, groupware techniques (video conferences, use cases, brainstorming) besides



efficient requirements management are recommended. We should not forget that elicitation techniques have their strengths and weaknesses. What's more, to account for the differences, a strong requirements elicitation process should use more than one technique and method. The beauty lies in choosing the right approach, and this step requires an understanding of the audience and of the pros and cons of each possibility that we have at our disposal. Another important argument: the choice of techniques to be employed depends on the specific context of the project and it is often a critical success factor of the elicitation process. Strong arguments:
• The selected technique is recommended by the approached methodology.
• The selected technique is the analyst's favorite.
• The choice of the technique is governed by the analyst's intuition that it is the best approach in a given situation.
Clearly, requirements elicitation is best performed using a variety of techniques. In most projects several methods are employed at different stages of the software development life-cycle, often in combination when complementary.
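The interface analysis described earlier – state the type and purpose of each interface, list its inputs and outputs, define the validation rules that govern them, and identify the events that trigger interactions – can be captured in a small, hypothetical record. The field names below are illustrative, not part of BABOK or of any particular tool:

```python
# Hypothetical record for one external interface of a solution, mirroring
# the interface-analysis steps from the text (type, purpose, inputs,
# outputs, validation rules, triggering events).
from dataclasses import dataclass, field
from typing import List


@dataclass
class InterfaceSpec:
    name: str
    kind: str        # e.g. "external application" or "external hardware device"
    purpose: str
    inputs: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)
    validation_rules: List[str] = field(default_factory=list)
    trigger_events: List[str] = field(default_factory=list)


payment_gateway = InterfaceSpec(
    name="PaymentGateway",
    kind="external application",
    purpose="authorize card payments on behalf of the solution",
    inputs=["card_token", "amount", "currency"],
    outputs=["authorization_code", "decline_reason"],
    validation_rules=["amount > 0", "currency is a known ISO 4217 code"],
    trigger_events=["user confirms the order"],
)
```

Writing interfaces down in such a structured form makes gaps – a missing output, an input with no validation rule – easy to spot before development starts.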


Over time, a number of important challenges and trends have occurred within the field of the requirements elicitation process. One of the most important challenges is the development of ways to reduce the gap between research and practice in terms of awareness, acceptance and adaptation. This can only be achieved by putting the results into practice and making the approach more attractive, thereby providing the relevant proof and motivation for business analysts to use them. Other challenges that the business analyst may encounter during the elicitation process include:
• Conflicting requirements from different stakeholders;
• Unspoken or assumed requirements;
• The stakeholders' unwillingness to change or help design a new product;
• Not enough time allocated to meet with all the important stakeholders;
• Knowing when and which techniques, approaches and tools to use, combined with the knowledge of how they improve the chances of customer satisfaction and project success;
• Limited access to project stakeholders;
• Geographically dispersed project stakeholders;
• Stakeholders change their minds;
• Conflicting priorities;
• Too many stakeholders are willing to participate;
• Stakeholders overly focused on one type of requirement;
• Stakeholders afraid to be pinned down;
• Developers do not understand the requirement;
• Developers do not understand the domain knowledge.

The key to overcoming these challenges as you elicit requirements is to continually motivate your users, developers and customers to communicate and cooperate with each other. You can do this by selecting the right elicitation techniques.

Future of elicitation techniques

Despite a series of successes and advances in the area of requirements elicitation, there are still some issues which are waiting to be taken appropriately into consideration. Below are listed some of the potential requirements elicitation research areas recommended to be taken into consideration:
• Reducing the gap between theory and practice, and between experts and novices;
• Increasing the awareness and education of analysts and stakeholders in industry;
• Developing guidelines for technique selection and managing the impact of factors on the process;
• Investigating ways of collecting and reusing knowledge about requirements elicitation;
• Integration and use of new technologies, including web and agent based architectures, into the support tools;
• Exploring how requirements elicitation activities relate to new and developing fields of software engineering (agile development methodologies, web systems).


The process of requirements elicitation, including the selection of which techniques, approach, or tool to use when eliciting requirements, is dependent on a

large number of factors, including the type of software project being developed, the stage of the project, and the application domain, to name only a few. Most of the approaches require a significant level of skill and expertise from the business analyst to be used effectively. However, from the range of existing techniques, variations of interviews, group workshops, observation, goals, and scenarios are still the most widely used and successful in practice. In the end, the choice of the corresponding elicitation techniques for the respective project is made by the requirements engineer based on the given cultural, organizational, and domain specific constraints.

Bibliography
1. A Guide to the Business Analysis Body of Knowledge (BABOK Guide), IIBA, Kevin Brennan (Editor)
2. Requirements Engineering, Elizabeth Hull, Ken Jackson, Jeremy Dick
3. Mastering the Requirements Process: Getting Requirements Right (3rd Edition), Suzanne Robertson, James Robertson
4. Requirements Engineering Fundamentals: A Study Guide for the Certified Professional for Requirements Engineering Exam, Foundation Level – IREB compliant (Rocky Nook Computing), Klaus Pohl, Chris Rupp
5. Agile Software Requirements: Lean Requirements Practices for Teams, Programs, and the Enterprise (Agile Software Development Series), Dean Leffingwell
6. The Business Analyst's Handbook, Howard Podeswa
7. Discovering Requirements: How to Specify Products and Services, Ian Alexander, Ljerka Beus-Dukic
8. OCEB Certification Guide: Business Process Management – Fundamental Level, Tim Weilkiens, Christian Weiss, Andrea Grass
9. Requirements by Collaboration: Workshops for Defining Needs, Ellen Gottesdiener

Cluj Business Analysts – Business-Analysts-Cluj

Mădălina Crișan, CSPO, Business Analyst
Monica Petraru, Product Manager
Cătălin Anghel, Business Analyst



Impact Training


In large companies, most of the employees go through training when they join the company, and trainings are then repeated according to the training needs of the company. Under these circumstances, we might think that the employees no longer see the trainings as something special, but we would be wrong. A study carried out by researchers of the Madeira Spanish University shows that employees enjoy training almost as much as a raise. The study, conducted on 5000 employees from Spain, reveals that satisfaction regarding the employer, and implicitly the workplace, increases by 18% after the employee has participated in a training provided by the company. The same study reveals that trainings make 66% of these employees feel better within the company, and 60% are less inclined to leave a company which offers trainings. Nowadays, when offering a raise is unlikely for many businesses, we must think our steps in advance and develop a strategy by which to maintain the employees' satisfaction. The companies which constantly invest in training usually have a lower turnover, which, associated with a high level of client satisfaction, leads to profitability. On the other hand, the companies which are not willing to invest in their employees jeopardize their own success and their very survival. Once the need for training is established, there is the question whether to externalize the training service or to sustain it with one's internal resources. There are many companies which have internal learning and development departments, which manage the training and specialization of the employees according to the level they have reached or to the trends dictated by the market. However, most of the times, companies cannot afford to sustain such a department.


The main idea is to satisfy the training needs of the company, so as to be able to survive in the competitive environment. When we have to decide between externalizing the training and using our own resources, we should take into consideration the fact that the employees hold an outside trainer in higher esteem. For a recently employed person, the fact that the trainer is from within the company makes no difference. Things change when a long-time employee needs some specialization and is offered a training led by one of his colleagues, who is probably in a higher position or who took part in that training course in the past, with the company paying for it. It is obvious that such a training will not bring the above mentioned satisfaction. Some companies choose to externalize the training services and most of the training needs they have. Other companies choose to keep trainings in-house and to create an original training curriculum. However, most of the companies are somewhere in between: when they can manage, they keep them in-house; when the situation is beyond them, they bring in an outside trainer. A company specialized in consultancy


on employees' development processes offers specialization certificates and brings cutting-edge learning and development techniques. Consequently, there are numerous reasons for which the management of a company should bring in an outside trainer:
1. The staff is limited in number and skills.
2. There is a great number of employees who need specialization training.
3. The managers wish to keep their employees up-to-date on what is going on in the industry.
The typical corporate investment in training and development increases each year. This is due to the fact that more and more companies consider training the greatest investment in their employees. Tim Grant, HR Manager of the TAP Pharmaceuticals company, states that in a continuously developing and changing industry, the employees need trainings dictated by the dynamics of the market. He also believes that, since their employees need varied skills and expertise, the external training companies help them to achieve the desired performance at the

TODAY SOFTWARE MAGAZINE right moment. Another company which considers the outer trainer as a more efficient variant is Motorola Corporation. Fred Hamburg, Chief Learning Office of Motorola, very often uses technically educated trainers. He insists that the training of the employees is time consuming and that it is very difficult to take a man out of production in order to have him perform trainings. Hamburg is interested in saving money, but at the same time he wants to benefit from the best trainers in the domain. The advantages of collaborating with an outer trainer are: • Access to additional resources and high expertise, besides the fact that the trainers manage to adapt the training according to the client’s needs. • Cutting-edge training tools, applicable on the type of business. • Reducing the risks regarding the security of data and information, data management, a lower turnover of the experimented trainers. • Lower costs – many companies believe trainings are costly because of the amount they have to pay at a given time, but on a closer look, the cost explains the expertise and time of the trainer, the tools and the resources used. Also, in time, the cost of hiring a full-time trainer proves to be higher.

If the decision to outsource training was made, in order to choose the best trainer, you first have to evaluate several candidates, because in the past years many trainers have appeared on the market. In their evaluation, we have to take into account their experience in the field, since many trainers have had no real contact with what the client requires, only a series of courses and a trainer certification. You can also analyze a trainer's portfolio of achievements and the success registered over his career. The values he promotes are also important, as well as whether he has written books or articles on the subject of interest, whether he is a member of professional organizations, and the testimonials of his former clients. If the company cannot afford a trainer, there is also the alternative of sending one's employees to open conferences and trainings. This is an excellent way to assimilate information and to connect with people in positions similar to theirs. The disadvantages are the time spent outside the office and, in certain cases, the costs. Very often, especially when only a few employees are involved, the benefits of open conferences and trainings are worth it. When we choose which open trainings to take part in, we have to pay attention again to the trainer's expertise, the agenda of the training, the cost and the number of participants, since a large number of participants may be an obstacle in the way of carrying out the practical exercises. In any situation, there are arguments for and against outsourcing the training

services. What remains certain and constant is the fact that organizations must pay attention to their specific training needs so as to maintain the performance on which the survival of the brand depends. And, since happy employees are the best ambassadors of the brand, whenever we encounter problems in holding on to our employees, we can count on the fact that a training will make them as happy as a raise.

Monica Soare, Manager @ Artwin



MWC 2014: Smartphones, Tablets, Phablets and Smartwatches


The Codespring team has just returned from the Mobile World Congress 2014, the place where all the mobile magic happens! Samsung, Sony, HTC, LG, Nokia and Huawei revealed their latest devices and innovations. A distinct and exotic appearance was the Yota team from Russia. In the mobile landscape we also noticed the Romanian team from Allview sliding in with its latest devices. Find out more in the following lines. First, we must mention the much expected keynote of Mark Zuckerberg, Founder and CEO of Facebook. In his typical casual outfit, Mark answered questions about the recent acquisition of WhatsApp and shared his vision about the future of the project: “A lot of the goal we have with internet.org is to create an on-ramp to the Internet. […] Someday someone should try and help connect everyone in the world…” As he explains, the plans for the next year are to build partnerships for this endeavor and to test the working model. He also pointed out that a necessary objective is to make developers feel empathy for the high-data-consumption services they are developing. Considering the prospect of a hyper-connected world and a transformation of our lifestyles, the top mobile technology developers and innovators enchanted the audience.

Samsung at MWC 2014

Samsung's event "Unpacked 5", held on the 24th of February 2014, revealed the much expected Samsung Galaxy S5. With a 5.1-inch Full HD Super AMOLED screen, the Samsung Galaxy S5 is water- and dust-resistant. It is equipped with a quad-core 2.5GHz Krait chipset, 2GB of RAM and a 2800mAh battery. Samsung considers the Galaxy S5 camera "the strongest on the market": a 16MP sensor capable of shooting 4K video, with background refocus, real-time HDR and a 0.3s capture rate. There is also a 2.1MP snapper on the front of the handset. The esthetics are appealing: colors like "Charcoal Black", "Shimmery White", "Electric Blue" and "Copper Gold" will be available. The market release is set for April 2014.
The Samsung smart watch series steps further with Samsung Gear 2 and Samsung Gear 2 Neo. Wrist-based apps are trendy and imaginative. According to Samsung officials, Android will be replaced with a new OS based on the Tizen platform.

Sony at MWC 2014

Sony Xperia Z2 smartphone
Sony did shake the Mobile World Congress 2014 with its 5.2-inch flagship smartphone: the Sony Xperia Z2. Like its predecessor, the Xperia Z2 is waterproof, and we love that! The headline feature is Sony's 20.7MP camera, an Exmor RS for mobile image sensor incorporating the award-winning G Lens. Videos shot with the Xperia Z2 will delight users, as the handset can capture in 4K resolution. All this is supported by Android KitKat, a Qualcomm Snapdragon 801 processor with a 2.3GHz quad-core Krait CPU, 4G LTE connectivity, NFC, 3GB of RAM and a 3200mAh battery. The screen is simply amazing: a 5.2-inch full HD display with Sony's Bravia-inspired Triluminos know-how, assisted by the Adreno 330 GPU. The market release is set for March 2014.

Sony Xperia Z2 Tablet
The world's lightest and slimmest tablet, the Sony Xperia Z2 Tablet, is a fierce little beast of technology: it measures 6.4mm and weighs 426g! It is water resistant and carries a lot of power: a Snapdragon 801 processor with a 2.3GHz quad-core Krait CPU and, to round it off, an Adreno 330 GPU. The screen is a 10.1-inch Full HD Triluminos display with X-Reality technology. The battery charging speed is boosted by 75 per cent. Built on Android 4.4 KitKat, it will be available in black and white. The market release is set for March 2014.

Sony Smartwatches
Along with the smartphones, Sony has developed its smart watch series: SW2 and Wrist Strap SE20. SmartWatch 2 expands the Android experience and introduces new and exciting ways to live and communicate. It interacts with your smartphone over Bluetooth®, and what's happening in your life is mirrored on your watch.

HTC at MWC 2014

Saving the grand release of the HTC One 2 for the 25th of March 2014, HTC brought into the spotlight at MWC 2014 its "flagship mid-range" device, the HTC Desire 816. Powered by a quad-core 1.6GHz Snapdragon 400 chip, it has 8GB of storage and is equipped with 1.5GB of RAM. One can increase the storage capacity up to 64GB via the microSD card slot. NFC, DLNA connectivity, Bluetooth 4.0, Wi-Fi and 4G/LTE functionality are available. The release date is set for April 2014. HTC smartwatch prototypes are also present at MWC 2014, behind closed doors.

LG at MWC 2014

Supersize has its own beauty! At least that is how LG proves it at the MWC 2014 mobile show with its new phablet, or supersized smartphone: the LG G PRO 2. With its 5.9-inch full HD display, the LG G PRO 2 is bigger than the phablet of choice. The specifications are decent: a 2.26GHz quad-core processor, 3GB of RAM, 16/32GB of internal storage and Android 4.4 KitKat.

Huawei at MWC 2014

With a nice premium look, Huawei's MediaPad X1 shines among the company's products launched at MWC 2014. One of the smallest 7-inch tablets, the MediaPad X1 is easily portable and comfortable. The specs are impressive: a 1.6GHz quad-core processor, 2GB of RAM, 16GB of internal storage, a microSD slot, 4G connectivity and a 5000mAh battery. The market release date is set for the middle of 2014.

YOTA at MWC 2014

Yota Devices, the company that created the world's first dual-screen, always-on smartphone, unveiled the next generation YotaPhone at Mobile World Congress 2014. The next generation YotaPhone has full-touch control on its always-on electronic paper display (EPD). The YotaPhone headline is: 1-Look, 1-Touch Always-On Display. Running on a Qualcomm quad-core 800 series processor, it has Smart Power Mode, wireless charging, NFC, advanced anti-theft protection and high-performance IHF, and Yota has just opened the YotaPhone SDK to third-party developers.

NOKIA at MWC 2014

Astonishingly, Nokia goes Android with the Nokia X and Nokia X+ at Mobile World Congress 2014. The Windows Phone exclusive era is over. Looking a lot like the Lumia, these two new devices are designed for the low-cost segment and bring a colorful perspective: bright green, bright red, cyan, yellow, black and white. The Nokia X has a 4-inch IPS capacitive display, a 3MP camera, dual SIM card slots and expandable storage via a microSD card slot. The devices run on a 1GHz Qualcomm Snapdragon dual-core processor and have 512MB of RAM, 4GB of eMMC storage and an impressive 1500mAh battery.

Allview at MWC 2014

The Romanian team from Brasov, Allview, has launched its dual-SIM flagship device at the MWC 2014 show: the Viper S. Equipped with a 1.4GHz Qualcomm Snapdragon quad-core processor, a 5-inch IPS HD OGS display, 2GB of RAM and 16GB of storage expandable to 48GB, the Viper S also has 13MP and 5MP Omnivision cameras, NFC and a 2500mAh battery, and it is packing Android 4.3 Jelly Bean.
Codespring, as a software development and outsourcing team, is monitoring the innovations and next-generation mobile technology developments. Software engineers at Codespring are involved in mobile development for industrial and enterprise mobile applications. Based on our expertise, we understand that mobility is a driving force of the current ICT world. We are ready to discuss projects developed on the following architectures: native, HTML5 and cross-platform.

Diana Ciorba, Marketing Manager @ Codespring



Gogu and the justification of action

“Any problems, Gogu?” Chief stood still in front of Gogu’s desk, trying to read the face behind the display. All eyes were focused on them: for about 10 minutes, only mutterings had been heard from the strategic area called “Gogu”, but nobody had had the courage to see what was going on with him, whether someone had made him angry and who. It was risky to place yourself in front of Gogu’s barrage of pungent remarks, and his unintelligible muttering was a clear sign, confirmed in many similar situations, of danger. “Is there a problem, Gogu? Can I help you in any way?” Chief unwarily insisted. “Well, the goddam mammoths, may their glacier melt, I think so, yes,” burst Gogu, and his colleagues settled more comfortably into their chairs, leaning against the backs of their armchairs and stretching their legs; in a word, they all prepared for the show. Chief felt the movement rather than seeing it, and he decided to play along: “Tell me, ’cause I see you rather tense, what’s the matter?” Gogu couldn’t see from behind the display the barely concealed smile of the Chief, and he launched into a tumultuous tirade, to the delight of his colleagues, happy to see the show. “I told them three days ago, three days ago!... And nobody cared to answer. How is this possible?! Can you explain it to me, Chief, how can an entire team ignore such a request?! Something that is, after all, to our own advantage… And you, what are you staring at me for, like at the circus?” he added, noticing the audience hanging on his every word. As the tone of his voice stayed the same and the question simply followed the rhetorical ones before it, many didn’t even realize they had been caught in the act, namely that they had become his target. Misu, however, felt the danger and tried to draw away slowly, letting his chair glide towards his desk.
Since this had been the only movement in the area, it immediately caught Gogu’s eye, and he said reproachfully: “You, too, Brutus?!”, after which he went on the attack: “Oh, yes, Mr. Misu, you didn’t do anything either. You had nothing whatsoever to do with it… Was it that difficult for you, who are as slow as a snail going into reverse, to have a look over that CV of yours, to update a couple of lines there and send it back to me? It’s not like I needed it to give to my mother!” His eyes were fixed on Misu and it was obvious that this time it wasn’t a rhetorical question: he was waiting for an answer. Misu looked to Chief for help, but the latter avoided his eyes. He was thoroughly enjoying the scene and it was clear he had no intention of interrupting it. “Now, don’t pick on me”, drawled Misu, in an attempt to stall, hoping to find an excuse or at least some intelligent reply to Gogu’s accusations. Following the rule that “the best defense is a good offense”, he quickly retorted (despite being well known as a slow Transylvanian): “But you didn’t say what you needed it for!”



Gogu was struck all of a heap: Why do you need to know? he thought, but didn’t voice his reply. It was not an intelligent reply and it wasn’t like him to throw out random words. A farcical idea crept into his brain: was it possible that it would have made a difference? Misu and the others were waiting for the furious attack, which – surprisingly – never came. It was obvious that Gogu was processing the information. Could the justification be so important as to trigger the action? Gogu looked at Chief for help: “Chief, would it have mattered?” According to his well-known habit, Chief leant against a corner of a table, stood on his dignity and said: “Yes.” After that, he left.

***

Later that evening, at home, Gogu was still under the influence of the discussion in the office. After the Chief had gone, Gogu went out for a coffee with Misu and they debated a lot, talking about motivation, about how the way a request is formulated can influence whether it is granted, “deep issues” – as Gogu called them – which he greatly enjoyed. “Dad, I want to watch TV.” Gogu’s son suddenly appeared from his room. “Go mind your lessons and forget the TV. I saw you on your computer the entire evening, now you want to move on to the TV?! No.” “Gogu, dear,” his wife chimed in, “there is a show on the influence of technology on young people’s mental development; it is extremely educational and I think the child could benefit from it. This is what he wanted to watch. In my opinion it would do no harm, but it is, obviously, your choice…” Obviously, obviously – Gogu mocked her. Silently, of course, since he wasn’t crazy enough to do it aloud. On the other hand, the matter being put that way, could he argue with it? Goddam it, look who knows about the power of justification… He grumbled: “Well, it’s good that you two allied against me… go on, turn it on now, what can I say?!” They have defeated me…

Simona Bonghez, Ph.D. Speaker, trainer and consultant in project management, Owner of Colors in Projects


Issue 21 - Today Software Magazine (english)
