
No. 2 / 2012 •




Contents

• Microsoft Kinect from SF to reality (Simplex team)
• Agile Software Development using Scrum (Tavi Bolog)
• Functional programming (Ovidiu Deac)
• Exploratory Testing (Ioana Matros)
• SociaLook (Horaţiu Mocian)
• Interview with Dan Luţaş, portrait of an expert in information security (Marius Mornea)
• How to Prepare for a Job Interview (Andreea Pârvu)
• Meet Gogu! (Simona Bonghez)



Ovidiu Măţan, PMP, coordinates Gemini Solutions' Cluj team. In the past he worked as a Product Manager for Ovi Sync at Nokia. He is the founder of Today Software Magazine.

The first issue of the magazine enjoyed real success: it had over 1373 unique visitors since its launch on February 6th, 2012. The local interest was reflected in over 1420 visits from Cluj. The local community's support was not long in coming; many thanks to ISDC for trusting us and becoming our first sponsor. On March 16th the international version of the magazine was launched on www.todaysoftmag.com, with a new look & feel, which was applied to the local version as well. Diversity is the main feature, as our readers are located in 25 countries. Starting with this issue we initiate the Local Communities section, which will present groups such as the Transylvania Java User Group or the Cluj Semantic Web Meetup and the events they organize.

This second issue of the magazine is quite diverse, and many of the articles open new series. It is worth mentioning the presence of Mrs. Simona Bonghez with an original article about how expensive teaching a lesson to an employee can be. Started in the first issue, the introduction to functional languages continues with their general presentation and operating instructions. Big Data presents the challenges generated by data flows which become bigger and bigger due to social networks, and not only; it promises to be the first in a series of articles describing the challenges we face during real-time data flow analysis and how to apply the technologies available on the market. A new series starts with the article about Kinect, an exotic device which gives us the possibility to develop applications, not only games, with the human body serving as an input device. How do you test an application? This is a question whose answer is not taught during university classes, even though the number of QA engineers is comparable to that of developers; we are going to analyze Exploratory Testing as a testing tool.

Preparing for and presenting yourself during a job interview are often neglected by many developers, so we considered it useful to include a special article on this matter, especially as it can be successfully used during different kinds of meetings, not only interviews. The StartUp and Interview series continues with the presentation of SociaLook, an efficient tool dedicated to businesses based on Twitter, and an interview with Dan Luţaș from BitDefender. Last but not least, we are looking forward to your comments on www.todaysoftmag.com. You are welcome to subscribe to our newsletter to get the latest news and updates.

Thank you,

Ovidiu Măţan founder of Today Software Magazine



Editorial staff: Founder / Editor in chief: Ovidiu Mățan / Editor (startups and interviews): Marius Mornea / Graphic designer: Dan Hădărău / Translator: Nora Mihalca / Marketing collaborator: Ioana Fane / Reviewer: Romulus Pașca / Contact address: str. Plopilor, nr. 75/77, Cluj-Napoca, Cluj, Romania / ISSN 2284-6360



Microsoft Kinect From SF to Reality


Since 2002, with the release of Minority Report (directed by Steven Spielberg), in which the main character John Anderton (Tom Cruise) interacts with a computer using a glove and different hand gestures, the technological world started talking about a new way to use computers, in which a Natural User Interface (NUI) would replace the traditional ways (mouse and keyboard) of human-computer interaction. During the following years, different projects and similar concepts were announced, but only after seven years, at the Electronic Entertainment Expo 2009 (E3), did Microsoft announce Project Natal, built around the idea that "You are the controller".

Simplex team



Project Natal was said to be a hardware peripheral device for XBOX 360 aimed at changing the interaction between users and Microsoft's games console, overall improving the gameplay experience. In the demo of the concept, users could play games using only natural movements of their body, without being constrained to use a controller or a joystick. The device also used a special microphone, allowing the user to start, stop and control the game with simple voice commands. For a while, Microsoft decided not to reveal much about the product and, at E3 2010, Project Natal was renamed Kinect and planned to be officially launched in early November that year.

Therefore, Kinect is a peripheral device developed by Microsoft which detects motion and allows users to interact with the XBOX 360 console, as well as with any Windows PC (with the official Kinect for Windows release), using body gestures and voice commands. Although the initial goal was to enrich the experience of XBOX 360 games, in a short time the use of Kinect was considerably extended into different areas. At the present moment, the device is used in fields such as medicine, advertising or the robotics industry, going far beyond its original function of entertainment.

A few weeks after its release, various projects started to appear in the form of hacks. Controlling the mouse in Windows, playing a digital guitar, generating a real-time 3D projection of a room and robots that could easily avoid obstacles were just some of the numerous projects based on Kinect and built by enthusiasts. Shortly


after, some Open Source SDKs (Software Development Kits) started to appear, allowing anyone to develop projects with this new technology. In response, in June 2011, Microsoft launched the first official Kinect SDK, licensed for non-commercial use only. The final version, with a commercial license, was launched on February 1st, 2012, together with a new edition of Kinect, called Kinect for Windows: a device similar to the XBOX edition, but with some additional features. One of these new features is support for Near Mode, which gives users the possibility to be tracked from a close distance and to have finer movements detected, allowing, for example, finger tracking.


Technical details

Kinect contains an RGB camera with a resolution of 640 x 480, an accurate microphone array consisting of four capsules, a motorized system that allows the tilting of the sensor and, as its main component, the 3D depth sensors. The 3D depth sensors consist of an infrared laser projector combined with a monochrome CMOS sensor that captures 3D information. With the aid of its software, Kinect can recognize gestures based on the 3D depth data received by this system. The device can simultaneously track up to six people, but gestures and motion can be analyzed only for two of them at a time. Twenty joints are placed on the body and used to recognize certain movements made by users. These joints are: head, shoulder center/left/right, elbow left/right, wrist left/right, hand left/right, spine, hip center/left/right, knee left/right, ankle left/right, foot left/right.

The tracking of these points on the body is made possible by the infrared projector, which sends infrared waves and measures their time of return after hitting physical obstacles in the environment. The principle is similar to how sonar works: if we can detect the time required for the waves to return to the infrared sensor, we can find how far away the objects hit by the infrared light are. All these distances can be stored into an image or a map containing the visual field of the camera. Each point on this map shows its depth in space. All these points can be forwarded for further processing and, with the aid of shape recognition algorithms, certain forms or objects, such as the

human body, can be found. In addition to accuracy and speed of reaction, infrared technology brings another advantage: it solves the problem of ambient light. As the sensor was not designed to capture visible light, Kinect can be used in rooms where lighting is very low.

In order to function, Kinect needs more power than USB can provide, so the device needs an additional power cord. The transfer of information and the connection with the XBOX 360 or PC is made via USB, while the power is supplied from the outlet. However, XBOX 360 Slim, launched in 2010, includes a special adapter that eliminates the need for the power outlet. The connection between Kinect and a PC is made via USB and requires the installation of certain drivers that are available online. The official SDK, also available online, enables the development of applications in C++, C# or Visual Basic, using the Microsoft Visual Studio 2010 development environment and requiring a machine running Windows 7.
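The depth-from-return-time principle described above can be illustrated with a small numeric sketch. This is only an illustration of the idea as the article presents it, with made-up round-trip times; the real Kinect depth pipeline is proprietary and considerably more involved.

```python
# Sketch of the depth-from-return-time idea described above
# (hypothetical numbers, not real sensor data).

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_return_time(seconds: float) -> float:
    """Distance to the obstacle: the wave travels there and back,
    so the round-trip time is divided by two."""
    return SPEED_OF_LIGHT * seconds / 2.0

# A depth map stores such a distance for every point in the camera's
# field of view; here, a tiny 2x2 map built from four round-trip times.
return_times = [
    [1.0e-8, 1.2e-8],
    [1.1e-8, 2.0e-8],
]
depth_map = [[distance_from_return_time(t) for t in row] for row in return_times]
print(depth_map[0][0])  # about 1.5 m for a 10 ns round trip
```

Shape recognition algorithms would then run over such a map of per-pixel distances to find objects like the human body.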


Kinect's main competitors are Nintendo Wii and PlayStation Move, each with different attributes that set them apart. We will compare the three devices and try to find which of them is best suited for a Natural User Interface.



Regarding its camera, Microsoft Kinect takes the lead, which is somewhat expected, as the camera is fundamental to its functioning. In addition to the 3D depth sensors, Kinect features an RGB camera used to detect facial expressions and fine details which can't be captured by the infrared sensors. PlayStation Move contains an RGB camera, but as it functions in tandem with a physical controller, Move does not offer a solution based on infrared, and this makes tracking the user's entire body quite difficult. Wii offers no camera, as the device from Nintendo focuses on its accelerometer and gyroscope. Nintendo Wii can detect motion, but not with very high precision.

Audio control is very well integrated into Kinect: the device has a set of directional microphones that allows it to "listen" only to user commands and ignore background noise. Move offers a similar set of microphones, so its audio performance is similar to that of Kinect. Wii does not incorporate any microphone, so voice commands or audio capture are not viable options.

When it comes to attached physical controllers, Kinect has none. This, however, is the most interesting part: according to Microsoft, the controller should be the user him/herself, and any of his/her movements or gestures are automatically captured by the device and further processed, allowing a seamless and unconstrained interaction. As for Nintendo Wii, the physical controller is its main component. It consists of accelerometers and a gyroscope that make it efficient and fun for simple games and applications. Still, although it was among the first in this area, Nintendo Wii has lost its head start to other devices that can offer a more powerful natural user interface. Finally, PlayStation Move has a controller similar to Wii's that measures the acceleration of the user's movements. Together with its RGB camera, it provides a good and accurate system to detect human gestures, but its downside is the use of an external controller that can sometimes make the user feel uncomfortable and unnatural.

Code Sample

The following is a code sample, written in C# and aimed at initializing a Kinect device attached to a PC. The code sets some operating parameters and, as a simple feature, displays a red ellipse that




follows the position of the user's right hand. For now, we will only include the code itself, while planning to comment and explain it in further detail in the next issues of the magazine.

[Figure: infrared image (left) and RGB image (right), captured by the Kinect sensor.]

public partial class KinectSample : Window
{
    public KinectSample()
    {
        InitializeComponent();
    }

    bool closing = false;
    const int skeletonCount = 6;
    Skeleton[] allSkeletons = new Skeleton[skeletonCount];

    private void Window_Loaded(object sender, RoutedEventArgs e)
    {
        kinectSensorChooser.KinectSensorChanged +=
            new DependencyPropertyChangedEventHandler(kinectSensorChooser_KinectSensorChanged);
    }

    void kinectSensorChooser_KinectSensorChanged(object sender,
        DependencyPropertyChangedEventArgs e)
    {
        KinectSensor old = (KinectSensor)e.OldValue;
        StopKinect(old);

        KinectSensor sensor = (KinectSensor)e.NewValue;
        if (sensor == null)
        {
            return;
        }

        // Smoothing parameters for the skeleton stream.
        var parameters = new TransformSmoothParameters
        {
            Smoothing = 0.3f,
            Correction = 0.0f,
            Prediction = 0.0f,
            JitterRadius = 1.0f,
            MaxDeviationRadius = 0.5f
        };
        sensor.SkeletonStream.Enable(parameters);

        sensor.AllFramesReady +=
            new EventHandler<AllFramesReadyEventArgs>(sensor_AllFramesReady);
        sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
        sensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);

        try
        {
            sensor.Start();
        }
        catch (System.IO.IOException)
        {
            // Another application is already using this Kinect.
            kinectSensorChooser.AppConflictOccurred();
        }
    }

    void sensor_AllFramesReady(object sender, AllFramesReadyEventArgs e)
    {
        if (closing)
        {
            return;
        }

        Skeleton first = GetFirstSkeleton(e);
        if (first == null)
        {
            return;
        }

        // Draw a red ellipse at the position of the right hand.
        MainCanvas.Children.Clear();
        Ellipse rightEllipse = new Ellipse();
        rightEllipse.Fill = Brushes.Red;
        rightEllipse.Width = 20;
        rightEllipse.Height = 20;
        MainCanvas.Children.Add(rightEllipse);
        ScalePosition(rightEllipse, first.Joints[JointType.HandRight]);
    }

    Skeleton GetFirstSkeleton(AllFramesReadyEventArgs e)
    {
        using (SkeletonFrame skeletonFrameData = e.OpenSkeletonFrame())
        {
            if (skeletonFrameData == null)
            {
                return null;
            }

            skeletonFrameData.CopySkeletonDataTo(allSkeletons);

            Skeleton first = (from s in allSkeletons
                              where s.TrackingState == SkeletonTrackingState.Tracked
                              select s).FirstOrDefault();
            return first;
        }
    }

    private void StopKinect(KinectSensor sensor)
    {
        if (sensor != null && sensor.IsRunning)
        {
            sensor.Stop();
            if (sensor.AudioSource != null)
            {
                sensor.AudioSource.Stop();
            }
        }
    }

    private void ScalePosition(FrameworkElement element, Joint joint)
    {
        // Map the joint position from skeleton space to the 1280x720 canvas.
        Joint scaledJoint = joint.ScaleTo(1280, 720, .3f, .3f);
        Canvas.SetLeft(element, scaledJoint.Position.X);
        Canvas.SetTop(element, scaledJoint.Position.Y);
    }

    private void Window_Closing(object sender,
        System.ComponentModel.CancelEventArgs e)
    {
        closing = true;
        StopKinect(kinectSensorChooser.Kinect);
    }
}

The code sequence is inspired by educational resources found on Microsoft's Kinect site. Running it requires the installation of the official SDK provided by Microsoft, which can also be found on the same site.
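The ScaleTo(1280, 720, .3f, .3f) call in the sample maps a joint's position from skeleton space to screen pixels. As a rough sketch of that mapping idea (an assumption for illustration only; the exact formula of the toolkit's ScaleTo may differ), a coordinate in a symmetric range around the sensor axis is clamped and stretched onto the canvas:

```python
def scale_to(position: float, screen_size: int, skeleton_max: float) -> float:
    """Map a skeleton-space coordinate, roughly in the range
    [-skeleton_max, skeleton_max], to a screen coordinate in pixels.
    Values outside the range are clamped to the screen edges."""
    clamped = max(-skeleton_max, min(skeleton_max, position))
    # Shift to [0, 2*skeleton_max], then scale to [0, screen_size].
    return (clamped + skeleton_max) * screen_size / (2 * skeleton_max)

print(scale_to(0.0, 1280, 0.3))  # center of a 1280-px canvas: 640.0
print(scale_to(0.3, 1280, 0.3))  # right edge: 1280.0
```

With a small skeleton_max such as 0.3, modest hand movements sweep the whole canvas, which is why the ellipse in the sample follows the hand so responsively.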


Kinect offers the opportunity to develop innovative applications that provide an intuitive and natural interaction between human and computer. Although it is a relatively new technology, the device has attracted the attention of many independent developers and enthusiasts, but also of corporations and large companies. Thus, in a short time, numerous projects and applications using Kinect started to appear, bringing solutions in various fields. As these projects showed, Kinect can easily be used in advertising or in retail stores, where customers could try on clothes in front of a computer.

Furthermore, the device is well suited for non-entertainment areas. Currently, it is suggested for use in schools, where students could experience a much more interactive and exciting way of learning. For example, in chemistry class, students could interact with different substances and carry out experiments on a computer with Kinect, without being directly exposed to harmful chemicals or dangerous reactions. Clearly, the device attracts many students, and Microsoft, as well as other organizations and companies, has started creating student competitions and financing student projects and start-ups. An example is the Imagine Cup, an international competition aimed at students passionate about technology, which in 2012 added a special category dedicated to Kinect, named Kinect Fun Labs Challenge.

Finally, the huge impact it has had in a short span of time proves that Kinect is a technology with great, still not fully tapped potential, which can bring significant progress in the interaction between human and computer. We will continue the topic of Kinect in the next issues of Today Software Magazine, where we intend to publish ideas and information about what and how to develop applications with Kinect and its official SDK.

Useful Links

• Project Natal
• Microsoft E3 2010 Kinect Demo
• OpenKinect SDK
• Kinect for Windows SDK
• Nintendo Wii
• PlayStation Move



Agile Software Development using Scrum

At the beginning of 2008, the company I was working for decided we needed to become agile and use Scrum. I didn't quite understand why we should change our way of working, especially since we already followed the SDLC (software development life cycle): requirements, design, implementation, testing and maintenance, with releases scheduled at 3-4 month intervals using waterfall.

Tavi Bolog Development Lead at Nokia, enjoying Berlin and building software together with an enthusiastic team

Waterfall is a sequential process, as shown in the diagram below. The biggest criticism of waterfall is its rigidity. In practice, it has been shown that a software project can't be accurately planned from beginning to end, and it is very important that software teams adapt to new requirements during the project life cycle.

[Diagram: the waterfall stages, Requirements, Design, Implementation, Testing, Maintenance.]

Back to Scrum: the process was presented in 1995 by Jeff Sutherland and Ken Schwaber at OOPSLA '95. The idea of Scrum was initially used outside software development, as a method to increase the speed of execution and flexibility of a working team, by analogy with rugby (a team advances with the ball by passing it back and forth). Scrum uses an incremental and iterative approach, with the final goal of developing software which satisfies client needs (both functional and qualitative) and is deployed to a production environment as soon as it has some business value. This philosophy, shared by all agile methods (e.g. extreme programming, kanban, lean software development, etc.), was formalized in 2001 as the Agile Manifesto:
• Individuals and interactions over processes and tools
• Working software over comprehensive documentation
• Customer collaboration over contract negotiation
• Responding to change over following a plan
That is, while there is value in the items on the right, we value the items on the left more.


Implementing Scrum

As you will see, Scrum looks relatively simple to implement, because the roles and the process itself are simple. But this means that each team needs to define the details (for example, what "done" means). Scrum relies on human values (see the manifesto). Each team member needs to contribute to the team goals by being honest, on time, available to help colleagues, and cross-functional. The Scrum philosophy suggests that teams are self-directed and self-organized and that management interventions are minimized. The Scrum roles are:
• Product Owner: usually the voice of the client, making sure that what the team develops is prioritized and adds business value for the client.
• Scrum Master: responsible for facilitating the Scrum process (planning, grooming, daily standups, retrospective, review, etc.) and for making sure the team has all the artifacts needed to reach its goals: he removes blocks and protects the team from outside interventions that could disrupt the development process. The scrum master is not the manager or leader of the group, but a facilitator and a protector.
• The team: responsible for producing software which (most of the time) could be used by users in a production environment from early stages. The team is cross-functional and consists of up to 10 members.

Scrum is time-boxed in development iterations called sprints. The duration of a sprint is between 1 and 4 weeks. Teams that are new to Scrum are recommended to use 4-week sprints to adapt to the new way of working. Over time, the team can adapt the duration of the sprint as it fits best, but it is important to maintain the same sprint duration because it helps set a pace for the team. In my team we develop in 1-week sprints. The sprint should be considered atomic: once it has started, its content will not change. In exceptional cases, with the team's agreement, the content can suffer small changes. The team is also empowered to stop a sprint if the planned content is no longer relevant for the business.


Product Backlog


It is important that, before starting the sprint, the product owner creates a Product Backlog. This describes the desired functionality and its priority. The backlog is a living document, which can be modified constantly by the product owner to reflect the priorities and the new functionality that is needed for the product. The product owner owns the backlog, but I have seen situations where team members contribute (it can be a way to stimulate the team). On the other hand, the product owner is the only one who prioritizes the functionality.

A simple way to define functionality is to use the user story concept borrowed from extreme programming. A user story looks like: As a [user] I want to [do something] in order [to achieve something]. Here is a specific example: As a user I want to login in order to use the product functionality. A user story needs to have acceptance criteria, which define additional details of the functionality. Here are some examples of acceptance criteria for the aforementioned user story:
• The system must show page [X] in case of successful authentication
• The system must return error messages for a wrong user and password combination

For complex products, the backlog will grow considerably, and it is important to have it reviewed. This review process is called grooming. The review focuses on evaluating priorities, breaking complex user stories into smaller ones, and reviewing user story content and acceptance criteria. Grooming will make sprint planning easier. In my team, we review the backlog once per week for 1 hour.
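As a sketch of the structure described above, a backlog entry can be modeled as a story with a priority and acceptance criteria. This is illustrative Python only; the field names and the second story are my own, not taken from any Scrum tool or from the article.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class UserStory:
    """As a [user] I want to [do something] in order [to achieve something]."""
    title: str
    priority: int  # lower number = higher priority; set only by the product owner
    acceptance_criteria: List[str] = field(default_factory=list)
    story_points: Optional[int] = None  # estimated later, during sprint planning

backlog = [
    UserStory("As a user I want to login in order to use the product functionality",
              priority=1,
              acceptance_criteria=[
                  "The system must show page [X] in case of successful authentication",
                  "The system must return error messages for a wrong user/password combination",
              ]),
    # A hypothetical second story, to show ordering by priority.
    UserStory("As a user I want to reset my password in order to recover my account",
              priority=2),
]

# The product owner's prioritization determines the order the team pulls from.
backlog.sort(key=lambda s: s.priority)
print(backlog[0].title)
```

Grooming would then operate on such a list: reordering priorities, splitting stories that turn out too complex, and refining acceptance criteria.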

Definition of Done

Before planning, any team needs to define the rules that will be followed in order to consider a user story done. Here are a few examples of criteria I have used; of course, they need to be adapted to the needs and maturity of a scrum team:
• Requirements defined along with acceptance criteria
• Solution design agreed (and documented)
• Solution implemented
• Code reviewed by other developers
• Unit tests implemented and executed successfully
• Acceptance tests successfully executed (automation is recommended; here you can also add load testing)
• Documentation updated (configuration parameters etc.)
• Build process updated and all tests executed successfully
• Solution demoed at the end of the sprint and accepted by the product owner

Sprint Planning

The sprint starts with a meeting called Sprint Planning. It is time-boxed to a maximum of 8 hours (depending on the sprint duration) and split in two parts. In the first part, the product owner presents the desired scope of the sprint, along with the priority of the functionality from the backlog. The team tries to understand as much as possible of the requested functionality by discussing it and asking the product owner questions. In the second part of the planning, based on the information uncovered so far, the team defines the scope of the sprint by building the so-called Sprint Backlog. Here, the team adds the user stories (based on the product owner's priorities) it can commit to deliver in this sprint and potentially ship to production at the end of the sprint. The product owner needs to be around in case additional clarifications are needed or to discuss increasing or decreasing the sprint scope with the team.

Each user story is estimated by the team using story points. These represent the relative complexity of the user stories, not the effective time needed to finish a user story. A usual method to measure the complexity of user stories is to use the Fibonacci numbers: 1, 2, 3, 5, 8, 13, 21, etc. In my team we decided to stop at 13, considering that user stories which are more complex need to be divided into smaller user stories. Estimation itself is done as a team. There is enough statistical data to prove that a collective estimation is more accurate than individual estimations, so I will



not insist on this. Collective estimation can be done using planning poker: each team member comes up with his own estimation, but the whole team reveals the estimations at once, to avoid being influenced by other team members. Usually, extreme estimations are discussed to understand why people see the complexity of a user story so differently. Eventually, the team can try to re-estimate based on these discussions. In my team, we use the principle of majority in case the estimations are pretty close.

Story point estimation is relative and individual to each team, so it doesn't make sense to compare teams based on story points achieved. It is instead used to help a team project when in the future some functionality is likely to be implemented (sometimes this is necessary to build business plans etc.). This is possible because, after a few sprints, the team will increase and stabilize its velocity. The velocity represents the number of story points achieved in a sprint according to the definition of done. Using this statistical information, the team can create estimations for the future delivery of certain functionality. A team which is mature enough is able to stabilize the velocity of the sprint using the experience of previous sprints. Once planning is done, let the sprint begin.
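The velocity-based projection described above boils down to simple arithmetic. A minimal sketch, with made-up sprint history:

```python
import math

def velocity(points_done_per_sprint):
    """Average story points completed per sprint,
    counting only stories that met the definition of done."""
    return sum(points_done_per_sprint) / len(points_done_per_sprint)

def sprints_needed(remaining_points, history):
    """Rough forecast of how many more sprints the remaining backlog
    will take, assuming the velocity stays stable."""
    return math.ceil(remaining_points / velocity(history))

history = [18, 21, 20, 21]          # points done in the last four sprints
print(velocity(history))             # 20.0
print(sprints_needed(90, history))   # 5 sprints for 90 remaining points
```

Such a forecast is only as good as the stability of the velocity, which is why the article stresses keeping the sprint duration constant.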

Daily Scrum

Each morning, team members get together at the same time to discuss the progress of the sprint. Everybody must show up. To maintain focus, the meeting is held standing up. Each team member answers 3 questions:
• What did you do yesterday?
• What will you do today?
• What is blocking your progress?
The scrum master could "punish" the team members who are late or missing. I haven't seen this in practice, but paying 1 euro that is eventually sent to a charity fund sounds like a good idea. The scrum master keeps track of the meeting time, making sure it is time-boxed to 15 minutes. If there are impediments requiring special attention, these can be discussed right after the daily scrum. As a rule of thumb, the scrum master is responsible for removing impediments, but there are situations




when he needs support from the team or from externals, so delegation is acceptable as long as tracking the progress of the issue is still handled by the scrum master.

Sprint Burndown

Once the sprint scope is defined, the team must find a way to track the progress daily. We call this the sprint burndown: a graph showing how much is left until the user stories are done. Here is an example, where:
• the X axis shows the number of days of the sprint
• the Y axis shows the number of story points of the sprint (some teams use the total number of user stories)
• "Ideal" means the ideal progress for the team to finish everything that is scheduled
• "Actual" means the actual progress of the team
It is important that the burndown is updated according to the definition of done. As a rule of thumb, it is preferable to have 50% of user stories done rather than having all stories almost done; the point being that a done user story can be deployed to production and create business value.

Sprint Review

On the last day of the sprint, the team reviews the achievements together with the product owner. The team showcases the functionality being built to the product owner. The product owner is encouraged to give feedback. His feedback will be used by the team in the next sprints, or may require that a user story be re-planned and not considered done. In my team, we use the sprint review to also decide whether we can deploy to production the functionality built as part of the sprint. The review is time-boxed to a maximum of 2 hours.

Sprint Retrospective

After the review, the team analyses what went well and what needs to be improved in order for the team to function better. Each team member needs to come up with a few ideas. It is important to also focus the retrospective on the positive aspects that happened during the sprint, while for negative aspects the team needs to plan specific actions for the next sprint. People need to be open to feedback (giving and receiving) as part of the retrospective. If a team can openly discuss its issues and celebrate success, that's a clear sign of a mature scrum team. Items discussed in the retrospective must be as specific as possible. For example, "Communication was good" would make more sense phrased as "Andrew helped me to solve problem X. I couldn't have done it without his help."

When Scrum doesn't work

There are situations when using Scrum is not recommended, or at least some aspects need to be modified. Here are some examples:
• Distributed teams: if team members don't share the same location, it is pretty complicated to become a team. Distribution impedes the biggest asset of scrum: human interactions. The same can happen if the product owner is not collocated with the team. An alternative could be splitting the team accordingly, to limit the effect of distribution.
• Unbalanced teams: if a team does not have enough senior people, most likely the idea of self-organizing will soon turn into chaos. A 1:2 ratio (seniors/juniors) looks good.
• Organizations which don't offer support for Scrum: there are organizations considering that being agile means being able to change everything at any time without impacting the releases.
• Teams doing maintenance: maintenance mostly means fixing problems that appear on production systems. In this case, Kanban offers better support, because it removes the need to pre-plan a sprint, focusing instead on the most important things of each specific day. Some production issues may be hard to solve due to limited information, making an eventual estimation effort irrelevant. For me, Kanban made more sense than Scrum in this specific situation.
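The sprint burndown chart described in this article reduces to two series, which the scrum master updating a wall poster could compute by hand. A sketch with made-up numbers:

```python
def ideal_line(total_points, sprint_days):
    """Straight line from the full sprint scope down to zero on the last day."""
    return [total_points - total_points * day / sprint_days
            for day in range(sprint_days + 1)]

def actual_line(total_points, points_done_per_day):
    """Remaining points after each day, counting only stories that meet
    the definition of done (half-finished stories stay on the chart)."""
    remaining = [total_points]
    for done in points_done_per_day:
        remaining.append(remaining[-1] - done)
    return remaining

print(ideal_line(20, 5))                 # [20.0, 16.0, 12.0, 8.0, 4.0, 0.0]
print(actual_line(20, [0, 5, 3, 8, 4]))  # [20, 20, 15, 12, 4, 0]
```

The gap between the two series is what the team discusses at the daily scrum: a flat "actual" line early in the sprint signals stories that are started but not yet done.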


What is needed to do Scrum?

The product backlog must be maintained using a tool. There is a variety of tools, starting with a simple MS Excel sheet. Lately, I have used ScrumWorks Pro and JIRA. Both tools offer support to visualize burndowns, but we dropped this functionality in favor of a poster on the office wall, which is updated daily by the scrum master. This approach sounds better to me: it has a better impact on the team and can be updated in real time during the sprint.

Limiting work in progress is a recommendation coming from lean manufacturing. The team must focus on finishing work according to the definition of done, not on starting as much work as possible. If someone works on more than 2 items at the same time, their capacity to focus will drop, affecting productivity. I would recommend helping a colleague rather than starting more new work.

Automation tools are able to provide quick feedback on the quality state of the product. Any team doing scrum must use a continuous build system. We use Jenkins. The continuous build system must create a build and run all the automated tests against it. We use JUnit and several mock implementations (JMock, Mockito, Spring Test, etc.). Team members must

cover new code with automated tests. An interesting approach is TDD (Test Driven Development) which implies writing new code only if tests are failing. I havenâ&#x20AC;&#x2122;t seen this used so often. We also use Sonar to collect metrics around our usage of Java programming language and good software practices. For acceptance testing, we use Selenium along with a custom implementation allowing cool report generation implemented using Ruby on Rails. Acceptance tests are executed a couple of times per day against a scaled down version of a production environment and they verify the correctness of the application from a functional perspective. It is possible to use Selenium to verify the layouts of the pages, but this involves a lot of work compared to the benefits. In the past, we also used JMeter for load testing. To maintain the code quality and knowledge sharing, team could do code review or pair programming. Code review could be done ad-hoc or formal. There are some tools to help the process (like Code Collaborator). Pair programming is a technique imported from extreme programming where 2 developers share a single keyboard to write code for the same functionality. This approach is more

dynamic compared to code review and eventually less boring. Still, I haven't seen it used full time by teams, but rather when needed. Team members must be cross-functional. For several weeks we experimented with the full team doing manual testing: everybody was responsible for helping the testing people run manual tests. As an effect, the bug tracking system is heavily used, but eventually this is what we all want: to find all the issues before the users find them and to be able to release quality software. This experiment worked very well on our team.
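For readers unfamiliar with the TDD rhythm mentioned above, here is a minimal sketch using Python's built-in unittest module (the project described in this article uses Java and JUnit; parse_version and the test names below are invented purely for illustration):

```python
import unittest

# TDD order: the tests below were written first and failed ("red"),
# then parse_version was implemented just far enough to pass ("green").
def parse_version(text):
    """Split a 'major.minor' string into a tuple of ints (hypothetical example)."""
    major, minor = text.split(".")
    return int(major), int(minor)

class ParseVersionTest(unittest.TestCase):
    def test_parses_major_and_minor(self):
        self.assertEqual(parse_version("2.10"), (2, 10))

    def test_rejects_malformed_input(self):
        with self.assertRaises(ValueError):
            parse_version("not-a-version")

# Run the suite without unittest.main(), so the snippet stays embeddable.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ParseVersionTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The cycle then repeats: add a failing test for the next behavior, make it pass, refactor.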


Scrum is a lightweight process. Its power is in the team members interacting with each other to reach a common goal. The end goal is to have a happy customer who can quickly evaluate the functionality of the product. If an organization decides to use Scrum or another form of agile development, it is recommended not to bend the process from the beginning, but only after the teams feel comfortable using the specific agile technique. | nr. 2/2012



Functional Programming

Functional Programming (FP) is an old concept, developed in the 1950s when the Lisp language was created. It has its roots in "lambda calculus", a formal system in mathematical logic first formulated by Alonzo Church in the 1930s. Functions are the main element in FP. Programmers generally work with immutable data structures and pure functions, and functional languages usually provide a set of tools that make it easier to work with functions.
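To make the two terms concrete, here is a small illustrative sketch in Python (a language that does not enforce purity, so the distinction is purely one of discipline; the function names are invented):

```python
# Impure: modifies its argument in place, so every caller sees the change.
def add_item_impure(basket, item):
    basket.append(item)
    return basket

# Pure: builds and returns a new list, leaving the input untouched.
def add_item_pure(basket, item):
    return basket + [item]

original = ["apples"]
updated = add_item_pure(original, "pears")
print(original)  # ['apples'] - unchanged
print(updated)   # ['apples', 'pears']
```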

Ovidiu Deac Independent software consultant specialized in agile development methods and functional languages

Functional Languages

The best known functional languages can be grouped in the following families: Lisp (Lisp, Common Lisp, Scheme, Clojure), ML (Standard ML, Caml, OCaml, F#), pure functional languages (Haskell, Clean) and concurrent languages (Erlang, Oz, Scala). The table below shows a map of the main functional languages, their release years and the influences among them. None of these general-purpose languages can be exclusively functional. All the languages mentioned above are functional, but they



also have imperative elements, because a purely functional program, with no "side effects", cannot communicate with the exterior. Somebody once joked that the only way to observe a pure functional program is to place a hand on the computer's case and feel it heating up. Therefore, programming languages are "multi-paradigm": some are prevalently imperative with functional elements, while others are prevalently functional with imperative elements.


Lisp/Common Lisp/Scheme

Lisp was invented in the 1950s as the first language to support functional programming techniques. It is a multi-paradigm language with dynamic, strong typing and eager evaluation. The name LISP derives from "LISt Processing", as it is specialized in working with lists. Everything in Lisp is made up of lists, both data and code. Lisp programs can manipulate source code as a data structure, giving rise to the macro systems that allow programmers to create new syntax or even new domain-specific languages (DSLs) embedded in Lisp. The most widely known general-purpose Lisp dialects are Scheme, Common Lisp and Clojure.
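Python has nothing comparable to Lisp macros, but its standard ast module gives a faint taste of treating source code as a data structure that a program can inspect and rewrite (an illustrative sketch only, not a Lisp equivalent):

```python
import ast

# Parse source code into a tree of nodes: code represented as data.
tree = ast.parse("1 + 2 * 3", mode="eval")

class MulToAdd(ast.NodeTransformer):
    """Rewrite the tree, replacing every multiplication with an addition."""
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Mult):
            node.op = ast.Add()
        return node

new_tree = ast.fix_missing_locations(MulToAdd().visit(tree))
value = eval(compile(new_tree, "<ast>", "eval"))
print(value)  # 1 + (2 + 3) = 6
```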


Clojure

Clojure is a recent dialect of the Lisp programming language, created in 2007. One of its main features is support for concurrent programming through software transactional memory. Clojure runs on the Java Virtual Machine (JVM) and the Common Language Runtime (CLR). ClojureScript, a subset of Clojure, can be compiled to optimized JavaScript.

ML/Standard ML

ML is an impure functional language developed in the 1970s at the University of Edinburgh. ML includes static typing and it uses eager evaluation. The major dialects are Standard ML (SML), Caml, OCaml and F#. ML has strongly influenced other languages, such as Haskell and Clean.



Erlang

Erlang is a functional and concurrent general-purpose language. Developed in the 1980s by Ericsson, it was initially used in telecom applications. It was designed to support concurrent, distributed, fault-tolerant and soft-real-time applications, and it features dynamic, strong typing. Concurrency is implemented using the Actor Model, and the virtual machine is able to support hundreds of thousands of independent lightweight processes. Each process has its own data; with small exceptions, shared data is read-only. Therefore, the garbage collector can run per process, with excellent performance results.
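The shape of the actor idea, a process that owns its state and reacts only to messages, can be mimicked in Python with a thread and a pair of queues (a loose illustration with invented names; it has none of the scale, isolation or fault tolerance of Erlang's lightweight processes):

```python
import queue
import threading

def counter_actor(mailbox, replies):
    """Owns its state (count); the outside world can only send messages."""
    count = 0
    while True:
        msg = mailbox.get()
        if msg == "stop":
            break
        elif msg == "incr":
            count += 1
        elif msg == "get":
            replies.put(count)

mailbox, replies = queue.Queue(), queue.Queue()
actor = threading.Thread(target=counter_actor, args=(mailbox, replies))
actor.start()
for _ in range(3):
    mailbox.put("incr")
mailbox.put("get")
result = replies.get()   # blocks until the actor answers
print(result)            # 3
mailbox.put("stop")
actor.join()
```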


Scala

Scala is an object-oriented and functional programming language. It is statically and strongly typed and uses type inference. It runs on the JVM, the CLR or Android and can call Java or C# libraries. In addition to the classic synchronization primitives, Scala's standard library has support for the actor model, similar to Erlang's approach.

Haskell

Haskell is a general-purpose, purely functional, lazy programming language. The support for imperative constructs is implemented using monads, and functions producing side effects must declare them in their signatures. Being purely functional, concurrency and parallelism are easy to implement. For parallelism one can use classic threads combined with shared mutable data (MVars). There are other solutions as well: Software Transactional Memory, a concept originally implemented in Haskell; "sparks", operations that can be executed in parallel if the virtual machine considers it profitable; and "data parallelism".

OCaml

OCaml is a multi-paradigm ML language providing support both for functional and for object-oriented programming. It uses static, strong typing. In most cases the types can be omitted and the compiler does the type inference. OCaml is notable for performance close to that of a similar implementation in C.

Clean

Clean is a general-purpose, purely functional programming language developed in the 1980s. It is pretty similar to Haskell, but it has a different approach to dealing with side effects: functions producing side effects use the uniqueness typing system, i.e. an object, once altered, cannot be used in further computations. This technique allows the Clean compiler to verify correctness and to generate efficient code. Clean is considered to be more efficient and easier to optimize than Haskell.

F#

Developed by Microsoft and targeting the .NET framework, F# is partially compatible with OCaml. It is fully supported by .NET and runs on the CLR and Mono.

Techniques in Functional Programming

This is a presentation of a few basic elements which can be found in functional languages. Some of them are also present in imperative languages.

The Functions are "First Class Citizens"

A functional language has functions as "first class citizens". This means it treats functions like any ordinary value: a function can take other functions as arguments and return new functions. Python decorators are an example:

def log_call(f):
    def wrapper(*args, **kwargs):
        print "Calling f with arguments (%s,%s)" % (args, kwargs)
        try:
            result = f(*args, **kwargs)
            print "Result: %s" % result
            return result
        except Exception as ex:
            print "Exit by exception %s" % ex
            raise
    return wrapper

The above decorator can be used like this:
@log_call
def my_function(a, b, c):
    return (a+b)/c

my_function(1, 2, 3)
my_function(4, 2, 0)

... with the following result:

$ python
Calling f with arguments ((1, 2, 3),{})
Result: 1
Calling f with arguments ((4, 2, 0),{})
Exit by exception integer division or modulo by zero
Traceback (most recent call last):
  File "", line 18, in <module>
    my_function(4, 2, 0)
  File "", line 5, in wrapper
    result = f(*args, **kwargs)
  File "", line 14, in my_function
    return (a+b)/c
ZeroDivisionError: integer division or modulo by zero

An example of a function passed as an argument in Haskell could be a "forward pipe" operator, which can be defined as:

x |> f = f x

...and it takes as its arguments a value and a function, and returns the result of applying the function. This allows us to use the following syntax:

double x = 2 * x
doubleAppliedThreeTimes x = x |> double |> double |> double
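Python has no |> operator, but the same left-to-right pipeline can be approximated with a small helper function (pipe and double below are hypothetical, defined only for this illustration):

```python
from functools import reduce

def pipe(value, *functions):
    """Thread value through the functions from left to right, like x |> f |> g."""
    return reduce(lambda acc, f: f(acc), functions, value)

def double(x):
    return 2 * x

print(pipe(5, double, double, double))  # 40
```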

Once functions are used like any ordinary objects, there are a few essential elements which make working with functions easier.

Inner functions and closures

Like in the example above, functions that are declared inside other functions are called "inner functions". Sometimes an inner function has access to values inside the namespace of the parent function; in this case, it is called a "closure". The values are transparently transmitted to the inner function. See below an example of a closure in C#:

using System;

class Example {
    static Action CreateAction(string name, int value) {
        Action a = delegate {
            System.Console.WriteLine("Action name: '" + name + "' value: " + value);
        };
        return a;
    }

    static void Main(string[] args) {
        Action a1 = CreateAction("action1", 10);
        Action a2 = CreateAction("action2", 20);
        a1();
        a2();
    }
}

In this example, the inner delegate uses the variables name and value from the context in which it was created:

$ mcs closure.cs && mono closure.exe
Action name: 'action1' value: 10
Action name: 'action2' value: 20

Anonymous Functions

When adopting an FP style, simple functions used in one single place are needed. Generally, they are used together with higher-order functions such as map, fold and filter, or as predicates for functions such as find, count etc. In the previous example, the delegate is an anonymous function. See below an example of using an anonymous function in C++11 as a predicate when calling count_if:

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v = {1,2,3,4,5,5,5,6,6,7,7,7,8};
    int even_count = std::count_if(v.begin(), v.end(),
                                   [](int n) { return n % 2 == 0; });
    std::cout << "Even numbers: " << even_count << std::endl;
}

Function Composition

In mathematics, if h(x) = f(g(x)) we can write h = f ∘ g, which is read "h is the function obtained by composing function f with function g". It is obvious that the second syntax is much cleaner, as it expresses the idea directly, without details about how the arguments are transmitted. Suppose that the functions reverse, which reverses a list, and sort, which sorts it in ascending order, have already been implemented, and we want to implement a function that sorts a list in descending order. In Python the code would look like this:

def reverse_sort(seq):
    return reverse(sort(seq))

In C++ the code is similar, but additionally it has the curly brackets to define the blocks, plus the argument types:

std::vector<int> reverse_sort(const std::vector<int>& seq) {
    return reverse(sort(seq));
}

In Haskell it can be implemented by function composition, using the operator (.), or by "forward arrow", using the operator (>>>). This way, the two functions below are (almost) identical:

reverseSort = reverse . sort
reverseSort' = sort >>> reverse

In F#, instead, the preferred way is the forward composition operator >>, like this:

reverse_sort = sort >> reverse

...and it makes for an easy to read syntax. As you can see, in Python or C++ we face the details of transmitting arguments, while in the languages supporting function composition the code is much more compact and brief. This manner of writing functions without explicit arguments is called "point-free style". In functional style, when possible, point-free style is preferred.

Partial Application

In the lambda calculus a function can take a single input. A function that takes 2 int inputs and returns an int is seen as "a function which takes one int input and returns a function that takes an int input and returns an int". Therefore, for a function that takes N inputs, we can fix the first M inputs (M < N) and we get a function that takes N-M inputs. For example, in Haskell, if the multiply function is defined like this:

multiply a b = a * b

…the double and triple functions can be defined like this:

double = multiply 2
triple = multiply 3

Calling double with input 10 returns multiply 2 10. This means that a function multiplying all the elements of a list by 2 could be written like this:

doubleAll xs = map (multiply 2) xs

... or written in "point free" style:

doubleAll = map (multiply 2)

...or using an anonymous function instead of multiply:

doubleAll = map (\x -> x * 2)

This was possible due to the fact that the language supports functions as "first class citizens", anonymous functions and partial application. Note: additionally, Haskell supports "operator sections" for binary operators, therefore the above function could be written even more concisely like this:

doubleAll = map (*2)

It is remarkable how concise and easy to read this code is, as soon as we get familiar with the map function.
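Python does not curry functions automatically, but functools.partial from the standard library gives the same "fix the first arguments" effect described above:

```python
from functools import partial

def multiply(a, b):
    return a * b

# Fix the first argument, as with Haskell's `multiply 2`.
double = partial(multiply, 2)
triple = partial(multiply, 3)

print(double(10))                    # 20
print(list(map(double, [1, 2, 3])))  # [2, 4, 6]
```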

List comprehension

List comprehensions express transformations made on lists in a clear and concise way. In Python, a function that returns the elements of the Cartesian product of two sets where the first component is bigger than the second looks as follows:

def f(m1, m2):
    return [(x, y) for x in m1 for y in m2 if x > y]

In Haskell the same function is written:

f m1 m2 = [ (x,y) | x <- m1, y <- m2, x > y ]

In Erlang, besides list comprehension, there is also binary comprehension, which produces binary data structures instead of lists. This functionality is very useful when


working with binary protocols, which is one of Erlang's strong points. All the above techniques give us the possibility to write very clear and concise code. This is the reason why modern imperative languages have adopted functional techniques.

Purely Functional Languages

A purely functional language is a functional language where data structures are immutable and functions don't produce undeclared side effects. Languages such as Haskell or Clean separate the pure functions from the impure ones, but they use different approaches. In Haskell, a monad describes the way in which a sequence of impure functions having a certain type of side effects is executed, and there is a different monad for every type of side effect: all functions having input/output effects are executed in the IO monad, all functions modifying external state are executed in the State monad, etc. In order to produce side effects, a function needs to be executed in that particular monad. As a result, the compiler returns an error if a function which is not in monad X calls another function with side effects of type X. In Clean, the impure functions which alter object state receive those objects as "unique objects". A unique object can be used only once. Consequently, once a function has altered a unique object, it returns the new version of the altered object besides the normal return value. The caller continues to use the new object; the compiler forbids the use of the old one.
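Mainstream languages cannot enforce this separation the way Haskell's monads or Clean's uniqueness typing do, but the underlying discipline, a pure core surrounded by a thin impure shell, can be imitated anywhere. A hypothetical Python sketch (all names invented for illustration):

```python
def apply_discount(prices, percent):
    """Pure core: no I/O, no mutation - easy to test and reason about."""
    factor = 1 - percent / 100
    return [round(p * factor, 2) for p in prices]

def main():
    """Impure shell: all I/O lives here, at the edge of the program."""
    prices = [10.0, 25.5]  # in a real program: read from a file or database
    print(apply_discount(prices, 10))

main()
```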

Conclusions

Advantages

First, functional languages make available several high-level tools that increase productivity. Then, in purely functional languages, the fact that side effects are very clearly localized helps programmers write more robust code. It prevents discovering that, besides returning the product of its arguments, a "product" function also writes the result to a file. This separation can be guaranteed by the compiler. As a result, the clarity and quality of the written code is improved. They say that in Haskell compilation is the most difficult part and that once the code compiles, it is bug free. Surely

it is an exaggeration, but it is partly true. The typing system is so well designed that errors which are normal in other languages are spotted at compile time. With impure functional languages, side effects may appear, but generally they are discouraged, and the functional work style makes developers aware of their presence. Another advantage of working with pure functions is that testing, refactoring, parallelism and various optimizations become much easier; this was discussed in the previous article of this series. Even if we don't use purely functional languages, parallelism is much easier if a functional working style is adopted. Languages such as Erlang, Clojure, F# and Haskell are known for the ease of writing parallel applications. The "many-core" tendency in today's hardware architectures asks for more parallelism, and estimations show the number of cores in a "standard" processor will double every 2 years. This means that in the very near future we will be working on systems with tens or hundreds of cores. Applications will only be able to take advantage of the large number of cores if their architecture allows massive parallelism and processing on tens, hundreds or thousands of execution threads. As an example, in Erlang, applications with tens or hundreds of thousands of threads occur frequently in production. It is pretty obvious that the most appropriate languages for writing heavily parallel applications are functional languages, which frequently make parallelism almost trivial.
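Because a pure function's calls cannot interfere with each other, distributing them over workers is mechanical. A small Python sketch using a standard-library thread pool (in CPython, true CPU parallelism would require processes rather than threads, but the shape of the code is the same):

```python
from concurrent.futures import ThreadPoolExecutor

def square(n):
    # Pure: the result depends only on the input, so calls cannot
    # interfere with each other and may run on any worker.
    return n * n

# map distributes the calls over the pool and preserves input order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, range(8)))
print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```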


Disadvantages

The general perception is that functional languages have lower performance. This is only partially true. Generally, a functional working style uses immutable data structures, which are less efficient. For example, modifying an element in a list implies: 1. Creating a new list with the same elements from the start up to the modified element. 2. Creating a new node with the new value of the element. 3. Linking the new node to the rest of the existing list. Modifying an immutable list is more expensive both in CPU time and in memory usage. On the other hand, there

is the advantage of not needing synchronization. The original list stays the same and its other users are not affected at all. This is a benefit if those users are in different execution threads or on different machines. Still, we have the possibility to use mutable data structures where necessary, even in a functional language. So we can identify the areas with performance problems and trade the elegance and robustness of the functional code for the performance of the imperative code, when it is really needed. So the statement that functional languages are weaker from the performance point of view is false; the data structures used make the difference. Regarding performance, it is quite frequent that applications originally written in imperative languages and then rewritten in functional languages such as Haskell, OCaml or F# are at least as fast as the original implementation, but with smaller and more robust code. Another problem would be the fact that in "lazy" functional languages (e.g. Haskell) it is difficult to estimate how long a certain operation will take. This is a problem especially in soft-real-time systems. Yet the main problem which slows down the adoption of functional languages is the fact that programmers have to change their way of thinking. In the functional approach the question is "what does the application do?" and not "how does it do it?". Thinking like this, the need to produce side effects disappears in most of the code, except for those components that interact with the surrounding imperative world. In spite of that, the industry makes more and more room for functional languages. A few examples: very popular applications like RabbitMQ, CouchDB, Riak or Ejabberd have been written in Erlang; Haskell is used more and more frequently in applications in which code quality is essential; and on the JVM and .NET platforms, F# and Clojure have a growing user base.



Exploratory Testing, a Fashionable Debate


I've heard/seen quibbling about Exploratory Testing in various circumstances (groups, meetings) lately. My intention is to make a useful summary (I hope!) of all the ideas I found and tested. I have been listening to Pro and Con dialogues and monologues about exploratory testing. Following the natural track of an application, I will try to outline and underline some of the arguments.

Ioana Matros QA Engineer at Gemini Solutions Cluj



Exploratory testing is an approach to software testing without considering and executing already written test cases. Exploratory means learning the product, creating and matching a series of tests with a series of data, and finally executing them. And all of this done by one individual, at the same time and in a casual manner. Let's see… To start from the very beginning, we are given an application which we have to prove good or not good. It seems the opinions vary a lot according to everyone's comprehensiveness. Obviously, the discussion is much more complicated and the truth is somewhere in between. This time I choose the situation where we demonstrate our application is not the best.

The Context

As bugs are sort of the centre of attention in our field, we are going to discuss them. It's our pleasure, isn't it? Having the application, all should start from some base scenarios to follow the flow. Developing these scenarios, we are going to notice whether every little part of the whole application plays its part correctly. We could also boast about some tests written upon requirements; in my opinion, this would be the ideal case. The Client who knows what he wants functionally, technically and aesthetically is a rare species. If I were the Client, I'd know I wanted a qualitative product, delivered yesterday, with minimum costs. All clients know this, don't they? Obviously, this approach affects our fluency and the testing success. Thinking about experiences and discussions, I could say I frequently met this phrase – this time from the Client: "If you can't convince them, confuse them." Now that a context has been created, we could utter the first objection: using Exploratory Testing, it is possible to miss the base flows and to deliberately deviate believing the base scenarios work. It's not the most constructive thinking, but I believe I may have sinned at least once.

Con

• Let's assume we have finished exploring the application and, as expected, we have found a lot of bugs that made our day nicer or angrier, as appropriate. The expression "Finished


exploring the application" is quite a relative one; it's almost as if we said the application is bug-free. Is it possible? Objection! How do we manage and measure what we have tested? We don't want to repeat the same exploration. It's obviously more difficult than passing or failing some already written tests. Actually, there would be a Pro here, as there are measuring methods: Checklist, Recording, Notes, Bug Hunt, Questioning…


• We cannot use automation, a fairly important testing process. And, as there is always a solution: automating the bugs discovered during the exploration of the unknown application is helpful – tests which can be run every time a bug is fixed or code is modified in that area.

Pro

• The approach itself teaches you how to learn the product.
• You do and you learn to do all kinds of things: to create and run tests, to investigate, to manage...

• In my opinion, it uses the more inquiring and creative part in everybody. It's the time to ask a lot of questions – "what will happen if", "what if" – compared to testing by following steps written in Excel, Word, QC documents or any other medium.
• It should lead to an increase in responsibility, as every explorer must decide the track he will use during testing. In order to accomplish it, he must find, investigate and use every helping means, starting from (non)existing specifications to similar tested or rival products. It helps to be aware that every exploration is unique because every tester is unique.
• It sometimes leads to "great discoveries" such as "NullPointer" in Java or "TooManyOpenCursors" in Oracle, which might not be found during "à la carte" testing. The NullPointerException moment is enjoyed by testers, as it usually marks the end or, rarely, the beginning of an investigation that becomes more personal because of its manifestation and seriousness. I am aware that finding such errors is relative, but in this article

I generally talked about code which is not first-hand code, assuming we are in Code Complete. These are some of the arguments disputed at the moment. Every one of them may become the subject of a discussion we are going to postpone until a future time, when there will be more time, more paper and more ink for my pen. If I had to draw a conclusion, I'd say Pro, but not as the only testing method.





Hi! I am Horatiu Mocian, the founder of SociaLook, a startup which tracks the activity of companies in social media (focusing on Twitter). Even if my startup is quite young, about 1 year old, the story behind it is quite interesting, and I am going to tell you all about it.

The Beginnings

Horaţiu Mocian Young entrepreneur in social media.



The SociaLook story started in April 2011, when, after recovering from my first start-up (which failed), I started thinking about my next venture. The idea came to me naturally: while I was working on Newistic (my first start-up), I was in Romania, and I needed to get in touch with healthcare communications agencies in the UK and US to sign them on as customers. To achieve this, I spent a few weeks using LinkedIn and Twitter to find people that are actively using these products, and contact them directly. This approach worked, and I managed to set up meetings with 4 companies in London, and I converted 3 of them into customers. For my own small sample, LinkedIn and Twitter proved to be more efficient than e-mail. I noticed that the process of finding the right people using social media can be automated, and this looked like a big opportunity for a startup. I convinced an angel investor, Gerard Baz, (whom I met when I was working at Newistic), to invest in this idea, and this is how SociaLook was born.

First Product

SociaLook's first product was pretty straightforward: a website that contained detailed data about startups from Europe and the US, focusing on the rounds of financing raised and the people behind the companies. For each person, the product showed all their social media accounts: Twitter, blog, LinkedIn, Facebook. The most interesting functionality was showing the connections between people based on their investments and the positions occupied in various companies. For example, if a user was searching for the best connection to a certain VC partner, SociaLook could provide the following link: the user is followed by someone on Twitter who is working at a startup that raised investment from that VC fund. Unlike LinkedIn, SociaLook offered connections from multiple social networks and computed a score for each connection – a connection between 2 people is stronger if they worked together for 10 years than if they met at a conference – while on LinkedIn all connections are the same. We started working on the product in May 2011, in Targu Mures. After we had built a prototype, the next step followed: product validation. Therefore, in


June and July, I spent 3 weeks in London, where I met with other entrepreneurs and investors. The feedback I received was promising: most of the people I talked to saw an opportunity for this kind of product. We decided to keep working on it and to prepare a beta version for the month of September. And this was my main occupation in the summer: I led the product implementation, and managed to get the beta version ready by the end of September.

Pitching Tournament

Starting in September 2011, my focus was on attending a series of events in order to increase the visibility of the product. I participated in Capital on Stage in Amsterdam (September) and in Venture Connect in Cluj (October). The highest peak was reached in November: I pitched SociaLook on stage at How To Web, the biggest technology conference in South-Eastern Europe, where I took 3rd place (the best placing for a Romanian team). Then I pitched it in front of investors at Venture Connect in Bucharest. This event brings together the best startups from Romania and the most active Romanian investors, as well as a good number of foreign investors. I received good feedback, and I started discussing a new round of financing. The tournament ended with a trip to London in December, to meet with potential customers. All this time, we kept receiving feedback from customers who had already started to use the product. The overwhelming opinion was that the product was interesting, but its utility was low. One reason for this was that the target market (entrepreneurs looking for investment) was quite small. Additionally, a founder who hadn't raised at least one funding round couldn't afford to pay for this kind of product, and after they had raised financing, they had no need for it (even for the later rounds, the existing investors are much more useful than the ones found through SociaLook). At the same time, we became aware that the product was difficult to port to other verticals (sales, business development), because we didn't have reliable data sources, and most of them are quite expensive (LinkedIn, CapitalIQ, Bloomberg).

SociaLook 2.0

In January 2012, after we had put together all the feedback received until then, we started reinventing SociaLook. We took several key decisions: no data source other than social media would be used, we would build an automatic profile for each company, and the main data source would initially be Twitter. As a consequence, our focus changed from finding relevant connections to finding, for any company, the employees who have Twitter or LinkedIn accounts or blogs. Moreover, the new product would be addressed to people working in sales and business development. Since March, we have been working on a new design of our product, which reflects the new business direction. At the time of writing this article, we are working intensely on the new product, which will be tested by users in April. Also in April, we will make several pitches and demos of the product in London.

The Future

The plan for the next months is quite clear: finishing the implementation of the new product, pitching at as many events as possible, discussing a possible data partnership with LinkedIn and growing the team. If everything goes according to plan, in the summer we will be in talks for a new round of financing and expanding our presence in London. Since I mentioned the team in the last paragraph, I'll describe its composition. I am the only one involved full-time, but our initial investor is involved in deciding the strategy and direction of the company. Razvan Roman joined the team as an advisor in January 2012, and he is involved in defining the product for sales people and bringing in prospective customers. I have created the logical design of the product and built the back-end, and for the visual design and front-end programming I work with Reea, a company based in Targu Mures. In the following months, I will be looking for at least one technical person to join the team as CTO and become responsible for the technical side of our product. The main technical challenges at SociaLook are processing a large volume of data and building specific components

for NLP (natural language processing) analysis.


Since this article was written (at the end of March 2012), Horatiu has decided to scrap the product for sales people (based on feedback from beta customers and data acquisition issues) and to pivot SociaLook into a tool for Twitter analytics at the company level. It works by gathering all the Twitter accounts of a company (employees as well as corporate accounts). Then it shows statistics, like the breakdown of different types of tweets (replies, mentions, retweets, URLs, personal), whether people are talking inside the company or with other companies, etc. The new product is already in beta, and Horatiu is currently looking for beta customers.

Editor's note: In the current edition of our magazine we invited Horatiu Mocian (@horatiumocian) to talk about his latest startup, SociaLook. We had two main reasons for this choice: first, the desire to show the evolution of an early-stage startup from the financial and marketing perspective, focusing on the strategy around defining and validating the product and less on the technical details; second, the importance of the environment where a startup is nurtured. Since Horatiu covers the first aspect in the rest of the article, we would like to offer some details about the environment behind this endeavour. In Targu Mures there have been strong efforts to develop the IT&C industry, both collaborative, like the Mures Tech Cluster, and individual, coming from people like Dumitru Radoiu (a university professor who contributes both through his teaching and through the support he offers local startups together with Sysgenic) and Dan Masca (the founder of Reea, a web development company that actively helps startups with financial investments and know-how). | nr. 2/2012



Interview with Dan Luțaș – Portrait of an expert in information security

Marius Mornea Software Engineer and Founder of Mintaka Research



I will start by setting forth a dilemma concerning the selection of the interviewee for this issue of the magazine: Mr. Dan Luțaș. We were colleagues in high school and college, and I'd like to consider him a good friend. At first, this caused an internal conflict between the objectivity required in choosing and presenting this article and the subjectivity brought about by the interference of my personal life with my professional career. At the local level there are many experts whose achievements recommend them for the interview without generating this conflict, so I'll be brief in explaining why I stuck to this choice. Everything started with a mutual friend who showed me the article "30 under 30, generation restart" published by Forbes Romania, where Dan's name stands alongside those of 29 other young people with high potential to shape the future. Passing over the sensational, and overlooking his modest reaction (he considers he was mentioned there by mistake), I objectively analyzed his professional profile and acknowledged what I already knew: Dan is one of those who fully identify with what they do. He belongs to that category of people who dedicate over 10,000 hours of continuous effort to a purpose that is not just a hobby, career or passion, but a true manifestation of the balance between daily routine and who they really are as individuals. His work at the office, his doctoral research and his teaching hours at the university are all dedicated to security matters, just as, in high school and college, programming and then engineering were a permanent and significant part of who he is. I consider this type of dedication one of the essential attributes of a professional, irrespective of the field of activity. So let's find out how he got here, what defines him and what the preoccupations of an IT engineer specialized in information security are. Since the interview took more than 30 minutes, and the first attempt to transcribe it in full significantly increased its size, I decided on a concise style and, with your permission, I will synthesize below the main ideas we touched upon.

Everything started 29 years ago in Satu Mare, where Dan was born and from where his family moved to Cluj just three days after his birth. His first contact with a computer took place after the revolution, when his curiosity repeatedly helped him break his grandfather's computer, which needed repairs every two weeks. At the same time, he met his first virus, OneHalf, which entered the system from a floppy disk containing a game and which offered him greater challenges and rewards than the game itself: "I wanted to see what it was about, I kept digging and I liked it, and then I made up my mind: this is what I should do, this is the future. The IT field came naturally, and the same happened with the engineering profession, so everything was drawn from the very beginning. Thus, the 'Tiberiu Popoviciu' Computer Science High School, the Computers Section of the Faculty of Automation and Computer Science of UTCN, the engineering diploma, the master's degree and my current status of doctoral candidate made up the natural course of events."

He started his career in college; during his first three years there he wrote various programs in C and Java, showing no particular interest in security. His interest in the field was stirred when he saw that "Bitdefender was recruiting, and thus I returned to my childhood passion. I said to myself: let's try and see what comes of this [..] and so, together with 4 colleagues, we set up Bitdefender Cluj in 2005." Over the last 7 years he has held a variety of positions, starting as a virus analyst (for two and a half years), when "I dealt mostly with reverse engineering, I studied x86 code, I reviewed numerous file formats, I analyzed many computer viruses and I worked many, many shifts (working as an analyst is a non-stop job, requiring continuous focus and attention); I worked weekends, I spent many nights, but (all I can say now is that) it was a rewarding job; I felt I was permanently learning something new and could thus go further and further." He then advanced to developing detection technologies, moving from user mode to kernel mode, a necessary step to keep up with the threats in this field. He is still working on this today, only now as the leader of the team researching kernel protection and proactive methods, at a higher technological level, advancing to hypervisor and introspection techniques. The development of his career brought greater challenges, and with them an awareness of the complexity of information security threats.
While "at the beginning, the main goal was to create a virus by means of which you could prove your skills, display a message on the screen or compromise data, this typology has vanished nowadays. Today the trend points to the theft of information and its exploitation. We have to understand that nowadays it is very profitable to design malware able to spy and steal identities. Therefore, we are no longer dealing with solitary virus writers focused on a single idea; we are talking about a whole malware industry that uses malware development methodologies with definite lifecycles, permanently updated, controlled and deployed. What we are facing is professional, and to respond properly to such threats we have to have a professional approach." Currently, the industry takes a proactive approach: "It is much easier to prevent than to treat." Basically, the model has evolved from using the antivirus to clean an already infected computer to preventing infection in the first place, by stopping attacks before they are launched.

Returning to the present day, we decided to dwell for a while upon Bitdefender, which, in my opinion, is one of the most successful Romanian IT companies, having in its portfolio a product that permanently fights for the top position in its category at the international level and that generates significant Intellectual Property. In this I see a positive sign, reliable proof of the competence and competitiveness of the IT environment in Romania, but at the same time a splash of color in the outsourcing sea. Therefore, I wanted to hear an opinion from the inside: "Bitdefender occupies the top position, and if you look at the last year, we held first place in detection and cleaning. We are on top, we are the best and we are proud of it," said Dan, smiling.
"What I want to say is that this is a 100% Romanian product; the entire technology is developed exclusively in Romania, nothing is made abroad, there is no outsourcing, and everything is built by Romanian experts." These local teams are also the first ingredient in the recipe for success: "After all, I think people are the most important factor of success, and there are people within Bitdefender who have been working continuously for 12-13 years and who still work with the same passion and dedication, aiming to be the best. We can also talk about reaction time and the ability to anticipate what is about to happen. My boss used to have a saying: 'the faster you are, the less you get fired.' Speed is an essential component in this field: if you fail to prevent, you have to have detection within hours, even minutes if possible. Another component of success is innovation [..] Bitdefender was the first antivirus product to provide online signature updates." This is just one example of an innovation brought by Bitdefender to the field of information security and later taken over by the industry, becoming a quality standard for service offerings.

Letting the company's achievements speak for themselves, we focused for a while on that first ingredient: the teams of developers. How, and where, do you find people in IT security, a niche sector? "It is almost impossible to find a fully trained and properly qualified specialist in Cluj, or even in Romania. [..] training is required, and at Bitdefender this training lasts 4 months, sometimes even more. That's why we focus on finding people with potential, interested in this field [..] who will be able to reach the level we require. By getting involved in academic life, we try to attract people straight from the source." However, Dan's involvement in academic life is not primarily about recruiting; he is more concerned with the educational process itself, teaching lab classes in the belief that "the moment you acquire relevant knowledge you should not keep it to yourself; the general idea is to share it, to help others acquire it and to increase the level of awareness of the people around you," and, in particular, because "security is only as strong as its weakest link, and its weakest link is the people. [..] this is what I try to do through my involvement in the educational process, [..]
to promote this field and to draw attention, from the very beginning, to the kinds of risks people are exposed to." There is also the pride you feel when "your students were actually interested and felt they could learn new things. Briefly, this is my reason; this is why I want to run the lab sessions: to disseminate knowledge and to raise awareness of information security." Security, therefore, is closely tied both to technology and to the training of the human factor: formally, through proper policies within companies, from the development phase all the way to production; and educationally, since "information security should be taught constantly, from the first academic year to the last (the 6th), including in the master's program."

For an exhaustive exploration of the academic side, we compared the research for his doctoral thesis with the research conducted in industry. Although Dan tries to harmonize the two, there are several fundamental differences: in academic research "you submit an idea, without yet taking into consideration aspects related to performance, scalability, stability and real-life implementation, while in industry everything is upside-down." Non-functional, practical requirements take precedence over revolutionary theoretical ideas. "Protected by the academic aura" you may conduct "research for research's sake, hoping to find a cure able to solve everything," but in industry, Dan considers, it is important to be pragmatic and to stay within the range of rapid applicability, rather than postponing implementation for the next generations and advancing only in theory.

Returning to the economic environment, we briefly reviewed the current changes in the IT sector, such as the paradigm shift from the old fixed structures to the combination of mobile and cloud, from both the industrial and the individual consumer's perspective. This change opens up a whole range of new vulnerabilities: loss of confidentiality, the need for Service Level Agreements to ensure the access to and integrity of data, and so on. Security has to keep up, and the Bitdefender team has already launched a reliable solution for infrastructure in virtual environments such as VMware, Hyper-V and Xen.
Although rapid changes have been recorded in all ICT sectors, the speed of change and reaction in the security sector has proved outstanding. In practice, you have to be permanently prepared not only for potential attacks from malicious individuals but also for changes in software technologies and in products with high distribution rates on the market.




Before exploring the wide range of options from which one can choose when building a career in information security, we brought up the sensational side of this type of job. There is plenty of 007-style news (Stuxnet, cyber wars and so on), which makes this one of the few IT branches to offer adrenaline in the workplace. So I asked my interlocutor: how much of the daily routine is sensational, how much of it is life-and-death fights? Smiling, Dan drew a parallel between the James Bond movies and real-life secret agents, where less than 10% of the activity involves movie-style adrenaline and action. There are information security companies that provide highly advanced and specialized solutions, but these operate at other levels, requiring relevant certifications and major investments. Even so, some highly sophisticated threats, such as Stuxnet (which managed to compromise the Iranian nuclear program), may be caught by an ordinary antivirus. The difference lies in how corrective measures are applied, since, unlike in the movies, real life has well-defined processes with strict procedures that have to be followed. The movie strategy of "the lone wolf tracks down the hacker and arrests him" does not apply. There can be personal satisfaction when you manage to find a solution to stop an attack, but "it is not legal to take justice into your own hands; [..] however, you may turn to the justice system, collaborate with the competent authorities and, based on a proper warrant, stop and dismantle the hacking network." In other words, even systems controlled by hackers are protected by the law, and in order to stop them we have to follow the official legal procedures.
Since we had touched on the sensational side of his work, I gave Dan the opportunity to debunk another myth frequently present in the local media: the attacks launched by Romanian hackers on official websites of the FBI, the Pentagon and other similar "fortresses", and the well-known hackers' nest of Ramnicu Valcea (also known as Hackerville). In his answer, Dan divides hackers into three distinct categories: the amateurs who use automated tools, the scammers, and the real hackers.

The first category is actually made up of young people who employ easy-to-use automated tools that identify vulnerabilities and let you exploit them (through a graphical interface, of course): "anyone who knows how to use Yahoo Messenger simply clicks the button three times and ... that's it." Most of the time these "hackers" have a low level of technical knowledge and fail to exploit further the access they gain to highly important data, so they are often caught and end up on the front pages of newspapers. Dan thinks they should be treated with indulgence and included in a re-education process. The second category, the hackers of Hackerville, are actually dealing in "big-time internet scams [...], false ads on eBay [...], using dumping prices and taking advantage of the weakest link in the security chain: the human being who is unable to detect the scam. I wouldn't go so far as to call these activities hacking. [...] There is no difference between a Hackerville hacker and a scammer who cheats you at currency exchange; both are exploiting human naivety." The last category is represented by the real hackers, though even here there are shades of grey: "in the broad sense, the hacker is an extremely curious person, with a special interest in a certain field, who tries to understand as much as possible of that particular field." "I'd like to consider myself a hacker," Dan says. How, then, do we classify hackers into Black Hats and White Hats; in other words, how do we sort them into good guys and bad guys? "The difference [...] is given by authorization and ethics: to be white, you need the consent of the company whose vulnerabilities you are probing, and you need to be ethical, that is, to refrain from disclosing the data you found, publishing it or using it in your personal interest.
Otherwise, from the point of view of technical knowledge, technologies or personal skills, there shouldn't be any differences." In other words, morality is what separates the pen-testers (the experts paid to find vulnerabilities) from the hackers. The pen-testers get paid, the others go to prison, but both deserve respect for their technical skills.


To complicate things further, and to make a clear delimitation between the "grey" levels even harder, we chatted for a while about pen-testing. I found out that when it comes to cyber attacks on companies things are pretty clear, but the situation changes radically in the case of public commercial products. For example, it is common practice to take a widely-used application, run various vulnerability-detecting tools and tests on it, and once the vulnerabilities are found you have two options: either you notify the supplier (the company developing it) and give it the chance to fix the vulnerability, in which case you build a good image for yourself and do a public service; or you organize a public auction and sell the vulnerability for personal financial profit. Regarding this last approach, there is a vehement debate in the community, from both the ethical and the legal perspective.

We leave the tabloid sensationalism and the subtle grey shades of ethics and come back to statistics and the reality of the security industry. Even if we agreed that an IT security specialist is not a 007 agent, the official figures of the Norton Cybercrime Report for 2011 show that every second, 14 people fall victim to cybercrime, and that the overall damage caused in 2011 by breaches of IT security amounts to over 388 billion dollars, while the global black market for drugs is estimated at 288 billion dollars. Even if Dan avoids using the word "mafia", I consider that the magnitude of these figures clearly describes a form of organized crime that has laid the foundations of an industry focused on exploiting security vulnerabilities. We both agree that we are dealing with an arms race; if 10 years ago the balance of forces clearly favored IT security, nowadays the fight is still open, and on the other side of the barricade we find an enemy with the same technological and financial resources. Nevertheless, Dan reassures me: although "we are permanently facing cyber attacks, I'd like to think that we can cope. If we do not rely on technology alone, if we are careful in selecting the websites we visit and the operations we conduct there, if we are fully aware of the risks we may find there, and if we have an anti-malware solution (because now we are no longer talking about just an antivirus, but about an entire range of virtual pests)… I think we will be ok." I choose to trust him, but I cannot help asking about their strategies for keeping up with future threats. More precisely, I ask how they manage to stay permanently up to date with the latest technologies: trainings or personal study? "Personal study is extremely important [...]; you have to struggle, to be aware of all new cyber attacks; ideally you are plugged into the vulnerability lists and aware of everything that happens in this industry." As for trainings: "I don't want to seem rude, but we are the ones conducting most of the trainings!", Dan said, laughing. According to Dan, the purpose of certifications is to provide new perspectives on the industry. Of course, there are branches and positions that formally require certain certifications, but in his case, as an "inside" developer, their main contribution was exposure to concepts different from his everyday perceptions. Dan holds the following certifications: CISSP (Certified Information Systems Security Professional), CEHv7 (Certified Ethical Hacker) and CISA (Certified Information Systems Auditor). Some were quite a challenge (for example, CISSP had 250 questions to be answered in only 6 hours), with 2 or 3 months of preparation for each, but it was all worth it; CEH, for instance, gave him access to "an entire arsenal of tools! You are amazed at what is available to those who launch cyber attacks, and you'll be surprised to find out how simple it is!" As you can see from the paragraphs above, keeping up in the highly competitive and dynamic field of IT security requires a considerable investment of time and effort. Finally, we approached the social side: how much time does an IT security engineer spend staying fully up to date? Dan estimates 10-12 hours a day to keep up, and 13-14 if you aim to be at the top. Considering that he dedicates every evening to his family, and particularly to his two children, it is easy to understand why some of his friends consider him a 007 agent. In the end, I think I underestimated the investment of over 10,000 hours, and I am pretty sure that Dan will dedicate his whole life to this field, because: "I think that I do what I truly like to do."



How to Prepare for a Job Interview


When I accepted the invitation to write about "How to prepare for a job interview", I thought it would be simple. I intended this article to be easy to read and understand and, last but not least, an opportunity to learn something useful. When I started to write it, I realized Romanian is not an easy language and that it is even more difficult to express complex ideas in a simple way. After finishing it, I asked a 10-year-old child to read it, in order to test its simplicity and fluency. He told me he didn't understand a thing. So I started all over again.

Andreea Pârvu
Recruiter at Endava and Training Specialist with a focus on developing skills in leadership, communication and teamwork



How do I prepare myself for an interview? The answer I have in mind feels extremely familiar, but when I try to write it down it becomes more and more complicated. It's the same with interviews. In your imagination it's a casual conversation you are going to have with one or more people. Once in that situation, you realize you are experiencing emotions you don't know how to control and, afraid of embarrassing silent moments, you answer the questions as fast as you can. Sometimes you end up unsatisfied with the information you provided. As in my own situation, I advise you to wait 10 seconds, breathe deeply and think of a few ideas to develop afterwards.

Some people say the first and most important thing to do before an interview is to get ready: for example, to look up the Top 99 Most Frequently Asked Questions. I think this can be a useful exercise in self-knowledge and self-assurance. Yet I advise you not to overdo it and learn the answers by heart. You may trick a beginner recruiter, but what are you going to do when you meet an "old fox" who reads your intentions very well and starts pulling you out of your routine of pre-established answers? We are both aware of the risk of losing your head and perhaps "the job of your dreams". I believe the most important thing during a job interview is to have a well-defined purpose: to discover whether that job suits you. Be as honest with yourself as with your interviewer. Keep the following three questions in mind: (1) "Can I perform my tasks best in this position?", (2) "Will I enjoy the company and the working environment?" and (3) "What is my motivation to work for this company?". If you are not honest with yourself, you again take a risk: you hitch yourself to a job in a new company where you will neither perform nor integrate, and you will soon be looking for a job again.

Clothing is another job-interview myth. Specialized articles (not that this isn't one) say you should dress elegantly in order to make the best impression. It's worth mentioning that excess make-up (for women) and perfume (in both cases) may draw your interlocutor's attention negatively. If we are to be honest with each other, I think excess make-up draws attention negatively in any circumstances. I've always appreciated people who keep things as simple as possible. If you don't normally wear make-up, why apply it before a job interview? I disagree with exaggerations, and I'm aware other recruiters think the same. Coming back to clothing, I consider it important, and even relevant, to wear a suit if you are applying for a position such as Sales and Marketing Manager. But if we are talking about a more relaxed working environment (as in most IT companies), then you can adopt the "smart casual" style. I know it's a frequently used term, but it can be difficult to pin down: "smart casual" is a combination of elegant and casual. A successful combination can be a pair of blue jeans and a shirt (for men) or a semi-elegant skirt or pair of pants and a shirt (for women). In my opinion, clean and decent clothes are enough; what matters most is what you have to say during the 45-60 minutes you are going to spend with the recruiter.

My advice is to keep your "good family upbringing" with you throughout the interview. Above all, be respectful! Shake the recruiter's hand firmly, even if she is a woman (don't shake it too hard, though). Switch off your mobile phone and forget about your watch. Most of the time you should clarify with the recruiter, right at the beginning, how long the discussion will take. By checking your watch you show impatience and a desire to finish as soon as possible; in simpler words, you convey to your interviewer a lack of interest and low motivation. At the same time, good manners mean avoiding exaggerated gestures. It's extremely important to correlate your speech with your body language. Before the interview, study a little of what it means to have an open attitude compared to a more reserved one.
It may work to your advantage to know how important your gestures, facial expressions and posture are in interpersonal communication. Nonverbal language is a significant element in the image you create in others' eyes. Experienced recruiters interpret gestures; they manage to know their interlocutor even before he or she begins to speak. Therefore, adopt a relaxed but firm attitude and answer with confidence. At the end, say goodbye and thank the interviewer.

However, before going to a job interview, there are still several steps to take. The first is writing a CV (resume) that highlights how your skills and experience match the requirements of the job. The second is creating a personal brand, which will be most useful when you first meet your interviewer. As Caragiale would say, "What is that, a CV?" Curriculum Vitae, or "the course of life": it is what you have done so far and what you want for the future. It is your business card in front of the employer, and like any business card, the CV has to be customized so that it draws the attention of the first person who reads it. As a recruiter, I've noticed that an attractive CV meets two big criteria: (1) it draws attention right from the beginning (for example, for a marketing position I once saw an extremely innovative CV in the form of an advertisement; the applicant's creativity was more than obvious, an advantage for that kind of position); (2) it is explicit and follows a common thread. Of course, here comes the question: what is that thread? Below are the six sections you should cover:
• Personal Information
• Career Objective
• Professional Achievements
• Work Experience
• Formal and Informal Education
• Competences. Skills. Knowledge.
Let's analyze them one by one, so that you, the reader, and I share the same vision. Every recruiter is interested in Personal Information such as: name (it's essential to know the identity of the interlocutor), city of residence and contact details. The rest of the information, such as exact address, sex, age, shoe size etc., is far less relevant. My question is: why should it occupy so much space?

Career Objective. I laughed a lot the other day reading a CV whose objective statement was "Becoming a waiter, bartender… security agent… and others", with the strong desire "to go a long way". A long way? Where to? was my first reaction, as the person was applying for a position of Java Developer. I have yet to find the similarity between a bartender/security agent/waiter and a developer. Maybe they too have a "secret code" which they develop one way or another. I give this example, admittedly carried to excess, to underline the importance of alignment between your career objective and the position you are applying for. One more piece of advice: state your objective as SMART as possible (Specific, Measurable, Achievable, Realistic, Time-bound). For those who haven't heard of this concept, here is a short example: "To become a Project Manager in an IT company with more than 100 employees within the next 5 years."

Professional Achievements. This is information that should be included in any CV. No matter the position you are applying for, it's your results and performance that count, and this is even more specific to IT. If you give details about the projects you participated in and the technologies you worked with, the recruiter will be able to sketch your technical profile more easily. More than that, the complexity of your projects and your results can be compared to other candidates'. Use this section of your CV: most people don't, and it is another distinctive element that catches the eye.

Work Experience. This section starts with the most recent job and continues with those relevant for the position. If you want a Senior Java position, I don't see why you would mention your internship at the university cafeteria. There is no connection between the two positions; you just overload your CV with no benefit to you. The more relevant the detail in your work experience, the more significant it is for the recruiter, and the CV will be analyzed again at the next step of the recruiting process, the interview.

Formal and Informal Education. I've seen CVs that included details about middle school and the subjects studied there. I encourage you to list your university degrees and, if you really want, the high school you attended. Besides your studies, do mention all your certifications, whether technical (SCJP, MCSD) or soft-skill trainings (leadership, communication). This proves your orientation towards self-learning and skill development.
Competences. Skills. Knowledge. This is the section where you can list the competences you acquired during training courses or simply on the job. For the IT sector it is simple: you just list the technologies and frameworks you have worked with. What is useful, and can differentiate you from other applicants, is to specify how long you have been working with each of them. Remember that foreign languages also play an important part in this section. I recommend mentioning only those in which you are at least at an intermediate level.

Most of the articles dealing with this subject read like "76 Things Not to Do During an Interview". My intention was not to tell you what to do or not to do; I think it is common sense that every candidate has at least a slight idea of what "preparing for an interview" means. To conclude, I recommend you be yourselves during the entire conversation. Do not talk about things that are not true, because employers will eventually discover everyone's personality, and they may consider that you do not fit the company's corporate culture. Indeed, it




may sound like a cliché, but it is undeniable that every individual influences this corporate culture in one way or another. So, when you accept a new job, feel that connection with the company and especially with your boss; it is important to be on the same wavelength. This is why it is better to ask questions in order to clarify every aspect you want. Be careful with an overly familiar tone: after all, the interview is a formal conversation. It may take place in an informal space, or you may meet more relaxed recruiters who don't come with an interview sheet, but they are surely evaluating your behavioral skills all the same. Best of luck!

Recruiter 3.0


Meet Gogu!


Gogu is a funny character, cynical at times, an introvert for whom interior monologue is an alternative to real life. With Gogu's help, we explore different aspects of a project manager's life, trying to find and suggest solutions that are easy to understand and apply. As Gogu would say: "almost common sense". We invite you to follow Gogu and send him your comments and suggestions.

Simona Bonghez, Ph.D. Speaker, trainer and consultant in project management, partner at TSP

Gogu suddenly got butterflies in his stomach. He checked the values one more time and even reread the email, hoping that somewhere something had changed, that maybe he hadn't properly understood the message. At the same time, his mind was creating the first response scenarios. Better said, it was trying to create scenarios, because nothing of what he was thinking was applicable... especially the one in which the djinni from the lamp turned back time, or the one in which he became invisible to the Chief. "Oh, Lord..." the sigh escaped unintentionally. "Problems, Gogu?" He startled... he had forgotten about Misu's presence. He muttered into his beard: "Nothing unusual, the old stuff..." But he felt, acutely and painfully, how unusual this problem was. He reviewed the course of events, hoping to identify the starting point of the disaster. The events began to unfold in his mind: the arrival of the bid invitation, the creation of the proposal with the associated benefits – How happy Miţă will be to mock me now... – and then the open presentation. He felt again the pride when the Chief gave him the task – "Gogu, I am sure you will do a great job preparing the final offer, and you obviously have our entire support." Ha-ha, let's see how proud you'll be now! – he blamed himself. Then he felt the anxiety of finding himself in front of the offer, not knowing how or where to start; the satisfaction after the meeting he had organised to establish the solution and allocate tasks – "he did a good job," the Chief had looked pleased, "and everyone understood their tasks and responsibilities." Then again the anxiety when the first answers came from his colleagues, the satisfaction of the final document, the stress caused by the eligibility documents, the printer that broke at the worst moment ever, and the relief after sending the offer and receiving the confirmation... He found himself smiling – Smile, Gogu, you're not going to be smiling for long... He had run the movie too quickly; he had passed over the disaster without identifying it, the same way he had done back then, actually. He


had become too arrogant; things had gone way too smoothly and, instead of being cautious, he had surrendered to laissez-faire. Then he suddenly remembered: he relived the moment when he had sat in front of the Excel file. He had entered all the training sessions, added the prices, and inserted the new column for additional expenses related to transportation and accommodation... Oh, my God!!! But he hadn't updated the formula in the totals column... "Problems, Gogu?!" Another sigh slipped out. But this time he answered: "We have just won the auction." "Wow, that's great news!" Misu jumped. Then he got confused: "What's the problem then?" "I blew it. I miscalculated the final price in the offer." In the silence that followed, the butterflies in his stomach got bigger. He had the first confirmation of how serious the problem he had caused was. Misu's voice was barely audible: "How big is the loss?" "I haven't calculated it; I have just received the email and realised that something is wrong." In a strange way, it was not the value of the loss that affected him, but the fact that this mistake had been made by him, the organised, structured and obsessively perfectionist one. A creature of habit, too – but that has nothing to do with it, he thought. He, who had very high standards and who demanded from others the same thoroughness he himself showed – Well, used to, because from now on... He suddenly felt shame and helplessness. He began to think again about the djinni from the lamp and the invisible man. Suddenly he realised that his only option was to resign. He started to encourage himself – That's it, you are accountable, and you have to show that you understand the gravity of the situation and that you will assume responsibility. He looked at Misu, who kept looking at him, apparently without understanding – He probably wonders how I could have possibly done it. He answered: "I will check and calculate right now." He sat down, but he didn't open the Excel file.
He opened a new file with the letterhead of the company, on which he wrote slowly, in big letters: Resignation. He noticed that the butterflies in his stomach had begun to disappear, but instead he felt a bitter taste in his mouth. He liked working here. He didn't want to leave. But there was no other option. Hmm, and the project he was working on had just begun to run, the results were better than everyone had expected, and he had contributed greatly to that. When the phone rang he tensed instantly – the Chief. What will I tell him? He picked up automatically. "Gogu, my son, you're awesome! You proved it abundantly. Not that anyone ever doubted it." The Chief's voice was full of happiness and good humour. Gogu swallowed nervously: "Chief, can I talk to you for a couple of minutes? I'll come to your office." "Something doesn't sound good. What happened?" Without waiting for the details, and perhaps understanding from the tone of his voice, the Chief added: "I'll be at the office in 2 hours. I'll stop by and pick you up. Good job, Gogu!" he added before hanging up. Ha-ha, great job indeed! – Gogu said to himself with bitterness. He had some time to write his resignation. Still, he opened the Excel file with some hope that the loss was not that high. At least let the financial losses be acceptable, because the psychological hit was already hard enough to endure, and the imminent meeting with the Chief was not going to soften its impact. When the Chief appeared at the door, Gogu was saddened more by the Chief's smile than by the unenviable position he was in. The resignation was in the envelope, the envelope in the pocket of his jacket, and the Excel calculations and loss mitigation options in front of him. He took the sheet and followed the Chief. He gave the Chief the news as soon as they entered the office. Although he had rehearsed the text several times before, the words came out with difficulty, as if he didn't want to let them go. The news was not easy to give, or to receive. The Chief sat with the Excel file in

front of him for several minutes: "There is no way to cover the entire loss, even if we implement all your suggestions. I wouldn't exclude the option of talking to the client, but the chances are small. The clause concerning miscalculations is very clear: the total values prevail." His voice was calm and sad. Gogu would have wanted him to shout, to argue, to threaten to fire him. Then he would have produced the resignation: a bitter triumph... But the Chief did no such thing. He asked about one of the solutions Gogu had suggested, they discussed the alternatives, and they even scheduled a meeting with the client. After two hours, he hadn't been fired yet and there were no signs of it coming; he had no idea what to do with the resignation. After another hour, they had an entire plan which, if it worked, would considerably diminish the losses... though not without considerable effort... "OK, Gogu, I'll see you tomorrow morning to find out how we'll break through," said the Chief after reviewing the planned actions one more time. But Gogu didn't move. The resignation was in his pocket; however, the script he had written in his mind wouldn't let him show it to the Chief. First he had to be fired, and only then would he hand it over with dignity. "Gogu?!" The Chief didn't understand what was going on with the "stone" standing in front of him. "Aaaa... hmm... aaa..." The words wouldn't come out, and that was that. "Tell me, Gogu, do you think we forgot something?" "Well... ahm... I... am fired, aren't I?!" It was more a statement than a question, and his tone was slightly defiant and vengeful. "Listen, Gogu, don't you think I have spent too much on you to fire you now?" "I don't understand..." Gogu was taken by surprise. "What do you mean?" "Tell me, Gogu, do you think you will ever make the same mistake again?" "Oh, God, how could I? Never! Under no circumstances!" "There you go, Gogu! Indeed, this lesson cost us something, but it was a very well learned lesson.
Do you honestly think that I want to invest again in the same lesson and put us through all this trouble?!...”

