
Garon La

Software Developer & User Advocate

Interface • Interaction • Experience • Research


Selected works

ePlant: visualizing big data in 3D
Rambleo: local discovery
Cancer Complexes: visualizing protein families
Ninja: facilitating brainstorming


ePlant

Visualizing Big Data in 3D

As a developer and informaticist, I worked on the interface, interaction and experience while introducing new biological models, datasets and databases.

Frontend: ActionScript, Apache/Adobe Flex, HTML, CSS, JavaScript, PHP, AJAX
Models: Google SketchUp, Google O3D, Collada, Maya
Backend: Perl, Python, PHP, MySQL, JSON

ePlant is a platform for visualizing large-scale plant data on interactive 3D models. It displays data across multiple biological levels, from molecular, genetic, protein, cell and tissue to the whole plant. The goal is to allow quick interpretation of multi-faceted data that can differ tenfold within a millimeter in the organism or between hours of plant development.

Figure 1: The solution was to use three-dimensional models to display the multi-faceted data, showing your queried protein's concentration (through color), location (position on the model) and time (each leaf on the lower petal pad represents a different stage of plant development). Hovering over tissues or selecting them from the sample list displays their name, protein level and data source.

My work introduced the tissue model (Figure 2), with a tissue dataset and database showing the quantity of the user's query protein. A relative mode (Figure 3) lets users compare their query protein against the master control sample. Other milestones include expanding the genetic viewer with new datasets and databases, and running user studies and surveys to implement desired features and improve interaction and experience.


Figure 2: The root tissue (above, lower center) intuitively shows varying protein levels throughout the root across millimeter distances.

Figure 3: The relative mode clearly shows that the query protein is found in very minimal quantities in the master control sample.


Figure 4: A comparison between plant and cell models in Collada (left plant, top cell) and Google O3D renders created for the 3DDI.

Constant iteration with user feedback introduced new features and interactivity. This led to framing interactions around familiar tools in biology (e.g., protein viewers), which allowed users to easily pick up how to navigate and use the tool. Positive feedback on the platform allowed us to take a dynamic approach to developing the framework, permitting easy implementation of other organism models and data sources. The framework was then made open source to further open up the platform. As a result, the 3D Data Display Initiative (3DDI) was created to promote the use of 3D models in visualization while offering insight into other open source visualization tools for the web.

The formation of the 3DDI and the publication of ePlant initiated case studies into alternative tools for visualizing large datasets, the use of 3D models for data display and the tools required to do so. A goal of the research was to use the findings to move ePlant off its Flash base onto one with a lighter footprint.

Research & Presentations:
ePlant: 3D Visualizations of Biological Data for Arabidopsis thaliana (Research, Presentation, Poster)
The Analysis and Integration of Large-Scale Datasets (Research, Presentation)
Case study: Survey of 3D display web technologies: WebGL, OpenGL, HTML5 Canvas, Three.js, D3.js

Publication:
Fucile G, Di Biase D, Nahal H, La Garon, Khodabandeh S, et al. (2011) ePlant and the 3D Data Display Initiative: Integrative Systems Biology on the World Wide Web. PLoS ONE 6(1): e15237. doi:10.1371/journal.pone.0015237



Rambleo

Local Discovery

As a developer and user researcher at Rambleo, I managed the user interface, experience, interactions, research and their development.

Frontend: Ruby on Rails, HTML, CSS, JavaScript
Wireframes & Mock-ups: Adobe Illustrator, OmniGraffle, low-res wireframing

Rambleo is a local discovery app that helps you find hidden gems around the city, or the best place for right now. It came about when we were traveling and simply had no idea what to do in a new city (other than eat). We want it to be tailored to the user, which often means bringing their friends and people they admire into the mix, something a lot of review platforms ignore. This allows for a more personalized experience, in the hope that users will not only find what they want quickly but also enjoy it. The work started with quick iterations through mock-ups and usable prototypes, alongside extensive user research and surveys for features and interfaces. Included here is a glimpse into the process of setting up a new product through design, prototypes and research.

The Design

Figure 5: Wireframing the landing page for the redesign. The most desirable places or events should be front and center, with an easy way to navigate through additional spots. Large action links help explain what the application is about and provide easy access to the main functions of the page.

The original landing for Rambleo (Figure 6) feels very dated. I made it, and I probably would not use it if I came upon it. Surveys of the layout were surprisingly positive but didn't exactly meet our user stories or target audiences.


Figure 6: The original Rambleo design


Figure 7: The Rambleo redesign

"Professional" and "Clean" were major keywords that surfaced for the original design but didn't quite align with the discovery aspect we wanted for the app. We wanted a hip, cool vibe that was also highly usable and accessible: something we would be attracted to, pick up and easily use to get our next spot for the night. The original page was too dense and provided information people would not read. Most importantly, the action links were missing, hidden or unclear, which would result in too high a bounce rate. The design is quaint but ugly; it would not attract interest, nor does it fit the expected imagery or usability of the app's user base.

The redesign (Figure 7) brought bigger visuals to the main page with the ability to scroll through nearby places, and made it easier to discover and share new locations from the main page. This iteration brings the main features and actions front and center so that users can easily understand its purpose and begin ramblin'.



Figure 8: A prototype of the user’s home page showing posts from people followed. Users are also able to view which spots are popular or any spots with recent check ins, posts, updates, or changes. The recent feed continually updates to highlight locations with the most activity; places you might want to be at!

The Features

User surveys and research provided four recurring themes that became the main focus of the experience:

There is a place for every mood – people don't want to traverse long lists to find what they want to do. They can find the right place by looking at what's near them now, by selecting categories, or by using a combination of tags. This lets users decide their own engagement period depending on their urgency. How about some #Cheap #Beer and #LiveEntertainment? Or an #Indie #ArtShow at the #MET? Or we could simply browse all the plays showing around us tonight.

Opinions matter – people want to hear what others think, but not just from anyone. They prefer it from their friends, other users with similar tastes, or even professional reviewers (aggregating reviews from newspapers, blogs, magazines and more). Users can therefore create a list of people whose opinions matter most to them, and choose to have those reviews highlighted on pages or weighted more strongly by the rating algorithm.



Figure 9: A prototype of the user's home page when checking out a recent post by one of your friends. Having an image of something before you commit to it can hold great sway, and with food, sometimes you just want to eat with your eyes. Users can tag menu items with their #foodporn so others can see what they might be getting into, or what they want for their next meal!

Shared experiences – users can use photos, videos, text or hashtags to describe their experiences. Especially for food, we found that users want to read about the dishes, ambiance and service, but most don't want to write reviews. We require these three basic categories but let you express them in any medium. For example, the tagging system adds a quick way to leave thoughts that can be collectively understood. #Sketchy ambiance but #MustTry tacos? – we've all been there. Great #Live #JazzShow but #Terrible crowd? Maybe we'll listen in from a rooftop patio.

But first, let me take a selfie – sometimes you want to see before you believe. Seeing an image of an event or show lets users gauge whether it is something they'd want to do. Pictures can speak more than words and are an easy alternative for sharing a review or a check-in. Everybody surveyed mentioned taking pictures of food or events for Instagram or other platforms, so this feature seems to be something people will greatly enjoy.
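The tag-combination search described above ("#Cheap #Beer and #LiveEntertainment") boils down to set containment. The sketch below is purely illustrative; the function name and data shape are assumptions, not Rambleo's actual code:

```python
# Hypothetical sketch of tag-based filtering; not Rambleo's real implementation.
def filter_by_tags(places, wanted_tags):
    """Return places whose tag sets contain every requested tag."""
    wanted = {t.lower().lstrip("#") for t in wanted_tags}
    return [p for p in places if wanted <= {t.lower() for t in p["tags"]}]

places = [
    {"name": "Dive Bar", "tags": ["cheap", "beer", "liveentertainment"]},
    {"name": "Wine Lounge", "tags": ["wine", "quiet"]},
]
filter_by_tags(places, ["#Cheap", "#Beer"])  # keeps only "Dive Bar"
```

Because tags are normalized to lowercase and stripped of the leading "#", users can type them however they like and still match.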



Protein Complexes

Visualizing More Big Data!

I was brought in as a developer and informaticist to create an interactive web application for the visualization of (manually curated!) protein complexes. I also worked on a pipeline that brought in several data sources to be displayed upon interaction with the visualizations.

Frontend: JavaScript (D3.js), HTML, CSS
Backend: PHP, MySQL, Oracle DB, JSON

The first task was to display the protein complexes, which may exist in combinatorial groups. An example is the complex [A, B, C, (D or E), F], which can exist as ABCDF or ABCEF exclusively. I iterated through several designs and landed on two finalists: a radial graph design and a block design. Some design iterations can be seen below. Working prototypes were made of each, and edge-case testing provided insights on scalability and graphics. Blocks provided a simple display but would not scale gracefully once large families were introduced. The radial graph scaled nicely, keeping a spherical shape, and was chosen for the layout.
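Enumerating the concrete forms of a combinatorial complex like [A, B, C, (D or E), F] is a Cartesian product over the alternative slots. A minimal sketch (illustrative only; the project's actual data model was not this simple):

```python
from itertools import product

def expand_complex(slots):
    """Enumerate concrete complexes from a definition with alternatives.

    Each slot is either a single static member, or a tuple of mutually
    exclusive options (exactly one of which appears in any real complex).
    """
    choices = [s if isinstance(s, tuple) else (s,) for s in slots]
    return ["".join(combo) for combo in product(*choices)]

expand_complex(["A", "B", "C", ("D", "E"), "F"])  # → ['ABCDF', 'ABCEF']
```

Each tuple of options multiplies the number of concrete complexes, which is exactly why the visualization had to scale with family size.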





The nodes of the radial graph represent the static protein members, while subtrees hold variable members (i.e., D or E exclusively). At most the tree has a depth of two, but the number of members varies. I decided to use a force-directed graph algorithm to spread the nodes without overlap. A quick survey of algorithms and libraries and some prototyping led to D3.js, which provides both the algorithm and graphical components I could utilize.
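The force-directed idea, pairwise repulsion between all nodes plus spring forces along edges, can be sketched in a few lines. This is a toy illustration of the principle, not D3.js's actual simulation (which uses velocity Verlet integration and several optimizations); the constants are arbitrary:

```python
import math
import random

def force_layout(nodes, edges, steps=200, repulsion=0.05, spring=0.02,
                 rest=1.0, cap=0.05):
    """Toy force-directed layout: repel all pairs, pull edges toward `rest`."""
    random.seed(0)  # deterministic starting positions for the sketch
    pos = {n: [random.random(), random.random()] for n in nodes}
    for _ in range(steps):
        force = {n: [0.0, 0.0] for n in nodes}
        # repulsion between every pair of nodes keeps them from overlapping
        for a in nodes:
            for b in nodes:
                if a == b:
                    continue
                dx, dy = pos[a][0] - pos[b][0], pos[a][1] - pos[b][1]
                d = math.hypot(dx, dy) or 1e-9
                force[a][0] += repulsion * dx / (d * d)
                force[a][1] += repulsion * dy / (d * d)
        # springs pull connected nodes toward a rest length
        for a, b in edges:
            dx, dy = pos[b][0] - pos[a][0], pos[b][1] - pos[a][1]
            d = math.hypot(dx, dy) or 1e-9
            f = spring * (d - rest)
            force[a][0] += f * dx / d; force[a][1] += f * dy / d
            force[b][0] -= f * dx / d; force[b][1] -= f * dy / d
        # apply forces with a capped step size so the layout stays stable
        for n in nodes:
            fx, fy = force[n]
            m = math.hypot(fx, fy)
            if m > cap:
                fx, fy = fx / m * cap, fy / m * cap
            pos[n][0] += fx
            pos[n][1] += fy
    return pos
```

Running it on a three-node chain spreads the nodes apart while keeping connected nodes near the rest length, which is the behavior that made the radial family graphs readable.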

Figure 11: Working prototype using D3.js. Hovering over a node (complex member) displays information from several data sources pulled in from the pipeline.

User testing, user stories, and analytics showed that D3.js was too heavy for most users, who would be using it in laboratories and academic settings with outdated browsers or slower computers (analytics showed a strong bias for the former).

Figure 12: The platform was then rewritten with PHP, shown here. The D3.js renderings were generated for each member, and a script was written to extract the graphical coordinates for each graph. These data points could then simply be redrawn using PHP, with the interactions coded in JavaScript.


Figure 13: The PHP-based visualization showing functional information for each protein member.

I also worked on the pipeline that brought in over 20 data sources each week through scheduled cron jobs. These tasks included retrieving data (downloading data dumps, scraping web pages), data cleaning, database updates, data recalculation and the generation of new visualizations. The visualization was rewritten in a dynamic manner that allows external datasets from the pipeline to be displayed on the graphs. Figure 13 shows the protein members colored by their functions, which allows for a quick survey of functions within a family and across all families. Hovering over a node displays more information about that protein. Other datasets, such as cancer patient data, could also be used to see which complexes have higher activity within a cancer, or which diseases a complex is most active in (Figure 14).
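The weekly fetch → clean → store cycle described above can be sketched as a loop that isolates failures per source, so one unreachable data dump doesn't block the other nineteen. The function and its stubs are hypothetical, a shape rather than the project's code:

```python
# Illustrative shape of one weekly pipeline run; names are hypothetical.
def run_pipeline(sources, fetch, clean, store):
    """fetch -> clean -> store for each source, collecting per-source status."""
    status = {}
    for name in sources:
        try:
            raw = fetch(name)    # e.g. download a data dump or scrape a page
            rows = clean(raw)    # normalize identifiers, drop malformed rows
            store(name, rows)    # upsert into the database
            status[name] = f"ok ({len(rows)} rows)"
        except Exception as exc:
            # one bad source must not stop the rest of the weekly run
            status[name] = f"failed: {exc}"
    return status
```

A cron entry then only has to invoke this once a week; the status dictionary doubles as a log of which sources need attention.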



Figure 14: A display of the activity level of members of a complex within breast cancer patients.


Figure 15: The PHP-based visualization showing breast cancer data across all complexes.

Users can see all disease frequencies for one complex, or all complexes for one disease, and can set different thresholds. These visualizations help identify which complexes are most active in certain diseases and can help facilitate research. An overview of all complexes in breast cancer (Figure 15) allows for quick interpretation of which families, and which members within those families, are most active in the disease. The protein visualization is written so that any suitable dataset can be displayed easily, including other types of data that fit a graph-structure representation.
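The user-set threshold view amounts to a filter-and-rank over per-disease activity. A hedged sketch (field names and data shape are illustrative assumptions, not the actual schema):

```python
# Hypothetical sketch of the threshold view; the data shape is illustrative.
def complexes_above_threshold(frequencies, disease, threshold):
    """frequencies maps complex name -> {disease: activity fraction}.

    Return (complex, activity) pairs meeting the threshold for `disease`,
    most active first.
    """
    hits = [(name, by_disease.get(disease, 0.0))
            for name, by_disease in frequencies.items()
            if by_disease.get(disease, 0.0) >= threshold]
    return sorted(hits, key=lambda pair: pair[1], reverse=True)

data = {
    "ComplexA": {"breast": 0.62},
    "ComplexB": {"breast": 0.08},
    "ComplexC": {"breast": 0.35},
}
complexes_above_threshold(data, "breast", 0.3)
# → [('ComplexA', 0.62), ('ComplexC', 0.35)]
```

The same shape works in the other direction (all diseases for one complex) by transposing the mapping.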



Ninja

Facilitating Brainstorming

I was a researcher at a Human-Computer Interaction and User Experience research lab at the University of Toronto, where I started work on a brainstorming application. I cannot disclose the full deliverables, but I can talk about what I worked on.

Frontend: HTML, CSS, JavaScript
Backend: Java with JFC/Swing

This opportunity allowed me to work on a product from inception and gave me the chance to perform many roles. It let me strengthen and learn many qualitative and quantitative user research skills, as well as skills in software engineering and developing intuitive interfaces. Methodologies used in this research project include wireframing, prototyping (paper, usable), user stories, user interviews, audience interviews, surveys, field testing, card sorting and requirements gathering, all skills I continue to utilize in my work today.

The goal of this project was to create a brainstorming application that can easily take you from a messy "creative" state to one that is clean, rigid, formal and deliverable, while facilitating the thought process. The application can save brainstorming states to organize your data based on history, or it can overview your content and structure and find appropriate structures through machine learning techniques. This process can be queried during brainstorming tasks to give the user structure guidelines or ideas, which facilitates the brainstorming task.

Figure 16: A prototype being used on the Microsoft PixelSense.

The project started on the Microsoft PixelSense (Surface) with a prototype that allows you to easily brainstorm with text and images. The interface allows easy manipulation of the objects through touch gestures, such as enlargement, zoom, and cutting text apart. We found that people would put similar ideas together in the same area or, very commonly, run out of space and draw a long arrow to connect ideas. The latter can now be achieved by moving thoughts around or simply overlaying them in the category. From there, a clustering algorithm was created to cluster pieces together by proximity as a way to "collect thoughts".
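Proximity clustering of this kind can be sketched as transitive grouping of items whose pairwise distance falls under a radius. This is a minimal stand-in (using union-find) for the "collect thoughts" step, not the project's actual algorithm:

```python
import math

def cluster_by_proximity(items, radius):
    """Group items (id -> (x, y)) transitively by pairwise distance <= radius.

    A simple illustrative stand-in for clustering canvas objects that have
    been dragged near one another.
    """
    ids = list(items)
    parent = {i: i for i in ids}

    def find(i):  # union-find root lookup with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for a in ids:
        for b in ids:
            if a < b and math.dist(items[a], items[b]) <= radius:
                parent[find(a)] = find(b)  # merge nearby items' groups

    groups = {}
    for i in ids:
        groups.setdefault(find(i), set()).add(i)
    return sorted(map(sorted, groups.values()))

notes = {"idea1": (0, 0), "idea2": (1, 0), "sketch": (10, 10)}
cluster_by_proximity(notes, 2.0)  # → [['idea1', 'idea2'], ['sketch']]
```

Transitivity matters here: two items far from each other still cluster together if a chain of close neighbors connects them, matching how people pile related notes across a large surface.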


Figure 17: Our own brainstorming for the brainstorming application, looking at the user interface and interactions and figuring out the machine learning algorithm. In this case, Ninja could place the diagrams and code snippets in separate groups, or cluster diagrams together with code snippets in close proximity.

Figure 18: Illustrating and outlining the clustering, structuring and suggestion algorithms for Ninja. Groups can easily be joined or broken up, and the layout always remains fluid and malleable, even in the structured form.

Users can move freely between a freeform and a structured state (Figure 18). Edits to either state apply to both. From the free layout, the algorithm would understand the magenta cluster as a list of objects and act appropriately in the structured state, as well as recognize that the green cluster of images needs to be displayed as organized thumbnails. The machine learning algorithm could also offer design choices based on the content. The user could query this feature at any point in the brainstorming session and be given structures for similar documents. One suggestion might be that the magenta blocks make sense as headings for the green photos, while the blue object is the album name (right). Users can also label their groups or individual items, which helps facilitate the algorithm's search for layouts and structure.
