Software Developer & User Advocate
Interface • Interaction • Experience • Research
Selected works table of contents
- ePlant: visualizing big data in 3D
- Rambleo: local discovery
- Cancer Complexes: visualizing protein families
- Ninja: facilitating brainstorming
Visualizing Big Data in 3D
Figure 1: The solution was to use three-dimensional models to display the multi-faceted data, showing the queried protein's concentration (color), location (position on the model) and time (each leaf on the lower petal pad represents a different stage in plant development). Hovering over tissues, or selecting them from the sample list, displays their name, protein level and data source.

My work introduced the tissue model (Figure 2), backed by a tissue dataset and database showing the quantity of the user's query protein. A relative mode (Figure 3) lets users compare their query protein against the master control sample. Other milestones include expanding the genetic viewer with new datasets and databases, and running user studies and surveys to implement desired features and improve interaction and experience.
Figure 2: The root tissue (above, lower center) intuitively shows varying protein levels throughout the root across millimeter distances.
Figure 3: The relative mode clearly shows that the query protein is present in only minimal quantities in the master control sample.
Figure 4: A comparison between plant and cell models in Collada (left plant, top cell) and Google O3D renders created for the 3DDI.

Constant iteration with user feedback introduced new features and interactivity. This led to framing interactions around familiar tools in biology (e.g., protein viewers), which allowed users to easily pick up how to navigate and use the tool. Positive feedback on the platform allowed us to take a dynamic approach to the framework so that other organism models and data sources could be implemented easily. The framework was then made open source to further open up the platform.

As a result, the 3D Data Display Initiative (3DDI) was created to promote the use of 3D models in visualization while offering insight into other open-source visualization tools for the web. The formation of the 3DDI and the publication of ePlant initiated case studies into alternative tools for visualizing large datasets, the use of 3D models for data display, and the tools required to do so. One goal of the research was to use the findings to move ePlant away from its Flash base to one with a lighter footprint.

Research & Presentations:
- ePlant: 3D Visualizations of Biological Data for Arabidopsis thaliana (Research, Presentation, Poster)
- The Analysis and Integration of Large-Scale Datasets (Research, Presentation)
- Case study: Survey of 3D display web technologies: WebGL, OpenGL, HTML5 Canvas, Three.js, D3.js

Publication:
Fucile G, Di Biase D, Nahal H, La Garon, Khodabandeh S, et al. (2011) ePlant and the 3D Data Display Initiative: Integrative Systems Biology on the World Wide Web. PLoS ONE 6(1): e15237. doi:10.1371/journal.pone.0015237
Figure 5: Wireframing the landing page for the redesign. The most desirable places or events should be front and center, with an easy way to browse additional spots. Large action links help explain what the application is about and provide easy access to the page's main functions.

The original landing page for Rambleo (Figure 6) feels very dated. I made it, and I probably would not use it if I came upon it. Surveys of the layout were surprisingly positive but didn't exactly meet our user stories or target audiences.
Figure 6: The original Rambleo design
Figure 7: The Rambleo redesign

"Professional" and "Clean" were major keywords that surfaced for the original design but didn't quite align with the discovery aspect we wanted for the app. We wanted a hip, cool vibe that was also highly usable and accessible – something we would be attracted to, pick up, and easily use to get our next spot for the night. The original page was too dense and provided information people will not read. Most importantly, the action links were missing, hidden or unclear, which would result in too high a bounce rate. The design is quaint but ugly; it would not attract interest, nor does it fit the expected imagery or usability for the app's userbase. A redesign (Figure 7) brought bigger visuals to the main page, with the ability to scroll through nearby places directly from it, and made it easier to discover and share new locations. This iteration brings the main features and actions front and center so that users can easily understand its purpose and begin ramblin'.
Figure 8: A prototype of the user's home page showing posts from people followed. Users can also see which spots are popular, or any spots with recent check-ins, posts, updates, or changes. The recent feed continually updates to highlight locations with the most activity; places you might want to be at!
The Features

User surveys and research gave us four recurring themes that became the main focus of the experience:

There is a place for every mood – people don't want to traverse long lists to find what they want to do. They can find the right place by looking at what's near them now, by selecting categories, or by using a combination of tags. This lets users decide their own engagement period depending on their urgency. How about some #Cheap #Beer and #LiveEntertainment? Or an #Indie #ArtShow at the #MET? Or we could simply browse all the plays showing around us tonight.

Opinions matter – people want to hear what others think, but not from just anyone. They prefer it from friends, other users with similar tastes, or even professional reviewers (aggregating reviews from newspapers, blogs, magazines and more). Users can therefore create a list of people whose opinions matter most to them, and choose to have those reviews highlighted on pages or weighted more heavily when ratings are calculated.
Figure 9: A prototype of the user's home page when checking out a recent post by a friend. Having an image of something before you commit to it can carry great sway. And with food, sometimes you just want to eat with your eyes. Users can tag menu items with their #foodporn so others can see what they might be getting into – or what they want for their next meal!
Shared experiences – users can use photos, videos, text or hashtags to describe their experiences. Especially for food, we found that users want to read about the dishes, the ambiance and the service, but most don't want to write reviews. We require these three basic categories but let users express them in any medium. The tagging system, for example, adds a quick way to leave thoughts that can be collectively understood. #Sketchy ambiance but #MustTry tacos? We've all been there. Great #Live #JazzShow but #Terrible crowd? Maybe we'll listen in from a rooftop patio.

But first, let me take a selfie – sometimes you want to see before you believe. An image of an event or show lets users gauge whether it is something they'd want to do. Pictures can speak louder than words and are an easy alternative for sharing a review or a check-in. Everybody surveyed mentioned taking pictures of food or events for Instagram or other platforms, so this feature seems to be something people will enjoy greatly.
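The tag-combination search described above can be sketched in a few lines. This is purely illustrative; the venue data and `find_spots` name are hypothetical, not Rambleo's actual model or implementation:

```python
# Hypothetical venue records: each spot carries a set of tags.
venues = [
    {"name": "Dive Bar", "tags": {"Cheap", "Beer", "LiveEntertainment"}},
    {"name": "The MET", "tags": {"Indie", "ArtShow"}},
    {"name": "Rooftop Patio", "tags": {"Beer", "JazzShow"}},
]

def find_spots(venues, *wanted):
    """Return names of venues carrying every requested tag
    (the #Cheap #Beer #LiveEntertainment style query)."""
    wanted = set(wanted)
    # subset test: a venue matches only if it has all wanted tags
    return [v["name"] for v in venues if wanted <= v["tags"]]
```

Requiring every tag (a subset test) rather than any tag keeps the results tight, which matches the "find the right place now" urgency described above.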
Visualizing More Big Data!
The nodes of the radial graph represent the static protein members, while subtrees hold variable members (i.e., D or E exclusively). The tree has a depth of at most two, but the number of members varies. I decided on a force-directed graph algorithm to spread the nodes without overlap. A quick survey of algorithms and libraries, plus some prototyping, led to D3.js, which provides both the layout algorithm and graphical components I could use.
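The idea behind a force-directed layout can be sketched as a toy spring-and-charge simulation of the kind D3.js implements. This is an assumption-laden illustration, not the project's code; the constants and graph are made up:

```python
import math
import random

def force_layout(nodes, edges, iterations=200,
                 repulsion=0.01, spring=0.05, rest=1.0, step=0.02):
    """Toy force-directed layout: every node pair repels (charge),
    every edge pulls its endpoints toward a rest length (spring)."""
    random.seed(42)  # deterministic starting positions
    pos = {n: [random.random(), random.random()] for n in nodes}
    for _ in range(iterations):
        force = {n: [0.0, 0.0] for n in nodes}
        # pairwise repulsion keeps nodes from overlapping
        for i, a in enumerate(nodes):
            for b in nodes[i + 1:]:
                dx = pos[a][0] - pos[b][0]
                dy = pos[a][1] - pos[b][1]
                d2 = dx * dx + dy * dy + 1e-9
                d = math.sqrt(d2)
                f = repulsion / d2
                fx, fy = f * dx / d, f * dy / d
                force[a][0] += fx; force[a][1] += fy
                force[b][0] -= fx; force[b][1] -= fy
        # springs pull connected members together
        for a, b in edges:
            dx = pos[b][0] - pos[a][0]
            dy = pos[b][1] - pos[a][1]
            d = math.sqrt(dx * dx + dy * dy) + 1e-9
            f = spring * (d - rest)
            fx, fy = f * dx / d, f * dy / d
            force[a][0] += fx; force[a][1] += fy
            force[b][0] -= fx; force[b][1] -= fy
        for n in nodes:
            pos[n][0] += step * force[n][0]
            pos[n][1] += step * force[n][1]
    return pos

# A hypothetical complex: static members A-B-C, variable members D and E
nodes = ["A", "B", "C", "D", "E"]
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("C", "E")]
layout = force_layout(nodes, edges)
```

D3.js runs the same kind of iteration and additionally binds each node's position to an SVG element, which is why it covered both the algorithm and the graphics in one library.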
Figure 11: Working prototype using D3.js. Hovering a node (complex member) displays information from several data sources pulled in from the pipeline. User testing, user stories, and analytics showed that D3.js was too heavy for most users, who would be using it in laboratories and academic settings with outdated browsers or slower computers (analytics showed a strong bias toward the former).
Figure 13: The PHP-based visualization showing functional information for each protein member.

I also worked on the pipeline that brought in over 20 data sources each week through scheduled Cron jobs. These tasks included retrieving data (downloading data dumps, scraping web pages), cleaning it, updating the database, recalculating derived data, and generating new visualizations. The visualization was rewritten in a dynamic manner that allows external datasets from the pipeline to be displayed on the graphs. Figure 13 shows the protein members colored by function, allowing a quick survey of functions within a family and across all families. Hovering a node displays more information about that protein. Other datasets, such as cancer patient data, can also be used to see which complexes have higher activity within a cancer, or which diseases a complex is most active in (Figure 14).
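One step of such a Cron-driven pipeline (ingest, clean, upsert) might look like the following sketch. The schema, field names and sample dump are assumptions for illustration only, not the project's real sources:

```python
import sqlite3

# Hypothetical raw dump: tab-separated protein records, some malformed.
RAW_DUMP = [
    "P1\tkinase\t0.87",
    "P2\t\tn/a",          # missing function, unparseable score -> dropped
    "P3\ttransferase\t0.42",
]

def clean(rows):
    """Data-cleaning step: skip records with missing fields or bad numbers."""
    cleaned = []
    for line in rows:
        protein, function, score = line.split("\t")
        if not function:
            continue
        try:
            cleaned.append((protein, function, float(score)))
        except ValueError:
            continue
    return cleaned

def update_db(conn, rows):
    """Upsert step: re-running the weekly job replaces stale records."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS proteins "
        "(id TEXT PRIMARY KEY, function TEXT, score REAL)"
    )
    conn.executemany("INSERT OR REPLACE INTO proteins VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
update_db(conn, clean(RAW_DUMP))
```

Because the insert is an upsert keyed on the protein id, the weekly Cron run is idempotent: re-fetching a source never duplicates rows, it only refreshes them.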
Figure 14: A display of the activity level of members of a complex within breast cancer patients.
Figure 15: The PHP-based visualization showing breast cancer data across all complexes. Users can view all disease frequencies for one complex, or all complexes for one disease, and can set different thresholds.

These visualizations help identify which complexes are most active in certain diseases and can facilitate research. An overview of all complexes in breast cancer (Figure 15) allows quick interpretation of which families, and which members within those families, are most active in the disease. The protein visualization is written so that any suitable dataset can be displayed easily, including other types of data, provided they suit a graph-structure representation.
Figure 16: A prototype being used on the Microsoft PixelSense.

The project started on the Microsoft PixelSense (Surface) with a prototype for easy brainstorming with text and images. The interface allows easy manipulation of objects through touch gestures, such as enlargement, zoom, and cutting text apart. We found that people would put similar ideas together in the same area or, very commonly, run out of space and draw a long arrow to connect ideas. The latter can now be achieved by moving thoughts around or simply overlaying them on the category. From there, a clustering algorithm was created to group pieces together by proximity as a way to "collect thoughts".
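Proximity clustering of canvas items can be illustrated with a simple single-link grouping over a distance threshold. This is a sketch of the idea, with made-up names and coordinates, not Ninja's actual algorithm:

```python
import math

def cluster_by_proximity(items, threshold):
    """Single-link clustering: any two items closer than `threshold`
    end up in the same group. `items` maps name -> (x, y)."""
    names = list(items)
    parent = {n: n for n in names}  # union-find forest

    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path halving
            n = parent[n]
        return n

    # merge every pair of items within the threshold
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            (ax, ay), (bx, by) = items[a], items[b]
            if math.hypot(ax - bx, ay - by) <= threshold:
                parent[find(a)] = find(b)

    groups = {}
    for n in names:
        groups.setdefault(find(n), set()).add(n)
    return list(groups.values())

# Hypothetical notes placed on the surface
notes = {"tacos": (0, 0), "salsa": (1, 1), "jazz": (10, 10)}
clusters = cluster_by_proximity(notes, threshold=3.0)
```

Single-link grouping matches the observed behavior well: items dragged near an existing cluster join it transitively, the way a drawn arrow once connected distant ideas.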
Figure 17: Our own brainstorming for the brainstorming application: examining the user interface and interactions and working out the machine-learning algorithm. In this case, Ninja could place the diagrams and code snippets in separate groups, or cluster diagrams and code snippets in close proximity together.
Figure 18: Illustrating and outlining the clustering, structuring and suggestion algorithms for Ninja. Groups can easily be joined or broken up, and the layout always remains fluid and malleable, even in the structured form.

Users would be able to move freely between a freeform and a structured state (Figure 18); edits to either state apply to both. From the free layout, the algorithm would understand the magenta cluster as a list of objects and act appropriately in the structured state, as well as recognize that the green cluster of images needs to be displayed as organized thumbnails. The machine-learning algorithm could also offer design choices based on the content: the user could query this feature at any point in the brainstorming session and be given structures from similar documents. One suggestion might be that the magenta blocks make sense as headings for the green photos, while the blue object is the album name (right). Users can also label their groups or individual items, which helps facilitate the algorithm's search for layouts and structure.
Portfolio for Garon La
Software Developer Engineer...