Projects

EyePen and FLight: Reading Tools for People with Vision Impairments


EyePen and FLight are both results of 18 months of ethnographic work at a local school for children with vision impairments. My contributions to this project were study design, conducting the ethnography, co-designing the tools, designing the evaluation study, and data analysis.

Empathize: During this 18-month ethnography, we observed that children with vision impairments confined themselves to academic books only. They did not read storybooks simply because Braille-printed books were either unavailable or simply too expensive. For mathematics, they used a slate with ridges and holes, inserting lead bars in prefixed orientations to represent numerals. We dug through the literature only to find solutions that are easily accessible in first-world nations, such as interventions through laptops, tablets, or smartphones. However, we found that less privileged children with vision impairments did not have access to these tools.

Define: We defined a set of requirements based on their needs. Among the priority requirements were low cost, ease of use, ready availability, and easy replacement when parts need replenishing. Our primary target was a low-cost tool that supports reading and writing. We knew that ink-based systems are cheaper than Braille-embossed systems; however, ink is flat and provides no tactile or haptic feedback.

Ideate: We ideated over a couple of design directions. One direction was to build a low-cost Braille printer; however, despite much research on Braille printers, few affordable and accessible options exist. The other direction, which we ultimately pursued, was to play with light: a coupling of an LED and a phototransistor can distinguish a white surface from a black one, as sketched below.
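To make the sensing idea concrete, here is a minimal sketch in Java (the prototype language used elsewhere on this page). The brightness scale and threshold value are illustrative assumptions, not measurements from the actual hardware: the LED illuminates the paper, and the phototransistor sees more reflected light over white paper than over an ink-printed black dot.

```java
// Minimal sketch of the light-sensing idea behind EyePen: an LED illuminates the
// paper and a phototransistor measures the reflected light. Printed (black) dots
// reflect less light than the white page, so a simple threshold separates the two.
public class DotSensor {
    // Hypothetical threshold on a 0.0 (dark) .. 1.0 (bright) brightness scale.
    private static final double BLACK_THRESHOLD = 0.4;

    /** Returns true if the current reading indicates an ink-printed dot. */
    public static boolean isDot(double reflectedBrightness) {
        return reflectedBrightness < BLACK_THRESHOLD;
    }

    public static void main(String[] args) {
        System.out.println(isDot(0.15)); // dark reading  -> true  (dot, trigger audio pulse)
        System.out.println(isDot(0.85)); // bright reading -> false (white paper)
    }
}
```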

Prototype: We developed a testbed system and tested it with a focus group consisting of both visually impaired students and their teachers. In a participatory design study, we strove to improve our prototype in two phases. The first-phase prototype, which we called EyePen, helped children read by producing an audio pulse whenever it detected a Braille dot. In the second phase, we improved the design into a wearable glove, using a simple coupling of six LEDs to the six Braille dot positions with a single phototransistor, and providing alphabetical output instead of audio pulses. We called it FLight; a sketch of its decoding step follows below. For both of these phases, we used a trajectory board to help the reader orient the tool while reading.
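The following is a minimal sketch of FLight's decoding step, assuming the six dot positions have already been sensed. The partial letter table follows standard Braille dot numbering (dots 1-3 in the left column, 4-6 in the right); the class and method names are illustrative, not the actual firmware.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of FLight's pattern-to-letter step: six sensed dot states are packed
// into a bitmask and looked up in a (partial) Braille letter table.
public class BrailleDecoder {
    private static final Map<Integer, Character> LETTERS = new HashMap<>();
    static {
        LETTERS.put(mask(1), 'a');
        LETTERS.put(mask(1, 2), 'b');
        LETTERS.put(mask(1, 4), 'c');
        LETTERS.put(mask(1, 4, 5), 'd');
        LETTERS.put(mask(1, 5), 'e');
        // ... remaining letters omitted for brevity
    }

    private static int mask(int... dots) {
        int m = 0;
        for (int d : dots) m |= 1 << (d - 1);
        return m;
    }

    /** Converts six sensed dot states (index 0 = dot 1) into a letter, or '?' if unknown. */
    public static char decode(boolean[] dots) {
        int m = 0;
        for (int i = 0; i < 6; i++) if (dots[i]) m |= 1 << i;
        return LETTERS.getOrDefault(m, '?');
    }

    public static void main(String[] args) {
        // Dots 1 and 2 sensed -> 'b'
        System.out.println(decode(new boolean[]{true, true, false, false, false, false}));
    }
}
```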

Test: We asked our focus group members to read using our tool. We received interesting feedback from them, as well as design implications for the future. We also came to learn about other interesting problems that children with visual impairments face while reading. For example, cognitive load is often high while reading because they have to remember individual letters before the letters form a word.

We published this work in ACM DEV 2016 and ACM CHI 2017.



Speech-Based Source Code Comprehension Tool for Developers


This project was part of the CPSC 507 Software Engineering course. My team members were Shareen Mahmud and Salvador Corts. Laura Cang was kind enough to add voice to the comments in this demo.

Problem: Developers spend a significant amount of time on program comprehension. Source code authors usually add text-based comments to document their artifacts and to help others understand their source code. However, text-based comments often lack the depth and expressiveness needed to communicate interpretations of complex concepts. As a result, developers often resort to face-to-face communication. This can be particularly challenging for newcomers onboarding onto a project, where co-workers may be unavailable.

Empathize: We conducted a literature survey to understand the breadth and depth of speech-based commenting in prior work. While we found a few studies addressing speech-based comments, we identified that the literature lacked conceptual clarity on their nuances, for example, how long speech comments should be, who their users are, and the pros and cons of speech-based versus text-based comments.

Define: Our study addressed this conceptual gap in understanding the nature and substance of speech-based comments in source code comprehension. We implemented a high-fidelity prototype IDE in Java; the prototype was a simplified version of Eclipse.
My contributions to tool development were developing the speech comment recording module, integrating IBM's Watson Speech to Text service to generate transcripts, and synchronizing the audio and visual playheads. A sketch of the recording step is shown below.
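As a rough illustration of the recording module, here is a minimal sketch using Java's standard javax.sound.sampled API. The audio format, file name, and fixed recording duration are assumptions for the example; in the prototype, the resulting WAV file would then be sent to the Watson Speech to Text service to obtain a transcript.

```java
import javax.sound.sampled.*;
import java.io.File;

// Sketch of a speech-comment recorder: captures microphone input to a WAV file
// that can later be transcribed by a speech-to-text service.
public class CommentRecorder {
    public static void record(File outFile, long millis) throws Exception {
        AudioFormat format = new AudioFormat(16000f, 16, 1, true, false); // 16 kHz mono PCM
        TargetDataLine line = AudioSystem.getTargetDataLine(format);
        line.open(format);
        line.start();

        // Write the captured audio on a background thread; AudioSystem.write
        // blocks until the line is stopped and closed.
        Thread writer = new Thread(() -> {
            try {
                AudioSystem.write(new AudioInputStream(line), AudioFileFormat.Type.WAVE, outFile);
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        writer.start();

        Thread.sleep(millis); // record for the requested duration
        line.stop();
        line.close();
        writer.join();
    }

    public static void main(String[] args) throws Exception {
        record(new File("speech_comment.wav"), 5000); // record a 5-second comment
    }
}
```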

Next, we conducted a controlled laboratory study to understand how programmers interact with speech-based comments. We recruited 10 participants with varied levels of expertise in Java and designed a 2x2 within-subjects study. Participants saw two different source files, merge sort and quicksort, authored and annotated by an expert programmer working at Google. Each participant saw one of the programs with speech comments and the other with text comments. We altered method and variable names. The task was to comprehend the source code with the help of the comments. We tested comprehension with a post-task questionnaire and a semi-structured interview.

Findings: Some of the key findings were that speech comments should be kept within a minute and that transcripts of speech comments should always remain available. Moreover, speech comments were more helpful to new hires onboarding onto a new codebase. Participants also rated speech comments as most helpful when they describe the code flow and how multiple files work together to produce that flow.



Personalised and Augmented Web History


Re-finding previously visited information on the web has been a long-standing problem. Existing re-finding solutions rely on users manually entering user-defined tags or browsing through a laundry list of web pages in the browser's history. I investigated this problem with Shareen Mahmud, Borke Obada, and Amelia Cole.

Empathize: In the empathize stage, we surveyed the literature on information searching behaviors. We also conducted a semi-structured observation to understand how people search for information on the web and what strategies they use. We recruited 10 participants, gave them a search task, interrupted them in the middle of that task, and asked them to re-find one of the pages they had previously found. We also conducted an online survey and received 71 responses.

Define: Empathizing with the users taught us that (1) people need support in the form of cues to re-find information, and (2) people need tool support during the wayfinding behavior of re-finding episodes. Therefore, our primary design direction was to redesign Mozilla's web browsing history to support a rich set of contextual cues. Informed by the literature and the data from observations, interviews, and questionnaires, we identified the following contextual cues to aid information re-finding:

  1. Person: who you shared it with
  2. Website: where you saw it
  3. File: what type the file was
  4. Date: when you accessed it
  5. Activity: what you did with it
  6. Device: which device you saw it on

We chose to integrate the search history results with the contextual cue filters within the search engine results page, because searching on Google was the most preferred re-finding method and people often neglected to consult the web history page. A sketch of how a history entry could carry these cues follows below.
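The following is a minimal sketch, in Java for consistency with the rest of this page, of how a cue-augmented history entry and one cue filter might look. The class, field, and method names are illustrative assumptions, not the actual Firefox history data model.

```java
import java.time.LocalDate;
import java.util.List;
import java.util.stream.Collectors;

// Illustrative data model: a visited page annotated with the six contextual cues,
// plus one example filter over a single cue.
public class AugmentedHistory {
    static class HistoryEntry {
        String url;
        String sharedWith;   // Person: who you shared it with
        String website;      // Website: where you saw it
        String fileType;     // File: what type the file was
        LocalDate visitedOn; // Date: when you accessed it
        String activity;     // Activity: what you did with it (e.g., "downloaded")
        String device;       // Device: which device you saw it on

        HistoryEntry(String url, String sharedWith, String website, String fileType,
                     LocalDate visitedOn, String activity, String device) {
            this.url = url; this.sharedWith = sharedWith; this.website = website;
            this.fileType = fileType; this.visitedOn = visitedOn;
            this.activity = activity; this.device = device;
        }
    }

    /** Keeps only the entries visited on a given device (one example cue filter). */
    static List<HistoryEntry> filterByDevice(List<HistoryEntry> history, String device) {
        return history.stream()
                .filter(e -> e.device.equalsIgnoreCase(device))
                .collect(Collectors.toList());
    }
}
```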

Ideate: In this stage, we created task descriptions, requirements, a conceptual design, and storyboards. For the task descriptions, we proposed two different re-finding episodes: (1) looking for a specific web page, and (2) looking for any instance of a piece of information. We identified functional and non-functional requirements. It took a few iterations to settle on the conceptual design.

Prototype: We iterated over two different design concepts. We created paper prototypes of these design concepts and evaluated them by running Cognitive Walkthroughs with tech-savvy people. We received both positive and negative feedback on our design. We refined the design and created a medium-fidelity prototype using Justinmind.

Test: The challenge of testing re-finding episodes is that it is difficult to generate a re-finding episode that people actually care about. When we finally tested our prototype, we wanted to evaluate the usefulness of the cues and the usability of our design. We compared our tool against the more popular alternative, Google search. We created two isomorphic tasks: finding a previously seen recipe, and buying a previously seen voice assistant. We recruited 8 participants and observed their usage behavior, use of cues, page visits, and clicks. We also measured user engagement with a survey. Our results indicated that re-finding was more successful with our tool than with Google search (100% vs. 38% success rates). Some participants reported being confused by the contextual cues; however, all participants reported finding our tool useful.