HCI Research

Project Data

  • Location: Pittsburgh, PA
  • Start Date: September 2017
  • End Date: December 2017
Project Features

  • Built on and improved an existing Android accessibility app.
  • Helped create the structure of a new knowledge base to store all on-screen information (see the sketch after this list).
  • Worked in a team of 2.
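
One possible shape for a knowledge base entry is sketched below. This is a minimal illustration with hypothetical names (ScreenElement, ScreenRecord, and their fields are my own, not the project's actual schema); it just shows the kind of per-widget information an accessibility service can expose and a knowledge base could record.

```kotlin
// Hypothetical illustration of a knowledge base entry for on-screen
// information; not the project's actual schema.
data class ScreenElement(
    val packageName: String,             // app the element belongs to
    val className: String,               // widget type, e.g. android.widget.Button
    val resourceId: String?,             // view id, when the app exposes one
    val text: String?,                   // visible label or content
    val children: List<ScreenElement> = emptyList()  // nested on-screen structure
)

// A captured screen can then be stored as its root element plus some context.
data class ScreenRecord(
    val activityName: String,            // which screen of the app this was
    val capturedAtMillis: Long,          // when the demonstration happened
    val root: ScreenElement
)
```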

Overview

When Siri launched back in 2011, customers were told that it was "an intelligent assistant that helps you get things done just by asking." Now, years later, few people actually use it to boost their daily productivity. I, for example, only occasionally ask Siri what the weather is like, or to turn on Low Power Mode. This is perhaps due to the limited number of apps and services it supports, and the limited set of actions it can perform. The same is true for other smart assistants.


This is the exact problem we aimed to solve in this research project: design and build a smartphone assistant that is end-user programmable, one that users can teach new routines by demonstration. Specifically, we targeted the Android platform with an Android accessibility app that supports a multi-modal interface, combining on-screen information gathered from touch input with verbal command input.
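
As a rough illustration of the mechanism involved, the sketch below shows how an Android AccessibilityService can observe the UI tree when the user demonstrates a tap. The service name and logTree function are hypothetical and this is not the project's actual code; it only shows the kind of hook an accessibility app uses to capture on-screen information.

```kotlin
import android.accessibilityservice.AccessibilityService
import android.util.Log
import android.view.accessibility.AccessibilityEvent
import android.view.accessibility.AccessibilityNodeInfo

// Hypothetical service name; manifest registration and the accessibility
// permission setup are omitted for brevity.
class DemoRecorderService : AccessibilityService() {

    override fun onAccessibilityEvent(event: AccessibilityEvent?) {
        // When the user taps something during a demonstration, snapshot the
        // current UI tree; other event types (text edits, window changes)
        // can be handled the same way.
        if (event?.eventType == AccessibilityEvent.TYPE_VIEW_CLICKED) {
            rootInActiveWindow?.let { logTree(it) }
        }
    }

    // Recursively walk the UI tree, logging the per-widget details an
    // assistant could store in its knowledge base.
    private fun logTree(node: AccessibilityNodeInfo, depth: Int = 0) {
        Log.d(
            "DemoRecorder",
            " ".repeat(depth * 2) +
                "${node.className} text=${node.text} id=${node.viewIdResourceName}"
        )
        for (i in 0 until node.childCount) {
            node.getChild(i)?.let { logTree(it, depth + 1) }
        }
    }

    override fun onInterrupt() {
        // Required override; nothing to interrupt in this sketch.
    }
}
```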


Publications

  • Position paper for the CHI 2018 workshop on Rethinking Interaction.
  • Paper for VL/HCC 2018.