CHI 2016 Proceedings
CHI ’16: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems
SESSION: Social Media and Location Data
I Know Where You Live: Inferring Details of People’s Lives by Visualizing Publicly Shared Location Data
This research measures human performance in inferring the functional types (i.e., home, work, leisure and transport) of locations in geo-location data using different visual representations of the data (textual, static and animated visualizations) along with different amounts of data (1, 3 or 5 days). We first collected real-life geo-location data from tweets. We then asked the data owners to tag their location points, resulting in ground-truth data. Using this dataset we conducted an empirical study involving 45 participants to analyze how accurately they could infer the functional locations of the original data owners under different conditions, i.e., three data representations, three data densities and four location types. The study results indicate that while visual techniques perform better than textual ones, the functional locations of human activities can be inferred with relatively high accuracy even using only textual representations and a low density of location points. Workplace was more easily inferred than home, while transport was the functional location inferred with the highest accuracy. Our results also showed that it was easier to infer functional locations from data exhibiting more stable and consistent mobility patterns, which are thus more vulnerable to privacy disclosures. We discuss the implications of our findings in the context of privacy preservation and provide guidelines to users and companies to help preserve and safeguard people’s privacy.
Not at Home on the Range: Peer Production and the Urban/Rural Divide
Wikipedia articles about places, OpenStreetMap features, and other forms of peer-produced content have become critical sources of geographic knowledge for humans and intelligent technologies. In this paper, we explore the effectiveness of the peer production model across the rural/urban divide, a divide that has been shown to be an important factor in many online social systems. We find that in both Wikipedia and OpenStreetMap, peer-produced content about rural areas is of systematically lower quality, is less likely to have been produced by contributors who focus on the local area, and is more likely to have been generated by automated software agents (i.e. “bots”). We then codify the systemic challenges inherent to characterizing rural phenomena through peer production and discuss potential solutions.
App Movement: A Platform for Community Commissioning of Mobile Applications
There is an increasing demand to encourage inclusivity in the design of digital services. In response to this issue we have created App Movement, a platform that enables the promotion, collaborative design, and deployment of community-commissioned mobile applications. The platform facilitates collaborative customization of a common app template, for which the development and deployment of the app is fully automated. We describe the motivation, design and implementation of App Movement, and report the findings from an 8 month deployment wherein 27 campaigns were created, 11 of which have been successful, and over 1,600 users pledged their support using the platform. We present three case studies to demonstrate its use and adoption in successful and unsuccessful campaigns. We discuss the implications of these studies, including questions of governance (ownership of content, liability of user generated content and moderation), sustainability and the potential to extend App Movement beyond location-based review apps.
Generating Personalized Spatial Analogies for Distances and Areas
Distances and areas frequently appear in text articles. However, people struggle to understand these measurements when they cannot relate them to measurements of locations they are personally familiar with. We contribute tools for generating personalized spatial analogies: re-expressions that contextualize spatial measurements in terms of locations with similar measurements that are more familiar to the user. Our automated approach takes a user’s location and generates a personalized spatial analogy for a target distance or area using landmarks. We present an interactive application that tags distances, areas, and locations in a text article and presents personalized spatial analogies using interactive maps. We find that users who view a personalized spatial analogy map generated by our system rate the helpfulness of the information for understanding a distance or area 1.9 points higher (on a 7-point scale) than when they see the article with no spatial analogy, and 0.7 points higher than when they see a generic spatial analogy.
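The core matching step described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: each familiar landmark carries a precomputed distance from the user, and the analogy picks the landmark whose distance is closest to the target by ratio (so 18 km is a better analogy for 20 km than 2.5 km would be). The landmark names and ratio-based scoring are assumptions.

```python
# Hypothetical sketch of choosing a personalized spatial analogy:
# pick the familiar landmark whose distance best matches the target.

def best_spatial_analogy(target_km, landmark_distances):
    """landmark_distances: dict mapping landmark name -> distance in km."""
    def ratio_error(d):
        # Ratio of larger to smaller distance; 1.0 means a perfect match.
        hi, lo = max(d, target_km), min(d, target_km)
        return hi / lo
    name = min(landmark_distances, key=lambda n: ratio_error(landmark_distances[n]))
    return name, landmark_distances[name]

landmarks = {"airport": 18.0, "city library": 2.5, "state border": 110.0}
name, dist = best_spatial_analogy(20.0, landmarks)
# e.g. "That's about the distance from you to the airport (18 km)."
```

A full system would also need the geocoding and familiarity-ranking steps the abstract mentions; this shows only the final selection.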
SESSION: How Fast Can You Type on Your Phone?
IJQwerty: What Difference Does One Key Change Make? Gesture Typing Keyboard Optimization Bounded by One Key Position Change from Qwerty
Despite a significant body of research on optimizing virtual keyboard layouts, none of the resulting layouts has gained wide adoption, primarily due to the steep learning curve. To address this learning problem, we introduced three types of Qwerty constraints, Qwerty1, QwertyH1, and One-Swap bounds, into layout optimization, and investigated their effects on layout learnability and performance. This bounded optimization process leads to IJQwerty, which differs from Qwerty in only one pair of keys. Our theoretical analysis and user study show that IJQwerty improves the accuracy and input speed of gesture typing over Qwerty once a user reaches expert mode. IJQwerty is also extremely easy to learn: the initial upon-use text entry speed is the same as with Qwerty. Given its high performance and learnability, such a layout is more likely to gain wide adoption than previously proposed layouts. Our research also shows that the disparity from Qwerty substantially affects layout learning. To minimize the learning effort, a new layout needs to hold a strong resemblance to Qwerty.
DualKey: Miniature Screen Text Entry via Finger Identification
Fast and accurate access to keys for text entry remains an open question for miniature screens. Existing works typically use a cumbersome two-step selection process, first to zero-in on a particular zone and second to make the key selection. We introduce DualKey, a miniature screen text entry technique with a single selection step that relies on finger identification. We report on the results of a 10 day longitudinal study with 10 participants that evaluated speed, accuracy, and learning. DualKey outperformed the existing techniques on long-term performance with a speed of 19.6 WPM. We then optimized the keyboard layout for reducing finger switching time based on the study data. A second 10 day study with eight participants showed that the new sweqty layout improved upon DualKey even further to 21.59 WPM for long-term speed, was comparable to existing techniques on novice speed and outperformed existing techniques on novice accuracy rate.
One-Dimensional Handwriting: Inputting Letters and Words on Smart Glasses
We present 1D Handwriting, a unistroke gesture technique enabling text entry on a one-dimensional interface. The challenge is to map two-dimensional handwriting to a reduced one-dimensional space while achieving a balance between memorability and performance efficiency. After an iterative design process, we derive a set of ambiguous two-length unistroke gestures, each mapping to 1-4 letters. To input words, we design a Bayesian algorithm that takes into account the probability of gestures and the language model. To input letters, we design a pause gesture allowing users to switch into letter selection mode seamlessly. User studies show that 1D Handwriting significantly outperforms a selection-based technique (a variation of 1Line Keyboard) for both letter input (4.67 WPM vs. 4.20 WPM) and word input (9.72 WPM vs. 8.10 WPM). With extensive training, text entry rate can reach 19.6 WPM. Users’ subjective feedback indicates 1D Handwriting is easy to learn and efficient to use. Moreover, it has several potential applications for other one-dimensional constrained interfaces.
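The Bayesian word-input step described above resembles classic ambiguous-keyboard decoding (as in T9): each gesture covers a small letter group, and the decoder ranks candidate words by prior times likelihood. The sketch below is a minimal illustration under assumed data; the letter groups and the unigram language model are invented for the example, not the paper's actual mappings.

```python
# Minimal ambiguous-gesture word decoder: score each candidate word by
# P(word) * P(gesture sequence | word), with a uniform likelihood over
# the letters each gesture can produce. Groups and lexicon are illustrative.

GROUPS = {"g1": "abcd", "g2": "efgh", "g3": "ijkl", "g4": "mnop"}
LANGUAGE_MODEL = {"face": 0.004, "bach": 0.0001, "dace": 0.00005}

def decode(gesture_seq, lexicon=LANGUAGE_MODEL, groups=GROUPS):
    """Return the most probable word for a sequence of gesture ids."""
    best_word, best_score = None, 0.0
    for word, prior in lexicon.items():
        if len(word) != len(gesture_seq):
            continue
        likelihood = 1.0
        for letter, g in zip(word, gesture_seq):
            likelihood *= (1.0 / len(groups[g])) if letter in groups[g] else 0.0
        score = prior * likelihood
        if score > best_score:
            best_word, best_score = word, score
    return best_word

print(decode(["g2", "g1", "g1", "g2"]))  # "face": f-a-c-e fits g2,g1,g1,g2
```

A real decoder would use per-gesture recognition probabilities rather than a uniform likelihood, and a larger n-gram language model.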
A Cost-Benefit Study of Text Entry Suggestion Interaction
Mobile keyboards often present error corrections and word completions (suggestions) as candidates for anticipated user input. However, these suggestions are not cognitively free: they require users to attend, evaluate, and act upon them. To understand this trade-off between suggestion savings and interaction costs, we conducted a text transcription experiment that controlled interface assertiveness: the tendency for an interface to present itself. Suggestions were either always present (extraverted), never present (introverted), or gated by a probability threshold (ambiverted). Results showed that although increasing the assertiveness of suggestions reduced the number of keyboard actions to enter text and was subjectively preferred, the costs of attending to and using the suggestions impaired average time performance.
SESSION: Front Stage on Social Media
The Social Media Ecology: User Perceptions, Strategies and Challenges
Many existing studies of social media focus on only one platform, but the reality of users’ lived experiences is that most users incorporate multiple platforms into their communication practices in order to access the people and networks they desire to influence. In order to better understand how people make sharing decisions across multiple sites, we asked our participants (N=29) to categorize all modes of communication they used, with the goal of surfacing their mental models about managing sharing across platforms. Our interview data suggest that people simultaneously consider “audience” and “content” when sharing and these needs sometimes compete with one another; that they have the strong desire to both maintain boundaries between platforms as well as allowing content and audience to permeate across these boundaries; and that they strive to stabilize their own communication ecosystem yet need to respond to changes necessitated by the emergence of new tools, practices, and contacts. We unpack the implications of these tensions and suggest future design possibilities.
Sharing Personal Content Online: Exploring Channel Choice and Multi-Channel Behaviors
People share personal content online with varied audiences, as part of tasks ranging from conversational-style content sharing to collaborative activities. We use an interview- and diary-based study to explore: 1) what factors impact channel choice for sharing with particular audiences; and 2) what behavioral patterns emerge from the ability to combine or switch between channels. We find that in the context of different tasks, participants match channel features to selective-sharing and other task-based needs, shaped by recipient attributes and communication dynamics. Participants also combine multiple channels to create composite sharing features or reach broader audiences when one channel is insufficient. We discuss design implications of these channel dynamics.
Snap Decisions?: How Users, Content, and Aesthetics Interact to Shape Photo Sharing Behaviors
Participants in social media systems must balance many considerations when choosing what to share and with whom. Sharing with others invites certain risks, as well as potential benefits; achieving the right balance is even more critical when sharing photos, which can be particularly engaging, but potentially compromising. In this paper, we examine photo-sharing decisions as an interaction between high-level user preferences and specific features of the images being shared. Our analysis combines insights from a 96-user survey with metadata from 10.4M photos to develop a model integrating these perspectives to predict permissions settings for uploaded photos. We discuss implications, including how such a model can be applied to provide online sharing experiences that are more safe, more scalable, and more satisfying.
Does Saying This Make Me Look Good?: How Posters and Outsiders Evaluate Facebook Updates
People often try to impress their friends online, but we do not know how well they succeed or what they talk about to make themselves look good. Given known egocentric biases, which cause communicators to overestimate the extent to which audiences will understand the intent of their messages, and self-enhancement biases, which cause people to overvalue their own behavior, many self-presentation attempts are likely to fail. However, we do not know which topics cause such failure. In an empirical study, 1300 Facebook users evaluated their most recent status update in terms of how good it made them look. External judges also evaluated the same updates. Posters and outsiders agreed only modestly about how good an update made the poster appear (r=.36, p<.001). Posters generally thought that their posts made them look better than did the outsider judges. They also disagreed on which topics made them look good. Posters were especially likely to overestimate their self-presentation when they wrote about the mundane details of their daily life (e.g., Clothing, Sleep, or Religious imagery), but underestimated it when they wrote about family and relationships (e.g., Birthday, Father’s Day, Love).
SESSION: Families and Assistive Technology
Designing Smart Objects with Autistic Children: Four Design Exposés
This paper describes the design work being conducted as part of the OutsideTheBox project. Within a time frame of eight months, we engaged four children with autism in a participatory design process to develop their own smart objects. We re-interpreted Future Workshops and Co-operative Inquiry to demonstrate that a) autistic children can lead processes with a deliberately open design brief and b) this leads us to explore design spaces that are unimaginable for neuro-typical, adult designers. To capture these four design cases, we have developed Design Exposés, a concept that is inspired by annotated portfolios and Actor-Network Theory. We apply this concept to our cases and present four exposés that subsequently allow us to draw out intermediate-level design knowledge about co-creating technology with autistic children. We close by critically reflecting on the design processes as well as our concept of capturing them.
Investigating the Influence of Avatar Facial Characteristics on the Social Behaviors of Children with Autism
Autism spectrum disorder (ASD) is characterized by unusual social communication and interaction. These traits are often targets for intervention, particularly computer-based interventions (CBIs). We examined whether interactive behaviors in children with autism could be influenced by modifying the facial characteristics of computer avatars and how behavior toward avatars compared to that toward video. Participants spoke with a therapist over a modified videoconferencing system that permitted manipulation of her appearance (i.e., using cartoon or more realistic avatars versus video) and motion (i.e., exaggerating or damping facial movements). We measured the participants’ speech, gaze, and gestures. In the first study, we found that the appearance complexity of the avatar did not significantly affect any social interaction behaviors. However, the results of the second study suggest that exaggerated facial motion can improve nonverbal social behaviors, such as gaze and gesture. These findings have implications for character design in CBIs for ASD.
Changing Family Practices with Assistive Technology: MOBERO Improves Morning and Bedtime Routines for Children with ADHD
Families of children with Attention Deficit Hyperactivity Disorder (ADHD) often report morning and bedtime routines to be stressful and frustrating. Through a design process involving domain professionals and families we designed MOBERO, a smartphone-based system that assists families in establishing healthy morning and bedtime routines with the aim to assist the child in becoming independent and lowering the parents’ frustration levels. In a two-week intervention with 13 children with ADHD and their families, MOBERO significantly improved children’s independence and reduced parents’ frustration levels. Additionally, use of MOBERO was associated with a 16.5% reduction in core ADHD symptoms and an 8.3% improvement in the child’s sleep habits, both measured by standardized questionnaires. Our study highlights the potential of assistive technologies to change the everyday practices of families of children with ADHD.
Incloodle: Evaluating an Interactive Application for Young Children with Mixed Abilities
Every child should have an equal opportunity to learn, play, and participate in his or her life. In this work, we investigate how interactive technology design features support children with and without disabilities with inclusion during play. We developed four versions of Incloodle, a two-player picture-taking tablet application, designed to be inclusive of children with different abilities and needs. Each version of the application varied in (1) whether or not it enforced cooperation between children; and in (2) whether it prompted interactions through in-app characters or more basic instructions. A laboratory study revealed that technology-enforced cooperation was helpful for child pairs who needed scaffolding, but character-based prompting had little effect on children’s experiences. We provide an empirical evaluation of interactive technology for inclusive play and offer guidance for designing technology that facilitates inclusive play between young neurotypical and neurodiverse children.
SESSION: 3D Virtual Space
Dynamic Stereoscopic 3D Parameter Adjustment for Enhanced Depth Discrimination
Most modern stereoscopic 3D applications use fixed stereoscopic 3D parameters (separation and convergence) to render the scene on a 3D display. However, keeping these parameters fixed during use does not always provide the best experience, since it can reduce the depth perception possible in applications with large variability in object distances. We developed two stereoscopic rendering techniques that actively vary the stereo parameters based on the scene content. Our first algorithm calculates a low-resolution depth map of the scene and chooses ideal stereo parameters based on that depth map. Our second algorithm uses eye tracking data to obtain the user’s gaze direction and chooses ideal stereo parameters based on the distance of the gazed object. We evaluated our techniques in an experiment that uses three depth judgment tasks: depth ranking, relative depth judgment and path tracing. Our results indicate that variable stereo parameters provide enhanced depth discrimination compared to static parameters and were preferred by our participants over the traditional fixed parameter approach. We discuss our findings and possible implications for the design of future stereoscopic 3D applications.
Modeling the Impact of Depth on Pointing Performance
An important visual cue for the distance to a target is its binocular depth, the disparity between the left and right eyes. We examined mid-air pointing on a large screen, varying the physical distances (depths) to targets. Welford’s two-part formulation provided a better model than the one-part Fitts’s Law formulation to predict movement time from movement amplitude and target width. Angular measures suggested by Kopper et al. did not improve the model. Consistent variations of Shoemaker et al.’s k-factor suggest target depth plays a role similar to gain for mid-air pointing. We compared both physical and virtual targets to determine if artificial binocular depth cues induce the same performance as purely physical binocular depth cues. Variation of the k-factor was different when virtual depth and physical depth were not identical. This has implications for calibrating 3-D virtual environments and for the design of interactive 3-D pointing techniques for those environments.
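The two pointing models compared in this abstract can be written out explicitly. These are the standard textbook forms (Shannon formulation of Fitts's law and Welford's two-part variant), not reproduced from the paper itself:

```latex
\text{Fitts:}\quad MT = a + b \,\log_2\!\left(\frac{A}{W} + 1\right)
\qquad
\text{Welford:}\quad MT = a + b_1 \log_2 A + b_2 \log_2\!\frac{1}{W}
```

Here $MT$ is movement time, $A$ is movement amplitude, and $W$ is target width. Welford's two-part form gives amplitude and width separate coefficients, which is what allows target depth to modulate one term independently of the other, as the abstract's finding suggests.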
Compensating for Distance Compression in Audiovisual Virtual Environments Using Incongruence
A key requirement for a sense of presence in Virtual Environments (VEs) is for a user to perceive space as naturally as possible. One critical aspect is distance perception. When judging distances, compression is a phenomenon where humans tend to underestimate the distance between themselves and target objects (termed egocentric or absolute compression), and between other objects (exocentric or relative compression). Results of studies in virtual worlds rendered through head mounted displays are striking, demonstrating significant distance compression error. Distance compression is a multisensory phenomenon, where both audio and visual stimuli are often compressed with respect to their distances from the observer. In this paper, we propose and test a method for reducing crossmodal distance compression in VEs. We report an empirical evaluation of our method via a study of 3D spatial perception within a virtual reality (VR) head mounted display. Applying our method resulted in more accurate distance perception in a VE at longer range, and suggests a modification that could adaptively compensate for distance compression at both shorter and longer ranges. Our results have a significant and intriguing implication for designers of VEs: an incongruent audiovisual display, i.e. where the audio and visual information is intentionally misaligned, may lead to better spatial perception of a virtual scene.
miniStudio: Designers’ Tool for Prototyping Ubicomp Space with Interactive Miniature
Recently, it has become common for designers to deal with complex and large-scale ubicomp or IoT spaces. Designers without technical implementation skills have difficulties in prototyping such spaces, especially in the early phases of design. We present miniStudio, a designers’ tool for prototyping ubicomp space with proxemic interactions. It is built on designers’ existing software and modeling materials (Photoshop, Lego, and paper). Interactions can be defined in Photoshop based on five spatial relations: location, distance, motion, orientation, and custom. Projection-based augmented reality was applied to miniatures in order to enable tangible interactions and dynamic representations. Hidden marker stickers and a camera-projector system enable the unobtrusive integration of digital images on the physical miniature. Through a user study with 12 designers and researchers in the ubicomp field, we found that miniStudio supported rapid prototyping of large and complex ideas with multiple connected components. Based on the tool development and the study, we discuss the implications for prototyping ubicomp environments in the early phases of design.
SESSION: Mining Human Behaviors
Unsupervised Clickstream Clustering for User Behavior Analysis
Online services are increasingly dependent on user participation. Whether it’s online social networks or crowdsourcing services, understanding user behavior is important yet challenging. In this paper, we build an unsupervised system to capture dominating user behaviors from clickstream data (traces of users’ click events), and visualize the detected behaviors in an intuitive manner. Our system identifies “clusters” of similar users by partitioning a similarity graph (nodes are users; edges are weighted by clickstream similarity). The partitioning process leverages iterative feature pruning to capture the natural hierarchy within user clusters and produce intuitive features for visualizing and understanding captured user behaviors. For evaluation, we present case studies on two large-scale clickstream traces (142 million events) from real social networks. Our system effectively identifies previously unknown behaviors, e.g., dormant users, hostile chatters. Also, our user study shows people can easily interpret identified behaviors using our visualization tool.
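The similarity-graph construction this abstract describes can be illustrated with a small self-contained sketch. This is not the paper's system (which uses a more sophisticated partitioning with iterative feature pruning): here each user is reduced to a bag of click-event types, similarity is cosine over event frequencies, edges keep pairs above a threshold, and "clusters" are simply connected components. All names and the threshold are illustrative assumptions.

```python
# Toy clickstream clustering: users as event-frequency vectors, a
# similarity graph with thresholded cosine edges, and connected
# components as behavioral clusters.
import math
from collections import Counter

def cosine(u, v):
    dot = sum(u[k] * v.get(k, 0) for k in u)
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def cluster_users(clickstreams, threshold=0.8):
    """clickstreams: dict user -> list of event names. Returns list of clusters (sets)."""
    vecs = {u: Counter(events) for u, events in clickstreams.items()}
    users = list(vecs)
    adj = {u: set() for u in users}
    for i, a in enumerate(users):
        for b in users[i + 1:]:
            if cosine(vecs[a], vecs[b]) >= threshold:
                adj[a].add(b)
                adj[b].add(a)
    clusters, seen = [], set()
    for u in users:
        if u in seen:
            continue
        stack, comp = [u], set()
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            stack.extend(adj[x] - comp)
        seen |= comp
        clusters.append(comp)
    return clusters

streams = {
    "alice": ["post", "post", "like"],
    "bob":   ["post", "like", "post"],
    "carol": ["lurk", "lurk", "lurk"],
}
clusters = cluster_users(streams)  # alice and bob group together; carol is alone
```

The paper's contribution lies largely in what this sketch omits: hierarchical partitioning and the iterative feature pruning that yields interpretable features per cluster.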
Augur: Mining Human Behaviors from Fiction to Power Interactive Systems
From smart homes that prepare coffee when we wake, to phones that know not to interrupt us during important conversations, our collective visions of HCI imagine a future in which computers understand a broad range of human behaviors. Today our systems fall short of these visions, however, because this range of behaviors is too large for designers or programmers to capture manually. In this paper, we instead demonstrate it is possible to mine a broad knowledge base of human behavior by analyzing more than one billion words of modern fiction. Our resulting knowledge base, Augur, trains vector models that can predict many thousands of user activities from surrounding objects in modern contexts: for example, whether a user may be eating food, meeting with a friend, or taking a selfie. Augur uses these predictions to identify actions that people commonly take on objects in the world and estimate a user’s future activities given their current situation. We demonstrate Augur-powered, activity-based systems such as a phone that silences itself when the odds of you answering it are low, and a dynamic music player that adjusts to your present activity. A field deployment of an Augur-powered wearable camera resulted in 96% recall and 71% precision on its unsupervised predictions of common daily activities. A second evaluation where human judges rated the system’s predictions over a broad set of input images found that 94% were rated sensible.
Modeling and Understanding Human Routine Behavior
Human routines are blueprints of behavior, which allow people to accomplish purposeful repetitive tasks at many levels, ranging from the structure of their day to how they drive through an intersection. People express their routines through actions that they perform in the particular situations that triggered those actions. An ability to model routines and understand the situations in which they are likely to occur could allow technology to help people improve their bad habits, inexpert behavior, and other suboptimal routines. However, existing routine models do not capture the causal relationships between situations and actions that describe routines. Our main contribution is the insight that byproducts of an existing activity prediction algorithm can be used to model those causal relationships in routines. We apply this algorithm on two example datasets, and show that the modeled routines are meaningful: they are predictive of people’s actions, and the modeled causal relationships provide insights about the routines that match findings from previous research. Our approach offers a generalizable solution to model and reason about routines.
Setwise Comparison: Consistent, Scalable, Continuum Labels for Computer Vision
A growing number of domains, including affect recognition and movement analysis, require a single, real number ground truth label capturing some property of a video clip. We term this the provision of continuum labels. Unfortunately, there is often an unacceptable trade-off between label consistency and the efficiency of the labelling process with current tools. We present a novel interaction technique, setwise comparison, which leverages the intrinsic human capability for consistent relative judgements and the TrueSkill algorithm to solve this problem. We describe SorTable, a system demonstrating this technique. We conducted a real-world study where clinicians labelled videos of patients with multiple sclerosis for the ASSESS MS computer vision system. In assessing the efficiency-consistency trade-off of setwise versus pairwise comparison, we demonstrated that not only is setwise comparison more efficient, but it also elicits more consistent labels. We further consider how our findings relate to the interactive machine learning literature.
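The key mechanic here, turning one setwise ranking into many pairwise skill updates, can be sketched without the full TrueSkill machinery. The paper uses TrueSkill; as a self-contained stand-in, the sketch below uses a simple Elo-style update, which shares the relevant property that each within-set ordering decomposes into pairwise "wins". The clip names and K-factor are illustrative assumptions.

```python
# Decompose a best-first setwise ranking into pairwise outcomes and
# apply an Elo-style rating update for each implied win.
import itertools

def setwise_update(ratings, ranked_set, k=32):
    """ranked_set: clip ids ordered best-first; mutates ratings in place."""
    for winner, loser in itertools.combinations(ranked_set, 2):
        expected_win = 1.0 / (1.0 + 10 ** ((ratings[loser] - ratings[winner]) / 400))
        delta = k * (1.0 - expected_win)
        ratings[winner] += delta
        ratings[loser] -= delta

ratings = {"clip_a": 1000.0, "clip_b": 1000.0, "clip_c": 1000.0}
setwise_update(ratings, ["clip_b", "clip_a", "clip_c"])  # rater judged b > a > c
```

This is why setwise comparison is efficient: ranking one set of n clips yields n(n-1)/2 pairwise judgements at once, rather than one per screen as in pairwise comparison.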
SESSION: Behavioral Change
TimeAware: Leveraging Framing Effects to Enhance Personal Productivity
To help people enhance their personal productivity by providing effective feedback, we designed and developed TimeAware, a self-monitoring system for capturing and reflecting on personal computer usage behaviors. TimeAware employs an ambient widget to promote self-awareness and to lower the feedback access burden, and a web-based information dashboard to visualize people’s detailed computer usage. To examine the effect of framing on individuals’ productivity, we designed two versions of TimeAware, each with a different framing setting: one emphasizing productive activities (positive framing) and the other emphasizing distracting activities (negative framing). We conducted an eight-week deployment study (N = 24) and found a significant effect of framing on participants’ productivity: only participants in the negative framing condition improved their productivity. The ambient widget seemed to help sustain engagement with data and enhance self-awareness. We discuss how to leverage framing effects to help people enhance their productivity, and how to design a successful productivity monitoring tool.
Personal Tracking of Screen Time on Digital Devices
Numerous studies have tracked people’s everyday use of digital devices, but without consideration of how such data might be of personal interest to the user. We have developed a personal tracking application that enables users to automatically monitor their ‘screen time’ on mobile devices (iOS and Android) and computers (Mac and Windows). The application interface enables users to combine screen time data from multiple devices. We trialled the application for 28+ days with 21 users, collecting log data and interviewing each user. We found that there is interest in personal tracking in this area, but that the study participants were less interested in quantifying their overall screen time than in gaining data about their use of specific devices and applications. We found that personal tracking of device use is desirable for goals including: increasing productivity, disciplining device use, and cutting down on use.
Crowd-Designed Motivation: Motivational Messages for Exercise Adherence Based on Behavior Change Theory
Developing motivational technology to support long-term behavior change is a challenge. A solution is to incorporate insights from behavior change theory and design technology to tailor to individual users. We carried out two studies to investigate whether the processes of change, from the Transtheoretical Model, can be effectively represented by motivational text messages. We crowdsourced peer-designed text messages and coded them into categories based on the processes of change. We evaluated whether people perceived messages tailored to their stage of change as motivating. We found that crowdsourcing is an effective method to design motivational messages. Our results indicate that different messages are perceived as motivating depending on the stage of behavior change a person is in. However, while motivational messages related to later stages of change were perceived as motivational for those stages, the motivational messages related to earlier stages of change were not. This indicates that a person’s stage of change may not be the (only) key factor that determines behavior change. More individual factors need to be considered to design effective motivational technology.
Understanding the Mechanics of Persuasive System Design: A Mixed-Method Theory-driven Analysis of Freeletics
While we know that persuasive system design matters, we barely understand when persuasive strategies work and why they only work in some cases. We propose an approach to systematically understand and design for motivation, by studying the fundamental building blocks of motivation, according to the theory of planned behavior (TPB): attitude, subjective norm, and perceived control. We quantitatively analyzed (N=643) the attitudes, beliefs, and values of mobile fitness coach users with TPB. Capacity (i.e., perceived ability to exercise) had the biggest effect on users’ motivation. Using individual differences theory, we identified three distinct user groups, namely followers, hedonists, and achievers. With insights from semi-structured interviews (N=5) we derive design implications finding that transformation videos that feature other users’ success stories as well as suggesting an appropriate workout can have positive effects on perceived capacity. Practitioners and researchers can use our theory-based mixed-method research design to better understand user behavior in persuasive applications.
SESSION: Vulnerable Populations and Technological Support
Designing for Transient Use: A Human-in-the-loop Translation Platform for Refugees
Refugees undergoing resettlement in a new country post exile and migration face disruptive life changes. They rely on a network of individuals in the host country to help them rebuild their lives and livelihoods. We investigated whether technology could contribute to minimizing the vulnerabilities resettling refugees face. We designed Rivrtran, a messaging platform that provides ‘human-in-the-loop’ interpretation between individuals who don’t share a common language. We report the findings from the deployment of Rivrtran to mediate communication between resettling refugee families in the United States and the American families they are paired with who serve as their mentors. Our findings suggest that scaffolding communication in such a way provides refugees one means of accessing diversified help outside their cultural group. Moreover, human-in-the-loop interpretation may help to mitigate the effects of cultural barriers between those communicating. We establish the notion of designing for transient use in the development of systems to scaffold communication for short-term use by resettling refugees.
Syrian Refugees and Digital Health in Lebanon: Opportunities for Improving Antenatal Health
There are currently over 1.1 million Syrian refugees in need of healthcare services from an already overstretched Lebanese healthcare system. Access to antenatal care (ANC) services presents a particular challenge. We conducted focus groups with 59 refugees in rural Lebanon to identify contextual and cultural factors that can inform the design of digital technologies to support refugee ANC. Previously identified high utilization of smartphones by the refugee population offers a particular opportunity for using digital technology to support access to ANC as well as health advocacy. Our findings revealed a number of considerations that should be taken into account in the design of refugee ANC technologies, including: refugee health beliefs and experiences, literacy levels, refugee perceptions of negative attitudes of healthcare providers, and hierarchical and familial structures.
A Real-Time IVR Platform for Community Radio
Interactive Voice Response (IVR) platforms have been widely deployed in resource-limited settings. These systems tend to afford asynchronous push interactions, and within the context of health, provide medication reminders, descriptions of symptoms and tips on self-management. Here, we present the development of an IVR system for resource-limited settings that enables real-time, synchronous interaction. Inspired by community radio, and calls for health systems that are truly local, we developed “Sehat ki Vaani”. Sehat ki Vaani is a real-time IVR platform that enables hosting and participation in radio chat shows on community-led topics. We deployed Sehat ki Vaani with two communities in North India on topics related to the management of Type 2 diabetes and maternal health. Our deployments highlight the potential for synchronous IVR systems to offer community connection and localised sharing of experience, while also highlighting the complexity of producing, hosting and participating in radio shows in real time through IVR. We discuss the relative strengths and weaknesses of synchronous IVR systems, and highlight lessons learnt for interaction design in this area.
Contextualizing Intermediated Use in the Developing World: Findings from India & Ghana
This short paper extends the existing conceptualization of intermediated use in the developing world by demonstrating a range of informal practices that are conducted outside of a discrete (intermediary/beneficiary) user-interface interaction at a given point in time. Further, this paper also demonstrates how low-literate users may often voluntarily relinquish custody of an information resource in order to create and maintain intermediation. In this way we describe a broader conceptualization of intermediated use in the developing world that needs to take into account the entire sociotechnical workflow. This is particularly critical when explicitly designing for secondary/beneficiary users; it considers their specific requirements that often get overlooked while simultaneously revealing their vulnerabilities within the formal-informal continuum. We present findings from ethnographic work conducted in India and Ghana.
SESSION: Online Behaviors
Could This Be True?: I Think So! Expressed Uncertainty in Online Rumoring
Rumors are regular features of crisis events due to the extreme uncertainty and lack of information that often characterizes these settings. Despite recent research that explores rumoring during crisis events on social media platforms, limited work has focused explicitly on how individuals and groups express uncertainty. Here we develop and apply a flexible typology for types of expressed uncertainty. By applying our framework across six rumors from two crisis events we demonstrate the role of uncertainty in the collective sensemaking process that occurs during crisis events.
Order in the Warez Scene: Explaining an Underground Virtual Community with the CPR Framework
The paper analyzes the warez scene, an illegal underground subculture on the Internet, which specializes in removing copy protection from software and releasing the cracked software for free. Despite the lack of economic incentives and the absence of external laws regulating it, the warez scene has been able to self-govern and self-organize for more than three decades. Through a directed content analysis of the subculture’s digital traces, the paper argues that the ludic competition within the warez scene is an institution of collective action, and can, therefore, be approached as a common-pool resource (CPR). Subsequently, the paper uses Ostrom’s framework of long-enduring common-pool resource institutions to understand the warez scene’s longevity and ability to govern itself. Theoretical and design implications of these findings are then discussed.
SESSION: Collaborative Fabrication: Making Much of Machines
Understanding Newcomers to 3D Printing: Motivations, Workflows, and Barriers of Casual Makers
Interest in understanding and facilitating 3D digital fabrication is growing in the HCI research community. However, most of our insights about end-user interaction with fabrication are currently based on interactions of professional users, makers, and technology enthusiasts. We present a study of casual makers, users who have no prior experience with fabrication and mainly explore walk-up-and-use 3D printing services at public print centers, such as libraries, universities, and schools. We carried out 32 interviews with casual makers, print center operators, and fabrication experts to understand the motivations, workflows, and barriers in appropriating 3D printing technologies. Our results suggest that casual makers are deeply dependent on print center operators throughout the process from bootstrapping their 3D printing workflow, to seeking help and troubleshooting, to verifying their outputs. However, print center operators are usually not trained domain experts in fabrication and cannot always address the nuanced needs of casual makers. We discuss implications for optimizing 3D design tools and interactions that can better facilitate casual makers’ workflows.
How Novices Sketch and Prototype Hand-Fabricated Objects
We are interested in how to create digital tools to support informal sketching and prototyping of physical objects by novices. Achieving this goal first requires a deeper understanding of how non-professional designers generate, explore, and communicate design ideas with traditional tools, i.e., sketches on paper and hands-on prototyping materials. We describe a study framed around two all-day design charrettes where participants perform a complete design process: ideation sketching, concept development and presentation, fabrication planning documentation and collaborative fabrication of hand-crafted prototypes. This structure allows us to control key aspects of the design process while collecting rich data about creative tasks, including sketches on paper, physical models, and videos of collaboration discussions. Participants used a variety of drawing techniques to convey 3D concepts. They also extensively manipulated physical materials, such as paper, foam, and cardboard, both to support concept exploration and communication with design partners. Based on these observations, we propose design guidelines for CAD tools targeted at novice crafters.
RetroFab: A Design Tool for Retrofitting Physical Interfaces using Actuators, Sensors and 3D Printing
We present RetroFab, an end-to-end design and fabrication environment that allows non-experts to retrofit physical interfaces. Our approach allows for changing the layout and behavior of physical interfaces. Unlike customizing software interfaces, physical interfaces are often challenging to adapt because of their rigidity. With RetroFab, a new physical interface is designed that serves as a proxy interface for the legacy controls that are now operated by actuators. RetroFab makes this concept of retrofitting devices available to non-experts by automatically generating an enclosure structure from an annotated 3D scan. This enclosure structure holds together actuators, sensors as well as components for the redesigned interface. To allow retrofitting a wide variety of legacy devices, the RetroFab design tool comes with a toolkit of 12 components. We demonstrate the versatility and novel opportunities of our approach by retrofitting five domestic objects and exploring their use cases. Preliminary user feedback reports on the experience of retrofitting devices with RetroFab.
HotFlex: Post-print Customization of 3D Prints Using Embedded State Change
While 3D printing offers great design flexibility before the object is printed, it is very hard for end-users to customize a 3D-printed object to their specific needs after it is printed. We propose HotFlex: a new approach allowing precisely located parts of a 3D object to transition on demand from a solid into a deformable state and back. This approach enables intuitive hands-on remodeling, personalization, and customization of a 3D object after it is printed. We introduce the approach and present an implementation based on computer-controlled printed heating elements that are embedded within the 3D object. We present a set of functional patterns that act as building blocks and enable various forms of hands-on customization. Furthermore, we demonstrate how to integrate sensing of user input and visual output. A series of technical experiments and various application examples demonstrate the practical feasibility of the approach.
SESSION: Learning Feedback
Effects of Pedagogical Agent’s Personality and Emotional Feedback Strategy on Chinese Students’ Learning Experiences and Performance: A Study Based on Virtual Tai Chi Training Studio
In virtual learning environments, both personality and emotional features of animated pedagogical agents (APAs) may influence learning. To investigate this question, we developed four APAs with two distinct personality types and two sets of gestures expressing distinct emotional feedback. Effects of APAs’ personality types and emotional feedback strategies on learning experiences and performance were assessed experimentally using a virtual Tai Chi training system. Fifty-six participants completed the experiment. Results showed that a positive emotional feedback strategy led to better learning experiences and performance than a negative feedback strategy. Moreover, personality type had a significant effect on learning: Choleric APAs led to better performance than Phlegmatic APAs. Personality types moderated the effect of emotional feedback on learning satisfaction. Our study demonstrates that APAs’ personality types and emotional feedback are important design parameters for virtual learning environments.
MapSense: Multi-Sensory Interactive Maps for Children Living with Visual Impairments
We report on the design process leading to the creation of MapSense, a multi-sensory interactive map for visually impaired children. We conducted a formative study in a specialized institute to understand children’s educational needs, their context of care and their preferences regarding interactive technologies. The findings (1) outline the needs for tools and methods to help children acquire spatial skills and (2) provide four design guidelines for educational assistive technologies. Based on these findings and an iterative process, we designed MapSense and deployed it in the institute over two days. It enables collaborations between children with a broad range of impairments, proposes reflective and ludic scenarios and allows caretakers to customize it as they wish. A field experiment reveals that both children and caretakers considered the system successful and empowering.
Framing Feedback: Choosing Review Environment Features that Support High Quality Peer Assessment
Peer assessment is rapidly growing in online learning, as it presents a method to address scalability challenges. However, research suggests that the benefits of peer review are obtained inconsistently. This paper explores why, introducing three ways that framing task goals significantly changes reviews. Three experiments manipulated features in the review environment. First, adding a numeric scale to open text reviews was found to elicit more explanatory, but lower quality reviews. Second, structuring a review task into short, chunked stages elicited more diverse feedback. Finally, showing reviewers a draft along with finished work elicited reviews that focused more on the work’s goals than aesthetic details. These findings demonstrate the importance of carefully structuring online learning environments to ensure high quality peer reviews.
Revising Learner Misconceptions Without Feedback: Prompting for Reflection on Anomalies
The Internet has enabled learning at scale, from Massive Open Online Courses (MOOCs) to Wikipedia. But online learners may become passive, instead of actively constructing knowledge and revising their beliefs in light of new facts. Instructors cannot directly diagnose thousands of learners’ misconceptions and provide remedial tutoring. This paper investigates how instructors can prompt learners to reflect on facts that are anomalies with respect to their existing misconceptions, and how to choose these anomalies and prompts to guide learners to revise incorrect beliefs without any feedback. We conducted two randomized experiments with online crowd workers learning statistics. Results show that prompts to explain why these anomalies are true drive revision towards correct beliefs. But prompts to simply articulate thoughts about anomalies have no effect on learning. Furthermore, we find that explaining multiple anomalies is more effective than explaining only one, but the anomalies should rule out multiple misconceptions simultaneously.
SESSION: Visual Design Principles for Unconventional Displays
Designing Visual Complexity for Dual-screen Media
So many people are now using handheld second screens whilst watching TV that application developers and broadcasters are designing companion applications: second-screen content that accompanies a TV programme. The nature of such dual-screen use cases inherently causes attention to be split, somewhat unpredictably. Dual-screen complexity, a clear factor in this attention split, is largely unexplored in the literature and will have an unknown (and likely negative) impact on user experience (UX). Therefore, we use empirical techniques to investigate the objective and subjective effect of dual-screen visual complexity on attention distribution in a companion content scenario. Our sequence of studies culminates in the deployment of a companion application prototype that supports adjustment of complexity (by either content curator or viewer) to allow convergence on optimum experience. Our findings assist the effective design of dual-screen content, informing content providers how to manage dual-screen complexity for enhanced UX through a more blended, complementary dual-screen experience.
Hidden in Plain Sight: an Exploration of a Visual Language for Near-Eye Out-of-Focus Displays in the Peripheral View
In this paper, we set out to determine what constitutes an appropriate visual language for information presented on near-eye out-of-focus displays. These displays are positioned in a user’s peripheral view, very near to the user’s eyes, for example on the inside of the temples of a pair of glasses. We explored the usable display area, the role of spatial and retinal variables, and the influence of motion and interaction for such a language. Our findings show that a usable visual language can be accomplished by limiting the possible shapes and by making clever use of orientation and meaningful motion. We found that motion in particular is very important to improve perception and comprehension of what is being displayed on near-eye out-of-focus displays, and that perception is further improved if direct interaction with the content is allowed.
Investigating Text Legibility on Non-Rectangular Displays
Emerging technologies allow for the creation of non-rectangular displays with virtually no constraints on shape. However, the introduction of such displays radically deviates from the prevailing tradition of placing content on rectangular screens and raises fundamental design questions. Among these is the foremost question of how to legibly present text. We address this fundamental concern through a multi-part exploration that includes: (1) a focus-group study from which we collected free-form display scenarios and extracted display shape properties; (2) a framework that identifies different mappings of text onto a non-rectangular shape and formulates hypotheses concerning legibility for different display shape properties; and (3) a series of quantitative text legibility studies to assess our hypotheses. Our results agree with and extend other findings in the existing literature on text legibility, but they also uncover unique instances in which different rules need to be applied for non-rectangular displays. These results also provide guidelines for the design of visual interfaces.
The Effect of Focus Cues on Separation of Information Layers
Our eyes use multiple cues to perceive depth. Current 3D displays do not support all depth cues humans can perceive. While they support binocular disparity and convergence, no commercially available 3D display supports focus cues. Using focus cues requires accommodation, i.e., changing the shape of the eye’s lens to focus at a particular distance. Previous work proposed multilayer and light field displays that require the eye to accommodate. Such displays enable the user to focus on different depths and blur out content that is out of focus. Thereby, they might ease the separation of content displayed on different depth layers. In this paper we investigate the effect of focus cues by comparing 3D shutter glasses with a multilayer display. We show that recognizing content displayed on a multilayer display takes less time and results in fewer errors compared to shutter glasses. We further show that separating overlapping content on multilayer displays again takes less time, results in fewer errors, and is less demanding. Hence, we argue that multilayer displays are superior to standard 3D displays if layered 3D content is displayed, and they have the potential to extend the design space of standard GUIs.
SESSION: Privacy – Social and Geolocated
The Geography and Importance of Localness in Geotagged Social Media
Geotagged tweets and other forms of social media volunteered geographic information (VGI) are becoming increasingly critical to many applications and scientific studies. An important assumption underlying much of this research is that social media VGI is “local”, or that its geotags correspond closely with the general home locations of its contributors. We demonstrate through a study on three separate social media communities (Twitter, Flickr, Swarm) that this localness assumption holds in only about 75% of cases. In addition, we show that the geographic contours of localness follow important sociodemographic trends, with social media in, for instance, rural areas and older areas, being substantially less local in character (when controlling for other demographics). We demonstrate through a case study that failure to account for non-local social media VGI can lead to misrepresentative results in social media VGI-based studies. Finally, we compare methods for determining localness, finding substantial disagreement in certain cases, and highlight new best practices for social media VGI-based studies and systems.
Usability and Security of Text Passwords on Mobile Devices
Recent research has improved our understanding of how to create strong, memorable text passwords. However, this research has generally been in the context of desktops and laptops, while users are increasingly creating and entering passwords on mobile devices. In this paper we study whether recent password guidance carries over to the mobile setting. We compare the strength and usability of passwords created and used on mobile devices with those created and used on desktops and laptops, while varying password policy requirements and input methods. We find that creating passwords on mobile devices takes significantly longer and is more error prone and frustrating. Passwords created on mobile devices are also weaker, but only against attackers who can make more than 10^13 guesses. We find that the effects of password policies differ between the desktop and mobile environments, and suggest ways to ease password entry for mobile users.
Evaluation of Personalized Security Indicators as an Anti-Phishing Mechanism for Smartphone Applications
Mobile application phishing happens when a malicious mobile application masquerades as a legitimate one to steal user credentials. Personalized security indicators may help users to detect phishing attacks, but rely on the user’s alertness. Previous studies in the context of website phishing have shown that users tend to ignore personalized security indicators and fall victim to attacks despite their deployment. Consequently, the research community has deemed personalized security indicators an ineffective phishing detection mechanism. We revisit the question of personalized security indicator effectiveness and evaluate them in the previously unexplored and increasingly important context of mobile applications. We conducted a user study with 221 participants and found that the deployment of personalized security indicators decreased the phishing attack success rate to 50%. Personalized security indicators can, therefore, help phishing detection in mobile applications and their reputation as an anti-phishing mechanism in the mobile context should be reconsidered.
Computationally Mediated Pro-Social Deception
Deception is typically regarded as a morally impoverished choice. However, in the context of increasingly intimate, connected and ramified systems of online interaction, manipulating information in ways that could be considered deceptive is often necessary, useful, and even morally justifiable. In this study, we apply a speculative design approach to explore the idea of tools that assist in pro-social forms of online deception, such as those that conceal, distort, falsify and omit information in ways that promote sociality. In one-on-one semi-structured interviews, we asked 15 participants to respond to a selection of speculations, consisting of imagined tools that reify particular approaches to deception. Participants reflected upon potential practical, ethical, and social implications of the use of such tools, revealing a variety of ways such tools might one day encourage polite behaviour, support individual autonomy, provide a defence against privacy intrusions, navigate social status asymmetries, and even promote more open, honest behaviour.
SESSION: Social Media Engagement
Changes in Engagement Before and After Posting to Facebook
The asynchronous nature of communications on social network sites creates a unique opportunity for studying how posting content interacts with individuals’ engagement. This study focuses on the behavioral changes occurring hours before and after contribution to better understand the changing needs and preferences of contributors. Using observational data analysis of individuals’ activity on Facebook, we test hypotheses regarding the motivations for site visits, changes in the distribution of attention to content, and shifts in decisions to interact with others. We find that after posting content people are intrinsically motivated to visit the site more often, are more attentive to content from friends (but not others), and choose to interact more with friends (in large part due to reciprocity). In addition, contributors are more active on the site hours before posting and remain more active for less than a day afterwards. Our study identifies a unique pattern of engagement that accompanies contribution and can inform the design of social network sites to better support contributors.
Fast, Cheap, and Good: Why Animated GIFs Engage Us
Animated GIFs have been around since 1987 and recently gained more popularity on social networking sites. Tumblr, a large social networking and micro-blogging platform, is a popular venue to share animated GIFs. Tumblr users follow blogs, generating a feed of posts, and choose to “like” or to “reblog” favored posts. In this paper, we use these actions as signals to analyze the engagement of over 3.9 million posts, and conclude that animated GIFs are significantly more engaging than other kinds of media. We follow this finding with deeper visual analysis of nearly 100k animated GIFs and pair our results with interviews with 13 Tumblr users to find out what makes animated GIFs engaging. We found that the animation, lack of sound, immediacy of consumption, low bandwidth and minimal time demands, the storytelling capabilities and utility for expressing emotions were significant factors in making GIFs the most engaging content on Tumblr. We also found that engaging GIFs contained faces and had higher motion energy, uniformity, resolution and frame rate. Our findings connect to media theories and have implications for the design of effective content dashboards, video summarization tools and ranking algorithms to enhance engagement.
Engineering Information Disclosure: Norm Shaping Designs
Nudging behaviors through user interface design is a practice that is well-studied in HCI research. Corporations often use this knowledge to modify online interfaces to influence user information disclosure. In this paper, we experimentally test the impact of norm-shaping design patterns on information-divulging behavior. We show that (1) a set of images, biased toward more revealing figures, changes subjects’ personal views of appropriate information to share; (2) these shifts in perception significantly increase the probability that a subject divulges personal information; and (3) these shifts also increase the probability that the subject advises others to do so. Our main contribution is empirically identifying a key mechanism by which norm-shaping designs can change beliefs and subsequent disclosure behaviors.
A Market in Your Social Network: The Effects of Extrinsic Rewards on Friendsourcing and Relationships
Friendsourcing consists of broadcasting questions and help requests to friends on social networking sites. Despite its potential value, friendsourcing requests often fall on deaf ears. One way to improve response rates and motivate friends to undertake more effortful tasks may be to offer extrinsic rewards, such as money or a gift, for responding to friendsourcing requests. However, past research suggests that these extrinsic rewards can have unintended consequences, including undermining intrinsic motivations and undercutting the relationship between people. To explore the effects of extrinsic reward on friends’ response rate and perceived relationship, we conducted an experiment on a new friendsourcing platform – Mobilyzr. Results indicate that large extrinsic rewards increase friends’ response rates without reducing the relationship strength between friends. Additionally, the extrinsic rewards allow requesters to explain away the failure of friendsourcing requests and thus preserve their perceptions of relationship ties with friends.
SESSION: Computer Supported Parenting
LGBT Parents and Social Media: Advocacy, Privacy, and Disclosure during Shifting Social Movements
Increasing numbers of American parents identify as lesbian, gay, bisexual, or transgender (LGBT). Shifting social movements are beginning to achieve greater recognition for LGBT parents and more rights for their families; however, LGBT parents still experience stigma and judgment in a variety of social contexts. We interviewed 28 LGBT parents to investigate how they navigate their online environments in light of these societal shifts. We find that 1) LGBT parents use social media sites to detect disapproval and identify allies within their social networks; 2) LGBT parents become what we call incidental advocates, when everyday social media posts are perceived as advocacy work even when not intended as such; and 3) for LGBT parents, privacy is a complex and collective responsibility, shared with children, partners, and families. We consider the complexities of LGBT parents’ online disclosures in the context of shifting social movements and discuss the importance of supporting individual and collective privacy boundaries in these contexts.
Information Seeking Practices of Parents: Exploring Skills, Face Threats and Social Networks
Parents are often responsible for finding, selecting, and facilitating their children’s out-of-school learning experiences. One might expect that the recent surge in online educational tools and the vast online network of information about informal learning would make this easier for all parents. Instead, the increase in these free, accessible resources is contributing to an inequality of use between children from lower and higher socio-economic status (SES). Through over 60 interviews with a diverse group of parents, we explored parents’ ability to find learning opportunities and their role in facilitating educational experiences for their children. We identified differences, based on SES, in parents’ use of online social networks to find learning opportunities for their children. Building upon these findings, we conducted a national survey in partnership with ACT, an educational testing services organization, to understand whether these differences were generalizable to, and consistent among, a broader audience.
“Best of Both Worlds”: Opportunities for Technology in Cross-Cultural Parenting
Families are becoming more culturally heterogeneous due to a rise in intermarriage, geographic mobility, and access to a greater diversity of cultural perspectives online. Investigating the challenges of cross-cultural parenting can help us support this growing demographic, as well as better understand how families integrate and negotiate advice from diverse online and offline sources in making parenting decisions. We interviewed parents from 18 families to understand the practices they adopt to meet the challenges of cross-cultural parenting. We investigated how these families respond to conflicts while integrating diverse cultural views, as well as how they utilize the wealth of parenting resources available online in navigating these tasks. We identify five themes focused on how these families find and evaluate advice, connect with social support, resolve intra-family tensions, incorporate multicultural practices, and seek out diverse views. Based on our findings, we contribute three implications for design and translations of these implications to concrete technology ideas that aim to help families better integrate multiple cultures into everyday life.
Screen Time Tantrums: How Families Manage Screen Media Experiences for Toddlers and Preschoolers
Prior work shows that setting limits on young children’s screen time is conducive to healthy development but can be a challenge for families. We investigate children’s (age 1 – 5) transitions to and from screen-based activities to understand the boundaries families have set and their experiences living within them. We report on interviews with 27 parents and a diary study with a separate set of 28 families examining these transitions. These families turn on screens primarily to facilitate parents’ independent activities. Parents feel this is appropriate but self-audit and express hesitation, as they feel they are benefiting from an activity that can be detrimental to their child’s well-being. We found that families turn off screens when parents are ready to give their child their full attention and technology presents a natural stopping point. Transitioning away from screens is often painful, and several predictive factors determine how painful a transition will be. Technology-mediated transitions are significantly more successful than parent-mediated transitions, suggesting that the design community has the power to make this experience better for parents and children by creating technologies that facilitate boundary-setting and respect families’ self-defined limits.
SESSION: Personal Informatics
Dear Data
GenomiX: A Novel Interaction Tool for Self-Exploration of Personal Genomic Data
The increase in the availability of personal genomic data to lay consumers using online services poses a challenge to HCI researchers: such data are complex and sensitive, involve multiple dimensions of uncertainty, and can have substantial implications for individuals’ well-being. Personal genomic data are also unique because unlike other personal data, which constantly change, genomic data are largely stable during a person’s lifetime; it is their interpretation and implications that change over time as new medical research exposes relationships between genes and health. In this paper, we present a novel tool for self-exploration of personal genomic data. To evaluate the usability and utility of the tool, we conducted the first study to date of a genome interpretation tool in which participants explored their own personal genomic data. We conclude by offering design implications for the development of interactive personal genomic reports.
Taking 5: Work-Breaks, Productivity, and Opportunities for Personal Informatics for Knowledge Workers
Taking breaks from work is an essential and universal practice. In this paper, we extend current research on productivity in the workplace to consider the break habits of knowledge workers and explore opportunities of break logging for personal informatics. We report on three studies. Through a survey of 147 U.S.-based knowledge workers, we investigate what activities respondents consider to be breaks from work, and offer an understanding of the benefits workers seek when they take breaks. We then present results from a two-week in-situ diary study with 28 participants in the U.S. who logged 800 breaks, offering insights into the effect of work breaks on productivity. We finally explore the space of information visualization of work breaks and productivity in a third study. We conclude with a discussion of implications for break recommendation systems, availability and interruptibility research, and the quantified workplace.
Metadating: Exploring the Romance and Future of Personal Data
We introduce Metadating — a future-focused research and speed-dating event where single participants were invited to “explore the romance of personal data”. Participants created “data profiles” about themselves, and used these to “date” other participants. In the rich context of dating, we study how personal data is used conversationally to communicate and illustrate identity. We note the manner in which participants carefully curated their profiles, expressing ambiguity before detail, illustration before accuracy. Our findings motivate a set of data services and features, each concerned with representing and curating data in new ways, beyond a focus on purely rational or analytic relationships with a quantified self. Through this, we build on emerging interest in “lived informatics” and raise questions about the experience and social reality of a “data-driven life”.
Design Opportunities in Three Stages of Relationship Development between Users and Self-Tracking Devices
Recently, self-tracking devices such as wearable activity trackers have become more available to end users. While these emerging products are imbued with new characteristics in terms of human-computer interaction, it is still unclear how to describe and design for user experience in such devices. In this paper, we present a three-week field study, which aimed to unfold users’ experience with wearable activity trackers. Drawing from Knapp’s model of interaction stages in interpersonal relationship development, we propose three stages of relationship development between users and self-tracking devices: initiation & experimentation, intensifying & integration, and stagnation & termination. We highlight the challenges in each stage and design opportunities for future self-tracking devices.
SESSION: Older Adult Support
Designing for the Other ‘Hereafter’: When Older Adults Remember about Forgetting
Designing to support memory for older individuals is a complex challenge in human-computer interaction (HCI) research. Past literature on human memory has mapped processes for recalling past experiences, learning new things, remembering to carry out future intentions and the importance of attention. However, the understanding of how older adults perceive forgetting in daily life remains limited. This paper narrows this gap through a study with older persons (n=18) living independently using self-reporting and semi-structured focus groups to explore what they forget, how they react, and what mechanisms they put in place to recover from and avoid forgetting. Findings include occurrences of prospective and retrospective memory lapses, conflicting negative and neutral perceptions, and techniques to manage forgetting. Participant responses indicate that an awareness of forgetting fosters internal tensions among older adults, thereby creating opportunities for further design research, e.g., to defuse and normalise these reactions.
Typing Tutor: Individualized Tutoring in Text Entry for Older Adults Based on Input Stumble Detection
Many older adults are interested in smartphones. However, most of them encounter difficulties in self-instruction and need support. Text entry, which is essential for various applications, is one of the most difficult operations to master. In this paper, we propose Typing Tutor, an individualized tutoring system for text entry that detects input stumbles and provides instructions. By conducting two user studies, we clarify the common difficulties that novice older adults experience and how skill level is related to input stumbles. Based on these studies, we develop Typing Tutor to support learning how to enter text on a smartphone. A two-week evaluation experiment with novice older adults (65+) showed that Typing Tutor was effective in improving their text entry proficiency, especially in the initial stage of use.
Not For Me: Older Adults Choosing Not to Participate in a Social Isolation Intervention
This paper considers what we can learn from the experiences of people who choose not to participate in technology-based social interventions. We conducted ethnographically-informed field studies with socially isolated older adults, who used and evaluated a new iPad application designed to help build new social connections. In this paper we reflect on how the values and assumptions guiding the technological intervention were not always shared by those participating in the evaluation. Drawing on our field notes and interviews with the older adults who chose to discontinue participation, we use personas to illustrate the complexities and tensions involved in individual decisions to not participate. This analysis contributes to HCI research calling for a more critical perspective on technological interventions. We provide detailed examples highlighting the complex circumstances of our non-participants’ lives, present a framework that outlines the socio-technical context of non-participation, and use our findings to promote reflective practice in HCI research that aims to address complex social issues.
SESSION: Real Reality Interfaces
The Augmented Climbing Wall: High-Exertion Proximity Interaction on a Wall-Sized Interactive Surface
We present the design and evaluation of the Augmented Climbing Wall (ACW). The system combines computer vision and interactive projected graphics for motivating and instructing indoor wall climbing. We have installed the system in a commercial climbing center, where it has been successfully used by hundreds of climbers, including both children and adults. Our primary contribution is a novel movement-based game system that can inform the design of future games and augmented sports. We evaluate ACW based on three user studies (N=50, N=10, N=10) and further observations and interviews. We highlight three central themes of how digital augmentation can contribute to a sport: increasing diversity of movement and challenges, enabling user-created content in an otherwise risky environment, and enabling procedurally generated content. We further discuss how ACW represents an underexplored class of interactive systems, i.e., proximity interaction on wall-sized interactive surfaces, which presents novel human-computer interaction challenges.
BitDrones: Towards Using 3D Nanocopter Displays as Interactive Self-Levitating Programmable Matter
We present BitDrones, a toolbox for building interactive real reality 3D displays that use nano-quadcopters as self-levitating tangible building blocks. Our prototype is a first step towards interactive self-levitating programmable matter, in which the user interface is represented using Catomic structures. We discuss three types of BitDrones: PixelDrones, equipped with an RGB LED and a small OLED display; ShapeDrones, augmented with an acrylic mesh spun over a 3D printed frame in a larger geometric shape; and DisplayDrones, fitted with a thin-film 720p touchscreen. We present a number of unimanual and bimanual input techniques, including touch, drag, throw and resize of individual drones and compound models, as well as user interface elements such as self-levitating cone trees, 3D canvases and alert boxes. We describe application scenarios and depict future directions towards creating high-resolution self-levitating programmable matter.
Pmomo: Projection Mapping on Movable 3D Object
We introduce Pmomo (an acronym for projection mapping on movable objects), a dynamic projection mapping system that tracks the 6-DOF pose of a real-world object and shades it with virtual 3D content by projection. The system can precisely lock the projection onto the moving object in real time, even for objects with complex geometry. Using a depth camera, we developed a novel and robust tracking method that samples the structure of the object into a low-density point cloud, then performs an adaptive searching scheme for the registration procedure. As a fully interactive system, our method can handle both internal and external complex occlusions, and can quickly recover the object even after losing track. To further improve the realism of the projected virtual textures, our system culls occluded regions from the projection using a facet-covering method. As a result, the Pmomo system enables new interactive Augmented Reality applications that require high-quality dynamic projection effects.
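The abstract mentions registering a sampled low-density point cloud against depth-camera data, but does not detail the authors’ adaptive searching scheme. The generic building block behind such registration, iterative closest point (ICP) with a rigid Kabsch fit, can be sketched as follows; this is an illustrative sketch of the standard technique, not Pmomo’s actual implementation, and the function names are my own:

```python
import numpy as np

def kabsch(P, Q):
    """Rigid transform (R, t) that best aligns point set P onto Q (Kabsch)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])                # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

def icp(source, target, iters=20):
    """Minimal ICP: alternate nearest-neighbour matching and pose re-fitting."""
    src = source.copy()
    for _ in range(iters):
        # brute-force nearest-neighbour correspondences (fine for small clouds)
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matched = target[d.argmin(axis=1)]
        R, t = kabsch(src, matched)
        src = src @ R.T + t
    return src
```

A real system would replace the brute-force matching with a spatial index (e.g., a k-d tree) and add outlier rejection for occluded points.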
Combining Shape-Changing Interfaces and Spatial Augmented Reality Enables Extended Object Appearance
We propose combining shape-changing interfaces and spatial augmented reality for extending the space of appearances and interactions of actuated interfaces. While shape-changing interfaces can dynamically alter the physical appearance of objects, the integration of spatial augmented reality additionally allows for dynamically changing objects’ optical appearance with high detail. This way, devices can render currently challenging features such as high frequency texture or fast motion. We frame this combination in the context of computer graphics with analogies to established techniques for increasing the realism of 3D objects such as bump mapping. This extensible framework helps us identify challenges of the two techniques and benefits of their combination. We utilize our prototype shape-changing device enriched with spatial augmented reality through projection mapping to demonstrate the concept. We present a novel mechanical distance-fields algorithm for real-time fitting of mechanically constrained shape-changing devices to arbitrary 3D graphics. Furthermore, we present a technique for increasing effective screen real estate for spatial augmented reality through view-dependent shape change.
SESSION: Sociotechnical Assemblage, Participation, Interaction & Materiality
The Ethics of Unaware Participation in Public Interventions
Interaction design is increasingly merging with designing our everyday environment. Trialing and evaluating such designs in an ecologically valid way often requires that they be installed in public space without clearly communicating their nature as trials. This leads to unaware participation in what, in fact, is an experimental intervention.
This article focuses on the ethical considerations that arise from doing, and studying, interventions in public space, including but not restricted to interactive installations. It argues that under certain circumstances, such as when the known risks are low and the intervention presents sufficient support for avoiding involvement, active participation can be considered implicit consent. We revisit some example interventions from the literature and the press to scrutinize the potential risks and pitfalls associated with unaware participation.
The Poetics of Socio-Technical Space: Evaluating the Internet of Things Through Craft
Drawing on semi-structured interviews and cognitive mapping with 14 craftspeople, this paper analyzes the socio-technical arrangements of people and tools in the context of workspaces and productivity. Using actor-network theory and the concept of companionability, both of which emphasize the role of human and non-human actants in the socio-technical fabrics of everyday life, I analyze the relationships between people, productivity and technology through the following themes: embodiment, provenance, insecurity, flow and companionability. The discussion section develops these themes further through comparison with rhetoric surrounding the Internet of Things (IoT). By putting the experiences of craftspeople in conversation with IoT rhetoric, I suggest several policy interventions for understanding connectivity and inter-device operability as material, flexible and respectful of human agency.
Object-Oriented Publics
Social computing, or computing in a social context, has largely concerned itself with understanding social interaction among and between people. This paper asserts that ignoring material components, including computing itself, as social actors is a mistake. Computing has its own agenda and agencies, and including it as a member of the social milieu provides a means of producing design objects that attend to how technology use can extend beyond merely amplifying or augmenting human actions. In this paper, we offer examples of projects that utilize the capacity of object-oriented publics to both analyze the conditions and consequences around existing publics and engage with matters of concern inherent to emerging publics. Considering how computing as an actor contributes to the construction of publics provides insight into the design of computational systems that address issues. We end by introducing the idea of the object ecology as a way to coordinate design approaches to computational publics.
Repurposing Bits and Pieces of the Digital
Repurposing refers to a broad set of practices, such as recycling or upcycling, all aiming to make better use of or give new life to physical materials and artifacts. While these practices have an obvious interest regarding sustainability issues, they also bring about unique aesthetics and values that may inspire design beyond sustainability concerns. What if we can harness these qualities in digital materials? We introduce Delete by Haiku, an application that transforms old mobile text messages into haiku poems. We elaborate on how the principles of repurposing — working on a low budget, introducing chance and combining the original values with the new ones — can inform interaction design in evoking some of these aesthetic values. This approach changes our views on what constitutes “digital materials” and the opportunities they offer. We also connect recent debates concerning ownership of data with discussions in the arts on the “Death of the Author.”
SESSION: Thinking Critically
Five Provocations for Ethical HCI Research
We present five provocations for ethics, and ethical research, in HCI. We discuss, in turn, informed consent, the researcher-participant power differential, presentation of data in publications, the role of ethical review boards, and, lastly, corporate-facilitated projects. By pointing to unintended consequences of regulation and oversimplifications of unresolvable moral conflicts, we propose these provocations not as guidelines or recommendations but as instruments for challenging our views on what it means to do ethical research in HCI. We then suggest an alternative grounded in the sensitivities of those being studied and based on everyday practice and judgement, rather than one driven by bureaucratic, legal, or philosophical concerns. In conclusion, we call for a wider and more practical discussion on ethics within the community, and suggest that we should be more supportive of low-risk ethical experimentation to further the field.
Acting with Technology: Rehearsing for Mixed-Media Live Performances
Digital technologies provide theater with new possibilities for combining traditional stage-based performances with interactive artifacts, for streaming remote parallel performances, and for other device-facilitated audience interaction. Compared to traditional theater, mixed-media performances require a different type of engagement from the actors, and rehearsing is challenging, as it can be impossible to rehearse with all the functional technology and interaction. Here, we report experiences from a case study of two mixed-media performances; we studied the rehearsal practices of two actors who were performing in two different plays. We describe how the actors practiced presence during rehearsal in a play where they would be geographically remote, and we describe the challenges of rehearsing with several remote and interactive elements. Our study informs the broader aims of interactive and mixed-media performances by addressing critical factors of integrating technology into rehearsal practices.
SESSION: Prototyping for Fabrication, 3D Designing, Modelling & Printing
What you Sculpt is What you Get: Modeling Physical Interactive Devices with Clay and 3D Printed Widgets
We present a method for fabricating prototypes of interactive computing devices from clay sculptures without requiring the designer to be skilled in CAD software. The method creates a “what you sculpt is what you get” process that mimics the “what you see is what you get” processes used in interface design for 2D screens. Our approach uses clay for modeling the basic shape of the device around 3D printed representations, which we call “blanks”, of physical interaction widgets such as buttons, sliders, knobs and other electronics. Each blank includes 4 fiducial markers uniquely arranged on a visible surface. After scanning the sculpture, these fiducial marks allow our software to identify widget types and locations in the scanned model. The software then converts the scan into a printable prototype by positioning mounting surfaces, openings for the controls and a splitting plane for assembly. Because the blanks fit in the sculpted shape, they will reliably fit in the interactive prototype. Creating an interactive prototype requires about 30 minutes of human effort for sculpting and, after scanning, a single button click.
On-The-Fly Print: Incremental Printing While Modelling
Current interactive fabrication tools offer tangible feedback by allowing users to work directly on the physical model, but they are slow because users need to participate in the physical instantiation of their designs. In contrast, CAD software offers powerful tools for 3D modeling but delays access to the physical workpiece until the end of the design process. In this paper we propose On-the-Fly Print: a 3D modeling approach that allows the user to design 3D models digitally while having a low-fidelity physical wireframe model printed in parallel. Our software starts printing features as soon as they are created and updates the physical model as needed. Users can quickly check the design in a real usage context by removing the partial physical print from the printer and replacing it afterwards to continue printing. When the digital model is modified, the corresponding physical parts can be quickly corrected using a retractable cutting blade. We present a detailed description of On-the-Fly Print and showcase several examples designed and printed with our system.
CardBoardiZer: Creatively Customize, Articulate and Fold 3D Mesh Models
Computer-aided design of flat patterns allows designers to prototype foldable 3D objects made of heterogeneous sheets of material. We found that origami designs are often characterized by pre-synthesized patterns and automated algorithms. Furthermore, adding articulated features to a desired model requires time-consuming synthesis of interconnected joints. This paper presents CardBoardiZer, a rapid cardboard-based prototyping platform that allows everyday sculptural 3D models to be easily customized, articulated and folded. We develop a building platform to allow the designer to 1) import a desired 3D shape, 2) customize articulated partitions into planar or volumetric foldable patterns, and 3) define rotational movements between partitions. The system unfolds the model into 2D crease-cut-slot patterns ready for die-cutting and folding. We developed interactive algorithms and validated the usability of CardBoardiZer using various 3D models. Furthermore, comparisons between CardBoardiZer and Autodesk® 123D Make demonstrated significantly shorter time-to-prototype and greater ease of fabrication.
ChronoFab: Fabricating Motion
We present ChronoFab, a 3D modeling tool to craft motion sculptures, tangible representations of 3D animated models, visualizing an object’s motion with static, transient, ephemeral visuals that are left behind. Our tool casts 3D modeling as a dynamic art-form by employing 3D animation and dynamic simulation for the modeling of motion sculptures. Our work is inspired by the rich history of stylized motion depiction techniques in existing 3D motion sculptures and 2D comic art. Based on a survey of such techniques, we present an interface that enables users to rapidly explore and craft a variety of static 3D motion depiction techniques, including motion lines, multiple stroboscopic stamps, sweeps and particle systems, using a 3D animated object as input. In a set of professional and non-professional usage sessions, ChronoFab was found to be a superior tool for the authoring of motion sculptures, compared to traditional 3D modeling workflows, reducing task completion times by 79%.
SESSION: Learning @ School
Lessons Learned from In-School Use of rTAG: A Robo-Tangible Learning Environment
As technology is increasingly integrated into the classroom, understanding the facilitators and barriers for deployment becomes an important part of the process. While systems that employ traditional WIMP-based interfaces have a well-established body of work describing their integration into classroom environments, more novel technologies generally lack such a foundation to guide their advancement. In this paper we present Robo-Tangible Activities for Geometry (rTAG), a tangible learning environment that utilizes a teachable agent framing, together with a physical robotic agent. We describe its deployment in a school environment, qualitatively analyzing how teachers chose to orchestrate its use, the value they saw in it, and the barriers they faced while organizing the sessions with their students. Based on this analysis, we extract four recommendations that aid in designing and deploying systems that make use of affordances that are similar to those of the rTAG system.
Human Proxies for Remote University Classroom Attendance
Our research explores the idea of using a human proxy to attend a class on one’s behalf, where video streaming is used to share the class with the remote student. We explored this idea through an online survey and in-class participation. Survey results show that people favored “top students” to represent them, whereas gender and race played a much less important role. Students also highly valued a proxy who was also taking the class so they could discuss the course material. In class, students found the setup beneficial and highly valued the pairwise learning that it afforded. Despite this, proxies found it difficult to concentrate in class and to be a surrogate for someone else at the same time. Together, our results highlight the benefits and challenges of human proxies for classroom attendance and raise a series of design sensitivities that should be explored as part of future research.
Ingenium: Engaging Novice Students with Latin Grammar
Reading Latin poses many difficulties for English speakers, because they are accustomed to relying on word order to determine the roles of words in a sentence. In Latin, the grammatical form of a word, and not its position, determines the word’s function in a sentence. It has proven challenging to develop pedagogical techniques that successfully draw students’ attention to the grammar of Latin and that students find engaging enough to use. Building on some of the most promising prior work in Latin instruction, the Michigan Latin approach, and on the insights underlying block-based programming languages used to teach children the basics of computer science, we developed Ingenium. Ingenium uses abstract puzzle blocks to communicate grammatical concepts. By engaging students in grammatical reflection, Ingenium helps them effectively decipher the meaning of Latin sentences. We adapted Ingenium for two standard classroom activities: sentence translations and fill-in-the-blank exercises. We evaluated Ingenium with 67 novice Latin students in universities across the USA. When using Ingenium, participants opted to perform more optional exercises, completed translation exercises with significantly fewer word-order errors and fewer errors overall, and reported higher levels of engagement and attention to grammar than when using a traditional text-based interface.
SESSION: Learning Facilitation
Social Situational Language Learning through an Online 3D Game
Learning a second language is challenging. Becoming fluent requires learning contextual information about how language should be used, as well as word meanings and grammar. The majority of existing language learning applications provide only thin context around content. In this paper, we present work on Crystallize, a language learning game that combines traditional learning approaches with a situated learning paradigm by integrating a spaced-repetition system within a language learning role-playing game. To facilitate long-term engagement with the game, we added a new quest paradigm, “jobs,” that allows a small amount of design effort to generate a large set of highly scaffolded tasks that grow iteratively. A large-scale evaluation of the game “in the wild” with a diverse set of 186 people revealed that the game not only engaged players for extended amounts of time, but that players learned an average of 8.7 words in an average of 40.5 minutes.
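The abstract names a spaced-repetition system but does not specify its scheduling algorithm. One common, minimal variant is the Leitner box scheme, in which a correctly recalled item is reviewed at exponentially growing intervals and a miss resets it. The sketch below is my own illustration of that generic scheme, not Crystallize’s implementation:

```python
from dataclasses import dataclass

# Leitner boxes: a card in box i comes up for review every 2**i sessions.
# A correct answer promotes the card one box; a miss sends it back to box 0.

@dataclass
class Card:
    word: str
    box: int = 0

def due(cards, session):
    """Cards scheduled for review in this session."""
    return [c for c in cards if session % (2 ** c.box) == 0]

def review(card, correct):
    """Update a card's box after the learner answers."""
    card.box = card.box + 1 if correct else 0
```

In a game context, `due` would be queried each time a quest is generated, so vocabulary the player struggles with resurfaces sooner than well-known words.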
Using Gamification to Motivate Students with Dyslexia
The concept of gamification is receiving increasing attention, particularly for its potential to motivate students. However, to date the majority of studies in the context of education have predominantly focused on University students. This paper explores how gamification could potentially benefit a specific student population, children with dyslexia who are transitioning from primary to secondary school. Two teachers from specialist dyslexia teaching centres used classDojo, a gamification platform, during their teaching sessions for one term. We detail how the teachers appropriated the platform in different ways and how the students discussed classDojo in terms of motivation. These findings have subsequently informed a set of provisional implications for gamification distilling opportunities for future pedagogical uses, gamification design for special education and methodological approaches to how gamification is studied.
Local Standards for Sample Size at CHI
We describe the primary ways researchers can determine the size of a sample of research participants, present the benefits and drawbacks of each of those methods, and focus on improving one method that could be useful to the CHI community: local standards. To determine local standards for sample size within the CHI community, we conducted an analysis of all manuscripts published at CHI 2014. We find that sample size for manuscripts published at CHI ranges from 1 to 916,000 and the most common sample size is 12. We also find that sample size differs based on factors such as study setting and type of methodology employed. The outcome of this paper is an overview of the various ways sample size may be determined and an analysis of local standards for sample size within the CHI community. These contributions may be useful to researchers planning studies and reviewers evaluating the validity of results.
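Determining a local standard amounts to descriptive statistics over the sample sizes extracted from a corpus of papers. A toy sketch with made-up numbers (illustrative only, not the paper’s CHI 2014 data):

```python
from statistics import median, mode

# Hypothetical sample sizes extracted from a batch of papers
sample_sizes = [12, 12, 24, 8, 12, 120, 16, 12, 30, 6]

most_common = mode(sample_sizes)                  # the local "modal" standard
typical = median(sample_sizes)                    # robust to huge outliers
spread = (min(sample_sizes), max(sample_sizes))   # full observed range
print(most_common, typical, spread)
```

The median and mode matter here because a single very large study (e.g., a log analysis with hundreds of thousands of users) would dominate the mean.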
A Comparative Evaluation on Online Learning Approaches using Parallel Coordinate Visualization
As visualizations are increasingly used as a storytelling medium for the general public, it becomes important to help people learn how to understand visualizations. Prior studies indicate that interactive multimedia learning environments can increase the effectiveness of learning [11]. To investigate the efficacy of multimedia learning environments for data visualization education, we compared four online learning approaches: 1) baseline (i.e., no tutorial), 2) static tutorial, 3) video tutorial, and 4) interactive tutorial, through a crowdsourced user study. We measured participants’ learning outcomes in using parallel coordinates with 18 tasks. Results show that participants in the interactive condition achieved higher scores than those in the static and baseline conditions, and reported a more engaging experience than those in the static condition.
SESSION: Paying Attention to Smartphones
Lock n’ LoL: Group-based Limiting Assistance App to Mitigate Smartphone Distractions in Group Activities
Prior studies have addressed many negative aspects of mobile distractions in group activities. In this paper, we present Lock n’ LoL, an application designed to help users focus on their group activities by allowing group members to limit their smartphone usage together. In particular, it provides synchronous social awareness of each other’s limiting behavior. This synchronous social awareness can arouse feelings of connectedness among group members and can mitigate the social vulnerability caused by smartphone distraction (e.g., social exclusion) that often results in poor social experiences. Following an iterative prototyping process, we conducted a large-scale user study (n = 976) via a real-world field deployment. The study results revealed how the participants used Lock n’ LoL in their diverse contexts and how it helped them mitigate smartphone distractions.
“Silence Your Phones”: Smartphone Notifications Increase Inattention and Hyperactivity Symptoms
As smartphones increasingly pervade our daily lives, people are ever more interrupted by alerts and notifications. Using both correlational and experimental methods, we explored whether such interruptions might be causing inattention and hyperactivity, symptoms associated with Attention Deficit Hyperactivity Disorder (ADHD), even in people not clinically diagnosed with ADHD. We recruited a sample of 221 participants from the general population. For one week, participants were assigned to maximize phone interruptions by keeping notification alerts on and their phones within their reach/sight. During another week, participants were assigned to minimize phone interruptions by keeping alerts off and their phones away. Participants reported higher levels of inattention and hyperactivity when alerts were on than when alerts were off. Higher levels of inattention in turn predicted lower productivity and psychological well-being. These findings highlight some of the costs of ubiquitous connectivity and suggest how people can reduce these costs simply by adjusting existing phone settings.
My Phone and Me: Understanding People’s Receptivity to Mobile Notifications
Notifications are extremely beneficial to users, but they often demand their attention at inappropriate moments. In this paper we present an in-situ study of mobile interruptibility focusing on the effect of cognitive and physical factors on the response time and the disruption perceived from a notification. Through a mixed method of automated smartphone logging and experience sampling we collected 10,372 in-the-wild notifications and 474 questionnaire responses on notification perception from 20 users. We found that the response time and the perceived disruption from a notification can be influenced by its presentation, alert type, sender-recipient relationship as well as the type, completion level and complexity of the task in which the user is engaged. We found that even a notification that contains important or useful content can cause disruption. Finally, we observe the substantial role of the psychological traits of the individuals on the response time and the disruption perceived from a notification.
SESSION: Interaction Design for Audio Interfaces
Voices from the War: Design as a Means of Understanding the Experience of Visiting Heritage
We use design research to explore ways in which tangible and embodied interaction can be used to create novel experiences of heritage. We identified five design principles and used them to frame the challenge. In collaboration with curators, we co-created an interactive multi-narrative soundscape for the remains of trenches and a fortified camp from World War I. The soundscape is activated by presence and the use of a bespoke device. The design intertwines technology and historical content in context to augment the visitors’ experience of the place in an evocative, personal way. The field trial showed that experimenting with different forms is key, as they have an impact on visitors’ expectations beyond what they experience directly. It also showed the value in simultaneously designing interaction and content to achieve an effect that goes beyond the contribution of the single components.
Simplified Audio Production in Asynchronous Voice-Based Discussions
Voice communication adds nuance and expressivity to virtual discussions, but its one-shot nature tends to discourage collaborators from utilizing it. However, text-based interfaces have made voice editing much easier, especially with recent advancements enabling live, time-aligned speech transcription. We introduce SimpleSpeech, an easy-to-use platform for asynchronous audio communication (AAC) with lightweight tools for inserting content, adjusting pauses, and correcting transcript errors. Qualitative and quantitative results suggest that novice audio producers, such as high school students, experience lower mental workload when producing audio messages with SimpleSpeech's editing tools than without them. We also studied the linguistic formality of SimpleSpeech messages and found that it occupies a middle ground between oral and written media. Our findings on editable voice messages show new implications for the optimal design and use cases of AAC systems.
Tap the ShapeTones: Exploring the Effects of Crossmodal Congruence in an Audio-Visual Interface
There is growing interest in the application of crossmodal perception to interface design. However, most research has focused on task performance measures and often ignored user experience and engagement. We present an examination of crossmodal congruence in terms of performance and engagement in the context of a memory task of audio, visual, and audio-visual stimuli. Participants in a first study showed improved performance when using a visual congruent mapping that was cancelled by the addition of audio to the baseline conditions, and a subjective preference for the audio-visual stimulus that was not reflected in the objective data. Based on these findings, we designed an audio-visual memory game to examine the effects of crossmodal congruence on user experience and engagement. Results showed higher engagement levels with congruent displays with some reported preference for potential challenge and enjoyment that an incongruent display may support, particularly for increased task complexity.
Maps and Location: Acceptance of Modern Interaction Techniques for Audio Guides
Traditional audio guides in museums and similar spaces typically require the visitor to locate a track number at each exhibit and enter it on a keypad. These guides, however, provide no information on the amount of content available. Current mobile devices provide rich output capabilities, and indoor location tracking technology can simplify the selection of content in modern audio guides. In this paper, we compare the keypad-based interface to a map-based interface with and without automatic localization. Through a field study in a local museum with 84 participants, we found that the usability of all versions is rated high, with the keypad interface coming out ahead. Nevertheless, visitors favored the overview of the map and thumbnails to find the right exhibit, while numbers were considered helpful indicators in the real world. Those who used the self-localizing guide preferred it over manually adjusting the map.
SESSION: Living Healthy
Staying the Course: System-Driven Lapse Management for Supporting Behavior Change
The negative effect of lapses during a behavior-change program has been shown to increase the risk of repeated lapses and, ultimately, program abandonment. In this paper, we examine the potential of system-driven lapse management — supporting users through lapses as part of a behavior-change tool. We first review lessons from domains such as dieting and addiction research and discuss the design space of lapse management. We then explore the value of one approach to lapse management — the use of “cheat points” — as a way to encourage sustained participation. In an online study, we first examine interpretations of progress that was reached through using cheat points. We then present findings from a deployment of lapse management in a two-week field study with 30 participants. Our results demonstrate the potential of this approach to motivate and change users’ behavior. We discuss important open questions for the design of future technology-mediated behavior change programs.
Designing for Future Behaviors: Understanding the Effect of Temporal Distance on Planned Behaviors
Despite the prevalence of theories and interventions related to behavior change, our knowledge on how intention for a target, or planned behavior, changes over time remains limited. This hinders our ability to consider the temporal aspect in our designs to support behavior change. To understand the effect of temporal distances on planned behaviors, we conducted two studies, building on the Theory of Planned Behavior and Construal Level Theory. We found that attitude about the target is more salient the further away the event, as people focus on the why of a behavior. On the other hand, perceived behavior control can influence intention in both the near and far future. When the target is in the near future, people generally focus on the feasibility, or the how of the behavior. In the far future, people may also consider factors related to behavior control, if they are motivated to do so (i.e., hold a strong attitude towards the action). Findings help advance the Theory of Planned Behavior and offer strategies for designers and event organizers to motivate planned behaviors that are in the near and far future.
ClimbAware: Investigating Perception and Acceptance of Wearables in Rock Climbing
Wearable sports devices like GPS watches and heart rate monitors are ubiquitous in sports like running or road cycling and enable the users to receive real-time performance feedback. Although rock climbing is a trending sport, there are few to no consumer electronics available to support rock climbing training during exercise. In this paper, we investigated the acceptance and appropriateness of wearables in climbing on different body parts. Based on an online survey with 54 climbers, we designed a wearable device and conducted a perception study with 12 participants in a climbing gym. Using vibro-tactile, audible, and visual cues while climbing an easy route and a hard route, requiring high physical and cognitive load, we found that the most suited notification channel is sound, directly followed by vibro-tactile output. Light was found to be inappropriate for use in the sport of climbing.
Beyond Abandonment to Next Steps: Understanding and Designing for Life after Personal Informatics Tool Use
Recent research examines how and why people abandon self-tracking tools. We extend this work with new insights drawn from people reflecting on their experiences after they stop tracking, examining how designs continue to influence people even after abandonment. We further contrast prior work considering abandonment of health and wellness tracking tools with an exploration of why people abandon financial and location tracking tools, and we connect our findings to models of personal informatics. Surveying 193 people and interviewing 12 people, we identify six reasons why people stop tracking and five perspectives on life after tracking. We discuss these results and opportunities for design to consider life after self-tracking.
SESSION: Designing Quality in Social Media
Supporting Comment Moderators in Identifying High Quality Online News Comments
Online comments submitted by readers of news articles can provide valuable feedback and critique, personal views and perspectives, and opportunities for discussion. The varying quality of these comments necessitates that publishers remove the low quality ones, but there is also a growing awareness that identifying and highlighting high quality contributions can promote the general quality of the community. In this paper we take a user-centered design approach towards developing a system, CommentIQ, which supports comment moderators in interactively identifying high quality comments using a combination of comment analytic scores as well as visualizations and flexible UI components. We evaluated this system with professional comment moderators working at local and national news outlets and provide insights into the utility and appropriateness of features for journalistic tasks, as well as how the system may enable or transform journalistic practices around online comments.
“Popcorn Tastes Good”: Participatory Policymaking and Reddit’s “AMAgeddon”
In human-computer interaction research and practice, policy concerns can sometimes fall to the margins, orbiting at the periphery of the traditionally core interests of design and practice. This perspective ignores the important ways that policy is bound up with the technical and behavioral elements of the HCI universe. Policy concerns are triggered as a matter of course in social computing, CSCW, systems engineering, UX, and related contexts because technological design, social practice and policy are dynamically entangled and mutually constitutive. Through this research, we demonstrate the value of a stronger emphasis on policy in HCI by exploring a recent controversy on Reddit: “AMAgeddon.” Applying Hirschman’s exit, voice and loyalty framework, we argue that the sustainability of online communities like Reddit will require successful navigation of the complex and often murky intersections among technical design and human interaction through a distributed participatory policymaking process that promotes user loyalty.
Going Dark: Social Factors in Collective Action Against Platform Operators in the Reddit Blackout
This paper describes how people who lead communities on online platforms join together in mass collective action to influence platform operators. I investigate this by analyzing a protest against the social news platform reddit by moderators of 2,278 subreddit communities in July 2015. These moderators collectively disabled their subreddits, preventing millions of readers from accessing major parts of reddit and convincing the company to negotiate over their demands. This paper offers a descriptive analysis of the protest, combining qualitative content analysis, interviews, and quantitative analysis with the population of 52,735 active subreddits. Through participatory hypothesis testing with moderators, this study reveals social factors including the grievances of moderators, relations with platform operators, relations among moderators, subreddit resources, subreddit isolation, and moderators’ relations with their subreddits that can lead to participation in mass collective action against a platform.
Surviving an “Eternal September”: How an Online Community Managed a Surge of Newcomers
We present a qualitative analysis of interviews with participants in the NoSleep community within Reddit where millions of fans and writers of horror fiction congregate. We explore how the community handled a massive, sudden, and sustained increase in new members. Although existing theory and stories like Usenet’s infamous “Eternal September” suggest that large influxes of newcomers can hurt online communities, our interviews suggest that NoSleep survived without major incident. We propose that three features of NoSleep allowed it to manage the rapid influx of newcomers gracefully: (1) an active and well-coordinated group of administrators, (2) a shared sense of community which facilitated community moderation, and (3) technological systems that mitigated norm violations. We also point to several important trade-offs and limitations.
“This Post Will Just Get Taken Down”: Characterizing Removed Pro-Eating Disorder Social Media Content
Social media sites like Facebook and Instagram remove content that is against community guidelines or is perceived to be deviant behavior. Users also delete their own content that they feel is not appropriate within personal or community norms. In this paper, we examine characteristics of over 30,000 pro-eating disorder (pro-ED) posts that were at one point public on Instagram but have since been removed. Our work shows that straightforward signals can be found in deleted content that distinguish it from other posts, and that such classification has significant implications. We build a classifier that compares public pro-ED posts with this removed content and achieves moderate accuracy of 69%. We also analyze the characteristics of content in each of these post categories and find that removed content reflects more dangerous actions, self-harm tendencies, and vulnerability than posts that remain public. Our work provides early insights into content removal in a sensitive community and addresses the future research implications of the findings.
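The abstract above describes classifying removed versus still-public posts from textual signals. As a rough illustration only (this is not the authors' 69%-accuracy model, and the vocabulary and labels below are invented), such a classifier can be sketched as a minimal bag-of-words naive Bayes:

```python
import math
from collections import Counter

class NaiveBayes:
    """Minimal bag-of-words naive Bayes text classifier (illustrative sketch)."""

    def __init__(self):
        self.word_counts = {}        # label -> Counter of word occurrences
        self.doc_counts = Counter()  # label -> number of training documents
        self.vocab = set()

    def train(self, text, label):
        words = text.lower().split()
        self.word_counts.setdefault(label, Counter()).update(words)
        self.doc_counts[label] += 1
        self.vocab.update(words)

    def predict(self, text):
        total_docs = sum(self.doc_counts.values())
        best, best_score = None, -math.inf
        for label, counts in self.word_counts.items():
            # log prior plus summed log likelihoods with add-one smoothing
            score = math.log(self.doc_counts[label] / total_docs)
            denom = sum(counts.values()) + len(self.vocab)
            for w in text.lower().split():
                score += math.log((counts[w] + 1) / denom)
            if score > best_score:
                best, best_score = label, score
        return best
```

A production system would of course need richer features (hashtags, engagement signals) and careful evaluation; this sketch only shows the general shape of a two-class text classifier.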
SESSION: Physical and Digital Collections
Accountable Artefacts: The Case of the Carolan Guitar
We explore how physical artefacts can be connected to digital records of where they have been, who they have encountered and what has happened to them, and how this can enhance their meaning and utility. We describe how a travelling technology probe in the form of an augmented acoustic guitar engaged users in a design conversation as it visited homes, studios, gigs, workshops and lessons, and how this revealed the diversity and utility of its digital record. We describe how this record was captured and flexibly mapped to the physical guitar and proxy artefacts. We contribute a conceptual framework for accountable artefacts that articulates how multiple and complex mappings between physical artefacts and their digital records may be created, appropriated, shared and interrogated to deliver accounts of provenance and use as well as methodological reflections on technology probes.
Things We Own Together: Sharing Possessions at Home
Sharing is an important facet of human relationships, yet there is a lack of research on how people share ownership of possessions. This paper reports on a study that investigates shared ownership of physical and digital possessions through interviews with couples and families in 13 households. We offer a more nuanced definition of shared ownership and show that certain practices, which are central to sharing physical objects, are not supported in the sharing of digital content. We suggest potential approaches to address this, focusing in particular on how the sharing of possessions plays a role in the building of relationships and is done against a backdrop of trust.
Mailing Archived Emails as Postcards: Probing the Value of Virtual Collections
People accumulate huge assortments of virtual possessions, but it is not yet clear how systems and system designers can help people make meaning from these large archives. Early research in HCI has suggested that people generally appear to value their virtual things less than their material things, but theory on material possessions does not entirely explain this difference. To investigate if changes to the form and behavior of virtual things may surface valued elements of a virtual archive, we designed a technology probe that selected snippets from old emails and mailed them as physical postcards to participating households. The probe uncovered features of emails that trigger meaningful reflection, and how contextual information can help people engage in reminiscence. Our study revealed insights about how materializing virtual possessions influences factors shaping how people draw on, understand, and value those possessions. We conclude with implications and strategies aimed at supporting people in having more meaningful interactions and experiences with their virtual possessions.
Finding Email in a Multi-Account, Multi-Device World
Email is far from dead; in fact the volume of messages exchanged daily, the number of accounts per user, and the number of devices on which email is accessed have been constantly growing. Most previous studies on email have focused on management and retrieval behaviour within a single account and on a single device. In this paper, we examine how people find email in today’s ecosystem through an in-depth qualitative diary study with 16 participants. We found that personal and work accounts are managed differently, resulting in diverse retrieval strategies: while work accounts are more structured and thus email is retrieved through folders, personal accounts have fewer folders and users rely primarily on the built-in search option. Moreover, retrieval occurs primarily on laptops and PCs compared to smartphones. We explore the reasons, and uncover barriers and workarounds related to managing multiple accounts and devices. Finally, we consider new design possibilities for email clients to better support how email is used today.
SESSION: Augmented AR and VR Experiences
Novel Optical Configurations for Virtual Reality: Evaluating User Preference and Performance with Focus-tunable and Monovision Near-eye Displays
Emerging virtual reality (VR) displays must overcome the prevalent issue of visual discomfort to provide high-quality and immersive user experiences. In particular, the mismatch between vergence and accommodation cues inherent to most stereoscopic displays has been a long-standing challenge. In this paper, we evaluate several adaptive display modes afforded by focus-tunable optics or actuated displays that have the promise to mitigate visual discomfort caused by the vergence-accommodation conflict, and improve performance in VR environments. We also explore monovision as an unconventional mode that allows each eye of an observer to accommodate to a different distance. While this technique is common practice in ophthalmology, we are the first to report its effectiveness for VR applications with a custom-built setup. We demonstrate that monovision and other focus-tunable display modes can provide better user experiences and improve user performance in terms of reaction times and accuracy, particularly for nearby simulated distances in VR.
Augmenting the Field-of-View of Head-Mounted Displays with Sparse Peripheral Displays
In this paper, we explore the concept of a sparse peripheral display, which augments the field-of-view of a head-mounted display with a lightweight, low-resolution, inexpensively produced array of LEDs surrounding the central high-resolution display. We show that sparse peripheral displays expand the available field-of-view up to 190° horizontal, nearly filling the human field-of-view. We prototyped two proof-of-concept implementations of sparse peripheral displays: a virtual reality headset, dubbed SparseLightVR, and an augmented reality headset, called SparseLightAR. Using SparseLightVR, we conducted a user study to evaluate the utility of our implementation, and a second user study to assess different visualization schemes in the periphery and their effect on simulator sickness. Our findings show that sparse peripheral displays are useful in conveying peripheral information and improving situational awareness, are generally preferred, and can help reduce motion sickness in nausea-susceptible people.
SnapToReality: Aligning Augmented Reality to the Real World
Augmented Reality (AR) applications may require the precise alignment of virtual objects to the real world. We propose automatic alignment of virtual objects to physical constraints calculated from the real world in real time (“snapping to reality”). We demonstrate SnapToReality alignment techniques that allow users to position, rotate, and scale virtual content to dynamic, real world scenes. Our proof-of-concept prototype extracts 3D edge and planar surface constraints. We furthermore discuss the unique design challenges of snapping in AR, including the user’s limited field of view, noise in constraint extraction, issues with changing the view in AR, visualizing constraints, and more. We also report the results of a user study evaluating SnapToReality, confirming that aligning objects to the real world is significantly faster when assisted by snapping to dynamically extracted constraints. Perhaps more importantly, we also found that snapping in AR enables a fresh and expressive form of AR content creation.
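The SnapToReality abstract above describes snapping virtual content to planar-surface constraints extracted from the real world. As a simplified geometric sketch (an assumption for illustration, not the paper's constraint solver), a candidate position can be projected onto each detected plane and snapped when its distance falls under a threshold:

```python
def snap_to_planes(point, planes, threshold=0.05):
    """Snap a 3D point to the nearest extracted plane within `threshold`.

    Each plane is ((nx, ny, nz), d) with a unit normal and offset such
    that n . p = d for points on the plane. Returns the snapped point,
    or the original point if no plane is close enough. The threshold
    value (in metres) is an illustrative assumption.
    """
    best_dist, best_point = threshold, None
    for (nx, ny, nz), d in planes:
        # signed distance from the point to the plane
        dist = nx * point[0] + ny * point[1] + nz * point[2] - d
        if abs(dist) < best_dist:
            best_dist = abs(dist)
            # move the point along the plane normal onto the plane
            best_point = (point[0] - dist * nx,
                          point[1] - dist * ny,
                          point[2] - dist * nz)
    return best_point if best_point is not None else point
```

A real AR system would also handle edge constraints, rotation and scale snapping, and the noise in constraint extraction that the abstract highlights.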
Virtual Objects as Spatial Cues in Collaborative Mixed Reality Environments: How They Shape Communication Behavior and User Task Load
In collaborative activities, collaborators can use physical objects in their shared environment as spatial cues to guide each other’s attention. Collaborative mixed reality environments (MREs) include both physical and digital objects. To study how virtual objects influence collaboration and whether they are used as spatial cues, we conducted a controlled lab experiment with 16 dyads. Results of our study show that collaborators favored the digital objects as spatial cues over the physical environment and the physical objects: Collaborators used significantly fewer deictic gestures in favor of more disambiguous verbal references and showed a decreased subjective workload when virtual objects were present. This suggests adding additional virtual objects as spatial cues to MREs to improve user experience during collaborative mixed reality tasks.
VR-STEP: Walking-in-Place using Inertial Sensing for Hands Free Navigation in Mobile VR Environments
Low-cost smartphone adapters can bring virtual reality to the masses, but input is typically limited to using head tracking, which makes it difficult to perform complex tasks like navigation. Walking-in-place (WIP) offers a natural and immersive form of virtual locomotion that can reduce simulation sickness. WIP, however, is difficult to implement in mobile contexts as it typically relies on bulky controllers or an external camera. We present VR-STEP, a WIP implementation that uses real-time pedometry to implement virtual locomotion. VR-STEP requires no additional instrumentation outside of a smartphone’s inertial sensors. A user study with 18 users compares VR-STEP with a commonly used auto-walk navigation method and finds no significant difference in performance or reliability, though VR-STEP was found to be more immersive and intuitive.
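The real-time pedometry that VR-STEP builds on can be illustrated with a minimal step detector (a sketch under assumed values, not the authors' implementation): count rising-edge crossings of the accelerometer magnitude over a threshold, with a debounce window between steps.

```python
import math

def count_steps(samples, threshold=11.0, min_gap=10):
    """Count steps in a stream of (ax, ay, az) accelerometer samples.

    A step is registered when the acceleration magnitude crosses
    `threshold` (m/s^2) on a rising edge, with at least `min_gap`
    samples between steps to debounce. Both values are illustrative
    assumptions; a real detector would tune them per user and device.
    """
    steps = 0
    last_step = -min_gap
    above = False
    for i, (ax, ay, az) in enumerate(samples):
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        if mag > threshold and not above and i - last_step >= min_gap:
            steps += 1
            last_step = i
        above = mag > threshold
    return steps
```

In a WIP system, each detected step would then drive a short burst of forward motion in the virtual scene rather than simply incrementing a counter.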
SESSION: Technological Care for Autism
“Will I always be not social?”: Re-Conceptualizing Sociality in the Context of a Minecraft Community for Autism
Traditional face-to-face social interactions can be challenging for individuals with autism, leading some to perceive and categorize them as less social than their typically-developing peers. Individuals with autism may even see themselves as less social relative to their peers. Online communities can provide an alternative venue for social expression, enabling different types of communication beyond face-to-face, oral interaction. Using ethnographic methods, we studied the communication ecology that has emerged around a Minecraft server for children with autism and their allies. Our analysis shows how members of this community search for, practice, and define sociality through a variety of communication channels. These findings suggest an expansion in how sociality has traditionally been conceptualized for individuals with autism.
Anxiety and Autism: Towards Personalized Digital Health
For many people living with conditions such as autism, anxiety manifests so powerfully that it has a major impact on quality of life. By investigating the suitability of truly customizable wearable health devices, we build on prior research that found each experience of anxiety in people with autism is unique, so ‘one-size-fits-all’ solutions are not suitable. In addition, users desire agency and control in all aspects of the system. The participative approach we take is to iteratively co-develop prototypes with end users. Here we describe a case study of the co-development of one prototype, Snap, a digital stretch wristband that records interactions for later reflection. Snap has been designed to sit within a platform that allows the distributed and sustainable design, manufacture and data analysis of customizable digital health technologies. We contribute to HCI with (1) lessons learned from a DIY co-development process that follows the principles of modularity, participation and iteration and (2) the potential impact of technology in self-management of anxiety and the broader design implications of addressing unique anxiety experiences.
EnhancedTouch: A Smart Bracelet for Enhancing Human-Human Physical Touch
We present EnhancedTouch, a novel bracelet-type wearable device for facilitating human-human physical touch. In particular, we aim to support children with autism spectrum disorder (ASD), who often exhibit particular communication patterns, such as lack of physical touch. EnhancedTouch is a unique device that can measure human-human touch events and provide visual feedback to augment touch interaction. We employ personal area network (PAN) technology for communication with partner devices via modulated electrical current flowing through the users’ hands. Our user study shows that the visual feedback provided by the developed bracelet motivates children with ASD to touch one another. Moreover, EnhancedTouch offers a function to record the time and duration of a touch event as well as the identity of the touched person. This allows us to identify and evaluate intervention based on physical touch in a quantitative manner.
“This is how I want to learn”: High Functioning Autistic Teens Co-Designing a Serious Game
This paper presents a project that developed a Serious Game with a Natural User Interface, via a Participatory Design approach with two adolescents with High-Functioning Autism (HFA). The project took place in a highly specialized school for young people with Special Educational Needs (SEN). The teenagers were empowered by assigning them specific roles across several sessions. They could express their voice as user, informant, designer and tester. As a result, teachers and young people developed a digital educational game based on their experience as video gamers to improve academic skills in Geography. This paper contributes by describing the sensitive and flexible approach to the design process which promoted stakeholders’ participation.
SESSION: Sustainability, Design and Environmental Sensibilities
Challenging the Car Norm: Opportunities for ICT to Support Sustainable Transportation Practices
The use of practices as a unit of analysis has been suggested in order to scale up efforts within sustainable HCI and to shift the focus from changing individual behaviours to supporting transitions at a societal level. In this paper, we take a practice approach to the case of sustainable transportation, and more specifically to car-free transportation. Car use is intertwined in many practices and managing life without a car is difficult, particularly for people in contexts where owning at least one car per family is the norm. We studied three families in Stockholm who replaced their cars with different combinations of light electric vehicles during one year. From the families’ experiences, we identified a number of opportunities for designers of interactive technologies to support environmental pioneers in the particular case of car-free living, as well as to support transitions towards sustainable practices in general.
Learning from Green Designers: Green Design as Discursive Practice
This paper looks at the activities of environmentally minded technology designers and provides an account of how these designers think, behave and differ. In contrast to traditional designers, green designers appear to: (1) Bias decisions; (2) Design from a deep, personal ethos; (3) Accept ‘not knowing’ as a part of the design process; (4) Rely on alternative ways of knowing; and (5) Shift roles as needed throughout the design process. While a superficial treatment of these differences might seem to disenfranchise green designers, we show that an analysis of green design as a discursive practice highlights how, through engagement with others, green design can enhance pro-environmental dialog and enact meaningful change.
Understanding and Mitigating the Effects of Device and Cloud Service Design Decisions on the Environmental Footprint of Digital Infrastructure
Interactive devices and the services they support are reliant on the cloud and the digital infrastructure supporting it. The environmental impacts of this infrastructure are substantial and for particular services the infrastructure can account for up to 85% of the total impact. In this paper, we apply the principles of Sustainable Interaction Design to cloud services use of the digital infrastructure. We perform a critical analysis of current design practice with regard to interactive services, which we identify as the cornucopian paradigm. We show how user-centered design principles induce environmental impacts in different ways, and combine with technical and business drivers to drive growth of the infrastructure through a reinforcing feedback cycle. We then create a design rubric, substantially extending that of Blevis [6], to cover impacts of the digital infrastructure. In doing so, we engage in design criticism, identifying examples (both actual and potential) of good and bad practice. We then extend this rubric beyond an eco-efficiency paradigm to consider deeper and more radical perspectives on sustainability, and finish with future directions for exploration.
MyPart: Personal, Portable, Accurate, Airborne Particle Counting
In 2012, air pollution in both cities and rural areas was estimated to have caused 3.7 million premature deaths, 88% of those in at-risk communities. The primary pollutant was small airborne particulate matter of 10 microns or less in diameter, which led to the development of cardiovascular and respiratory diseases. In response, we developed MyPart, the first personal, portable, and accurate particle sensor under $50 capable of distinguishing and counting differently sized particles. We demonstrate how MyPart offers substantial enhancements over most existing air particle sensors by simultaneously improving accessibility, flexibility, portability, and accuracy. We describe the evolution and implementation of the sensor design, demonstrate its performance across twenty everyday urban environments versus a calibrated instrument, and conduct a preliminary user study to report on the overall user experience of MyPart. We also present a novel smart-phone visualization interface and a series of simple form factor adaptations of our design.
SESSION: Authentication and Privacy
Evaluating the Influence of Targets and Hand Postures on Touch-based Behavioural Biometrics
Users’ individual differences in their mobile touch behaviour can help to continuously verify identity and protect personal data. However, little is known about the influence of GUI elements and hand postures on such touch biometrics. Thus, we present a metric to measure the amount of user-revealing information that can be extracted from touch targeting interactions and apply it in eight targeting tasks with over 150,000 touches from 24 users in two sessions. We compare touch-to-target offset patterns for four target types and two hand postures. Our analyses reveal that small, compactly shaped targets near screen edges yield the most descriptive touch targeting patterns. Moreover, our results show that thumb touches are more individual than index finger ones. We conclude that touch-based user identification systems should analyse GUI layouts and infer hand postures. We also describe a framework to estimate the usefulness of GUIs for touch biometrics.
Enhancing Mobile Content Privacy with Proxemics Aware Notifications and Protection
Given the widespread adoption of mobile devices and the private personal and work information they carry, casual or deliberate shoulder surfing is an increasing concern with these devices. We iteratively designed a tablet interface that detects when people nearby are looking at the screen, providing awareness through glyph notifications and response through visual protections, and evaluated its use in two experiments. The results indicate that mobile content privacy management systems such as ours could help alleviate the cognitive and social burden of managing mobile device privacy in dynamic settings. We identify physical privacy behaviours and preferences that can inform the design of privacy notification and management protocols on mobile devices. We argue that such systems require subtlety so as not to advertise the users’ intention for privacy, flexibility in addressing dynamic privacy needs and trustworthiness to promote adoption.
CalendarCast: Setup-Free, Privacy-Preserving, Localized Sharing of Appointment Data
We introduce CalendarCast, a novel method to support the common task of finding a suitable time and date for a shared meeting among co-located participants using their personal mobile devices. In this paper, we describe the Bluetooth-based wireless protocol and interaction concept on which CalendarCast is based, present a prototypical implementation with Android smartphones and dedicated beacons, and report on results of a user study demonstrating improved task performance compared to unaugmented calendars. The motivating scenario for CalendarCast occurs quite often in a variety of contexts, for example at the end of a prior meeting or during ad-hoc conversations in the hallway. Despite a large variety of digital calendar tools, this situation still usually involves a lengthy manual comparison of free and busy time slots. CalendarCast utilizes Bluetooth Low Energy (BTLE) advertisement broadcasts to share the required free/busy information with a limited, localized audience, on demand only, and without revealing detailed personal information. No prior knowledge about the other participants, such as email addresses or account names, is required.
SkullConduct: Biometric User Identification on Eyewear Computers Using Bone Conduction Through the Skull
Secure user identification is important for the increasing number of eyewear computers but limited input capabilities pose significant usability challenges for established knowledge-based schemes, such as passwords or PINs. We present SkullConduct, a biometric system that uses bone conduction of sound through the user’s skull as well as a microphone readily integrated into many of these devices, such as Google Glass. At the core of SkullConduct is a method to analyze the characteristic frequency response created by the user’s skull using a combination of Mel Frequency Cepstral Coefficient (MFCC) features as well as a computationally light-weight 1NN classifier. We report on a controlled experiment with 10 participants that shows that this frequency response is person-specific and stable — even when taking off and putting on the device multiple times — and thus serves as a robust biometric. We show that our method can identify users with 97.0% accuracy and authenticate them with an equal error rate of 6.9%, thereby bringing biometric user identification to eyewear computers equipped with bone conduction technology.
Use Your Words: Designing One-time Pairing Codes to Improve User Experience
The Internet of Things is connecting an ever-increasing number of devices. These devices often require access to personal information, but their meagre user interfaces usually do not permit traditional modes of authentication. On such devices, one-time pairing codes are often used instead. This pairing process can involve transcribing randomly generated alphanumeric codes, which can be frustrating, slow and error-prone. In this paper, we present an improved pairing method that uses sets of English words instead of random strings. The word method, although longer in terms of character length, allows users to pair devices more quickly, whilst still maintaining the complexity necessary for secure interactions.
SESSION: (Re)understanding Making: A Critical Broadening of Maker Cultures
Reconstituting the Utopian Vision of Making: HCI After Technosolutionism
HCI research has both endorsed “making” for its innovation and democratization capacity and critiqued its underlying technosolutionism, i.e., the idea that technology provides solutions to complex social problems. This paper offers a reflexive-interventionist approach that simultaneously takes seriously the critiques of making’s claims as technosolutionist while also embracing its utopian project as worth reconstituting in broader sociopolitical terms. It applies anthropological theory of the global and feminist-utopianism to the analysis of findings from research on making cultures in Taiwan and China. More specifically, the paper provides ethnographic snippets of utopian glimmers in order to speculatively imagine and explore alternative futures of making worth pursuing, and in so doing re-constitute the utopian vision of making.
Values in Repair
This paper examines the question of “values in repair” — the distinct forms of meaning and care that may be built into human-technology interactions through individual and collective acts of repair. Our work draws on research in HCI and the social sciences and findings from ethnographic studies in four sites — two amateur “fixers” collectives in Brooklyn and Seattle, USA and two mobile phone repair communities in Uganda and Bangladesh — to advance two arguments. First, studies of repair account for new sites and processes of value that differ from those appearing at HCI’s better-studied moments of design and use. Second, repair may embed modes of human interaction with technology and with each other in ways that surface values as contingent and ongoing accomplishments, suggesting ongoing processes of valuation that can never be fully fixed or commoditized. These insights help HCI account for human relationships to technology built into the world through repair.
Making Community: The Wider Role of Makerspaces in Public Life
Makerspaces, public workshops where makers can share tools and knowledge, are a growing resource for amateurs and professionals alike. While the role of makerspaces in innovation and peer learning is widely discussed, we attempt to look at the wider roles that makerspaces play in public life. Through site visits and interviews at makerspaces and similar facilities across the UK, we have identified additional roles that these spaces play: as social spaces, in supporting wellbeing, by serving the needs of the communities they are located in and by reaching out to excluded groups. Based on these findings, we suggest implications and future directions for both makerspace organisers and community researchers.
Continuing the Dialogue: Bringing Research Accounts Back into the Field
This paper examines the work to bring HCI research back to the people and sites under study. We draw on our ongoing collaboration with members of feminist hackerspaces in Northern California where we conducted fieldwork over eighteen months in 2014 and 2015. Together we created and distributed a zine — a self-published magazine produced with a photocopier — that knit together content of a published paper with local histories of feminist print production. By tracing the efforts involved in this collaboration and its effects on our research project, our research community, and ourselves, we extend HCI’s efforts to foster continued dialogue with our sites of study. We end by outlining strategies for bolstering this mission both within and beyond HCI.
SESSION: Learning Programming
Programming, Problem Solving, and Self-Awareness: Effects of Explicit Guidance
More people are learning to code than ever, but most learning opportunities do not explicitly teach the problem solving skills necessary to succeed at open-ended programming problems. In this paper, we present a new approach to impart these skills, consisting of: 1) explicit instruction on programming problem solving, which frames coding as a process of translating mental representations of problems and solutions into source code, 2) a method of visualizing and monitoring progression through six problem solving stages, 3) explicit, on-demand prompts for learners to reflect on their strategies when seeking help from instructors, and 4) context-sensitive help embedded in a code editor that reinforces the problem solving instruction. We experimentally evaluated the effects of our intervention across two 2-week web development summer camps with 48 high school students, finding that the intervention increased productivity, independence, programming self-efficacy, metacognitive awareness, and growth mindset. We discuss the implications of these results on learning technologies and classroom instruction.
Understanding Conversational Programmers: A Perspective from the Software Industry
Recent research suggests that some students learn to program with the goal of becoming conversational programmers: they want to develop programming literacy skills not to write code in the future but mainly to develop conversational skills and communicate better with developers and to improve their marketability. To investigate the existence of such a population of conversational programmers in practice, we surveyed professionals at a large multinational technology company who were not in software development roles. Based on 3151 survey responses from professionals who never or rarely wrote code, we found that a significant number of them (42.6%) had invested in learning programming on the job. While many of these respondents wanted to perform traditional end-user programming tasks (e.g., data analysis), we discovered that two top motivations for learning programming were to improve the efficacy of technical conversations and to acquire marketable skillsets. The main contribution of this work is in empirically establishing the existence and characteristics of conversational programmers in a large software development context.
Blind Spots in Youth DIY Programming: Examining Diversity in Creators, Content, and Comments within the Scratch Online Community
Much attention has focused on the lack of diversity in access and participation in digital media available to youth. Far less attention has been paid to the diversity of youth creators and the content that is produced by youth. We examined the diversity of project creators, content, and comments in one of the largest youth programming sites called Scratch (scratch.mit.edu), with over 7 million registered members between the ages of 6 and 16, over 10 million posted projects and 16 million comments. We used keyword and webcrawler searches to reveal that only a small number of users (<.01%) self-disclosed their racial and ethnic identities. Case studies further illuminated how project designs and comments delved into race, provided cultural critique or addressed racial harassment. In the discussion, we address these blind spots of diversity in massive online DIY youth communities, discuss methodological limitations, and provide recommendations for future directions in supporting diversity.
Skill Progression in Scratch Revisited
This paper contributes to a growing body of work that attempts to measure informal learning online by revisiting two of the most surprising findings from a 2012 study on skill progression in Scratch by Scaffidi and Chambers: users tend to share decreasingly code-heavy projects over time; and users’ projects trend toward using a less diverse range of code concepts. We revisit Scaffidi and Chambers’s work in three ways: with a replication of their study using the full population of projects from which they sampled, a simulation study that replicates both their analytic and sampling methodology, and an alternative analysis that addresses several important threats. Our results suggest that the population estimates are opposite in sign to those presented in the original work.
SESSION: Tracking Fingers
SkinTrack: Using the Body as an Electrical Waveguide for Continuous Finger Tracking on the Skin
SkinTrack is a wearable system that enables continuous touch tracking on the skin. It consists of a ring, which emits a continuous high frequency AC signal, and a sensing wristband with multiple electrodes. Due to the phase delay inherent in a high-frequency AC signal propagating through the body, a phase difference can be observed between pairs of electrodes. SkinTrack measures these phase differences to compute a 2D finger touch coordinate. Our approach can segment touch events at 99% accuracy, and resolve the 2D location of touches with a mean error of 7.6mm. As our approach is compact, non-invasive, low-cost and low-powered, we envision the technology being integrated into future smartwatches, supporting rich touch interactions beyond the confines of the small touchscreen.
Finexus: Tracking Precise Motions of Multiple Fingertips Using Magnetic Sensing
With the resurgence of head-mounted displays for virtual reality, users need new input devices that can accurately track their hands and fingers in motion. We introduce Finexus, a multipoint tracking system using magnetic field sensing. By instrumenting the fingertips with electromagnets, the system can track fine fingertip movements in real time using only four magnetic sensors. To keep the system robust to noise, we operate each electromagnet at a different frequency and leverage bandpass filters to distinguish signals attributed to individual sensing points. We develop a novel algorithm to efficiently calculate the 3D positions of multiple electromagnets from corresponding field strengths. In our evaluation, we report an average accuracy of 1.33 mm, as compared to results from an optical tracker. Our real-time implementation shows Finexus is applicable to a wide variety of human input tasks, such as writing in the air.
FingerIO: Using Active Sonar for Fine-Grained Finger Tracking
We present fingerIO, a novel fine-grained finger tracking solution for around-device interaction. FingerIO does not require instrumenting the finger with sensors and works even in the presence of occlusions between the finger and the device. We achieve this by transforming the device into an active sonar system that transmits inaudible sound signals and tracks the echoes of the finger at its microphones. To achieve sub-centimeter level tracking accuracies, we present an innovative approach that uses a modulation technique commonly used in wireless communication called Orthogonal Frequency Division Multiplexing (OFDM). Our evaluation shows that fingerIO can achieve 2-D finger tracking with an average accuracy of 8 mm using the in-built microphones and speaker of a Samsung Galaxy S4. It also tracks subtle finger motion around the device, even when the phone is in the pocket. Finally, we prototype a smart watch form-factor fingerIO device and show that it can extend the interaction space to a 0.5×0.25 m² region on either side of the device and work even when it is fully occluded from the finger.
DigitSpace: Designing Thumb-to-Fingers Touch Interfaces for One-Handed and Eyes-Free Interactions
Thumb-to-fingers interfaces augment touch widgets on fingers, which are manipulated by the thumb. Such interfaces are ideal for one-handed eyes-free input since touch widgets on the fingers enable easy access by the stylus thumb. This study presents DigitSpace, a thumb-to-fingers interface that addresses two ergonomic factors: hand anatomy and touch precision. Hand anatomy restricts possible movements of a thumb, which further influences the physical comfort during the interactions. Touch precision is a human factor that determines how precisely users can manipulate touch widgets set on fingers, which determines effective layouts of the widgets. Buttons and touchpads were considered in our studies to enable discrete and continuous input in an eyes-free manner. The first study explores the regions of fingers where the interactions can be comfortably performed. According to the comfort regions, the second and third studies explore effective layouts for button and touchpad widgets. The experimental results indicate that participants could discriminate at least 16 buttons on their fingers. For the touchpad, participants were asked to perform unistrokes. Our results revealed that since each participant performed a coherent writing behavior, personalized $1 recognizers could offer 92% accuracy on a cross-finger touchpad. A series of design guidelines are proposed for designers, and a DigitSpace prototype that uses magnetic-tracking methods is demonstrated.
SESSION: VR for Collaboration
Head Mounted Projection Display & Visual Attention: Visual Attentional Processing of Head Referenced Static and Dynamic Displays while in Motion and Standing
The Head Mounted Projection Display (HMPD) is a growing interest area in HCI. Although various aspects of HMPDs have been investigated, there is not enough information regarding the effect of HMPDs (i.e., head referenced static and dynamic displays while a user is in motion and standing) on visual attentional performance. For this purpose, we conducted a user study (N=18) with three experimental conditions (control, standing, walking) and two visual perceptual tasks (with dynamic and static displays). Significant differences between conditions were only found for the task with the dynamic display; accuracy was lower in the walking condition compared to the other two conditions. Our work contributes an empirical investigation of the effect of HMPDs on visual attentional performance by providing data-driven benchmarks for developing graphical user interface design guidelines for HMPDs.
Stabilized Annotations for Mobile Remote Assistance
Recent mobile technology has provided new opportunities for creating remote assistance systems. However, mobile support systems present a particular challenge: both the camera and display are held by the user, leading to shaky video. When pointing or drawing annotations, this means that the desired target often moves, causing the gesture to lose its intended meaning. To address this problem, we investigate annotation stabilization techniques, which allow annotations to stick to their intended location. We studied two annotation systems, using three different forms of annotations, with both tablets and head-mounted displays. Our analysis suggests that stabilized annotations and head-mounted displays are only beneficial in certain situations. However, the simplest approach of automatically freezing video while drawing annotations was surprisingly effective in facilitating the completion of remote assistance tasks.
Parallel Eyes: Exploring Human Capability and Behaviors with Paralleled First Person View Sharing
Our research explores how humans can understand and develop viewing behaviors with mutual paralleled first person view sharing, in which a person can see others’ first person video perspectives as well as their own perspective in real time. We developed a paralleled first person view sharing system which consists of multiple video see-through head mounted displays and an embedded eye tracking system. With this system, four persons can see four shared first person videos of each other. We then conducted workshop-based research with two activities, drawing pictures and playing a simple chasing game with our view sharing system. Our results show that 1) people can complement each other’s memory and decisions and 2) people can develop their viewing behaviors to understand their own physical embodiment and spatial relationship with others in complex situations. Our findings about patterns of viewing behavior and design implications will contribute to building design experience in paralleled view sharing applications.
Gaze Augmentation in Egocentric Video Improves Awareness of Intention
Video communication using head-mounted cameras could be useful to mediate shared activities and support collaboration. Growing popularity of wearable gaze trackers presents an opportunity to add gaze information on the egocentric video. We hypothesized three potential benefits of gaze-augmented egocentric video to support collaborative scenarios: support deictic referencing, enable grounding in communication, and enable better awareness of the collaborator’s intentions. Previous research on using egocentric videos for real-world collaborative tasks has failed to show clear benefits of gaze point visualization. We designed a study, deconstructing a collaborative car navigation scenario, to specifically target the value of gaze-augmented video for intention prediction. Our results show that viewers of gaze-augmented video could predict the direction taken by a driver at a four-way intersection more accurately and more confidently than a viewer of the same video without the superimposed gaze point. Our study demonstrates that gaze augmentation can be useful and encourages further study in real-world collaborative scenarios.
SESSION: I Want to Know My Data: Democratizing, Opening and Comprehending Data
Open Data in Scientific Settings: From Policy to Practice
Open access to data is commonly required by funding agencies, journals, and public policy, despite the lack of agreement on the concept of “open data.” We present findings from two longitudinal case studies of major scientific collaborations, the Sloan Digital Sky Survey in astronomy and the Center for Dark Energy Biosphere Investigations in deep subseafloor biosphere studies. These sites offer comparisons in rationales and policy interpretations of open data, which are shaped by their differing scientific objectives. While policy rationales and implementations shape infrastructures for scientific data, these rationales also are shaped by pre-existing infrastructure. Meanings of the term “open data” are contingent on project objectives and on the infrastructures to which they have access.
The Datacatcher: Batch Deployment and Documentation of 130 Location-Aware, Mobile Devices That Put Sociopolitically-Relevant Big Data in People’s Hands: Polyphonic Interpretation at Scale
This paper reports the results of a field trial of 130 bespoke devices as well as our methodological approach to the undertaking. Datacatchers are custom-built, location-aware devices that stream messages about the area they are in. Derived from a large number of ‘big data’ sources, the messages simultaneously draw attention to the socio-political topology of the lived environment and to the nature of big data itself. We used a service design consultancy to deploy the devices, and two teams of documentary filmmakers to capture participants’ experiences. Here we discuss the development of this approach and how people responded to the Datacatchers as products, as revealing sociopolitical issues, and as purveyors of big data that might be open to question.
Physikit: Data Engagement Through Physical Ambient Visualizations in the Home
Internet of things (IoT) devices and sensor kits have the potential to democratize the access, use, and appropriation of data. Despite the increased availability of low cost sensors, most of the produced data is “black box” in nature: users often do not know how to access or interpret data. We propose a “human-data design” approach in which end-users are given tools to create, share, and use data through tangible and physical visualizations. This paper introduces Physikit, a system designed to allow users to explore and engage with environmental data through physical ambient visualizations. We report on the design and implementation of Physikit, and present a two-week field study which showed that participants got an increased sense of the meaning of data, embellished and appropriated the basic visualizations to make them blend into their homes, and used the visualizations as a probe for community engagement and social behavior.
Accountable: Exploring the Inadequacies of Transparent Financial Practice in the Non-Profit Sector
Increasingly, governments and organisations publish data on expenditure and finance as ‘open’ data in order to be more transparent to the public in how funding is spent. Accountable is a web-based tool that visualises and relates open financial data provided by local government and non-profit organisations (NPOs) in the UK. A qualitative study was conducted where Accountable was treated as a technology probe, and used by representatives of NPOs and members of the public who invest their time or effort voluntarily into such organisations. The study highlighted how: current open data sets provided by public bodies are inadequate in their representation of funding structures; the focus on finance and fiscal expenditure in such data makes invisible the in-kind effort of volunteers and the wider beneficiaries of an organisation’s work; and problems arising from the interoperability of open data technologies. The paper concludes with implications for the design of future systems, considering the domains of transparency and accountability in relation to the findings.
SESSION: The Economics of Being Online
Designing for Labour: Uber and the On-Demand Mobile Workforce
Apps allowing passengers to hail and pay for taxi service on their phone, such as Uber and Lyft, have affected the livelihood of thousands of workers worldwide. In this paper we draw on interviews with traditional taxi drivers, rideshare drivers and passengers in London and San Francisco to understand how “ride-sharing” transforms the taxi business. With Uber, the app not only manages the allocation of work, but is directly involved in “labour issues”: changing the labour conditions of the work itself. We document how Uber driving demands new skills such as emotional labour, while increasing worker flexibility. We discuss how the design of new technology is also about creating new labour opportunities — jobs — and how we might think about our responsibilities in designing these labour relations.
‘MASTerful’ Matchmaking in Service Transactions: Inferred Abilities, Needs and Interests versus Activity Histories
Timebanking is a growing type of peer-to-peer service exchange, but is hampered by the effort of finding good transaction partners. We seek to reduce this effort by using a Matching Algorithm for Service Transactions (MAST). MAST matches transaction partners in terms of similarity of interests and complementarity of abilities and needs. We present an experiment involving data and participants from a real timebanking network that evaluates the acceptability of MAST, and shows that such an algorithm can retrieve matches that are subjectively better than matches based on matching the category of people’s historical offers or requests to the category of a current transaction request.
Of Two Minds, Multiple Addresses, and One Ledger: Characterizing Opinions, Knowledge, and Perceptions of Bitcoin Across Users and Non-Users
Digital currencies represent a new method for exchange — a payment method with no physical form, made real by the Internet. This new type of currency was created to ease online transactions and to provide greater convenience in making payments. However, a critical component of a monetary system is the people who use it. Acknowledging this, we present results of our interview study (N=20) with two groups of participants (users and non-users) about how they perceive the most popular digital currency, Bitcoin. Our results reveal: non-users mistakenly believe they are incapable of using Bitcoin, users are not well-versed in how the protocol functions, they have misconceptions about the privacy of transactions, and that Bitcoin satisfies properties of ideal payment systems as defined by our participants. Our results illustrate Bitcoin’s tradeoffs, its uses, and barriers to entry.
Hosting via Airbnb: Motivations and Financial Assurances in Monetized Network Hospitality
We examine how financial assurance structures and the clearly defined financial transaction at the core of monetized network hospitality reduce uncertainty for Airbnb hosts and guests. We apply the principles of social exchange and intrinsic and extrinsic motivation to a qualitative study of Airbnb hosts to 1) describe activities that are facilitated by the peer-to-peer exchange platform and 2) how the assurance of the initial financial exchange facilitates additional social exchanges between hosts and guests. The study illustrates that the financial benefits of hosting do not necessarily crowd out intrinsic motivations for hosting but instead strengthen them and even act as a gateway to further social exchange and interpersonal interaction. We describe the assurance structures in networked peer-to-peer exchange, and explain how such assurances can reconcile contention between extrinsic and intrinsic motivations. We conclude with implications for design and future research.
SESSION: Designing Physical Games
Digitally Augmenting Sports: An Opportunity for Exploring and Understanding Novel Balancing Techniques
Using game balancing techniques can provide the right level of challenge and hence enhance player engagement for sport players with different skill levels. Digital technology can support and enhance balancing techniques in sports, for example, by adjusting players’ level of intensity based on their heart rate. However, there is limited knowledge on how to design such balancing and its impact on the user experience. To address this we created two novel balancing techniques enabled by digitally augmenting a table tennis table. We adjusted the more skilled player’s performance by inducing two different styles of play and studied the effects on game balancing and player engagement. We showed that by altering the more skilled player’s performance we can balance the game through: (i) encouraging game mistakes, and (ii) changing the style of play to one that is easier for the opponent to counteract. We outline the advantages and disadvantages of each approach, extending our understanding of game balancing design. We also show that digitally augmenting sports offers opportunities for novel balancing techniques while facilitating engaging experiences, guiding those interested in HCI and sports.
SwimTrain: Exploring Exergame Design for Group Fitness Swimming
We explore design opportunities for using interactive technologies to enrich group fitness exercises, such as group spinning and swimming, in which an instructor guides a workout program and members synchronously perform a shared physical activity. As a case study, we investigate group fitness swimming. The design challenge is to coordinate a large group of people by considering trade-offs between social awareness and information overload. Our resulting group fitness swimming game, SwimTrain, allows a group of people to have localized synchronous interactions over a virtual space. The game uses competitive and cooperative phases to help group members acquire group-wide awareness. The results of our user study showed that SwimTrain provides socially-enriched swimming experiences, motivates swimmers to follow a training regimen and exert themselves more intensely, and allows strategic game play dealing with skill differences among swimmers. Consequently, we propose several practical considerations for designing group fitness exergames.
SESSION: Work, Multitasking, and Distraction
Influence of Display Transparency on Background Awareness and Task Performance
It has been argued that transparent displays are beneficial for certain tasks by allowing users to simultaneously see on-screen content as well as the environment behind the display. However, it is still unclear how much background awareness users gain and whether performance suffers for tasks performed on the transparent display, since users are no longer shielded from distractions. Therefore, we investigate the influence of display transparency on task performance and background awareness in a dual-task scenario. We conducted an experiment comparing transparent displays with conventional displays in different horizontal and vertical configurations. Participants performed an attention-demanding primary task on the display while simultaneously observing the background for target stimuli. Our results show that transparent and horizontal displays increase the ability of participants to observe the background while keeping primary task performance constant.
Email Duration, Batching and Self-interruption: Patterns of Email Use on Productivity and Stress
While email provides numerous benefits in the workplace, it is unclear how patterns of email use might affect key workplace indicators of productivity and stress. We investigate how three patterns of email use (duration, interruption habit, and batching) relate to perceived workplace productivity and stress. We tracked email usage with computer logging, biosensors and daily surveys for 40 information workers in their in situ workplace environments for 12 workdays. We found that the longer the daily time spent on email, the lower the perceived productivity and the higher the measured stress. People who primarily check email through self-interruptions report higher productivity with longer email duration compared to those who rely on notifications. Batching email is associated with higher rated productivity with longer email duration, but despite widespread claims, we found no evidence that batching email leads to lower stress. We discuss the implications of our results for improving organizational email practices.
‘Don’t Waste My Time’: Use of Time Information Improves Focus
Maintaining work focus when on a computer is a major challenge, and people often feel that they use their time ineffectively. To improve focus we designed meTime, a real-time awareness application that shows users how they allocate their time across applications. In two real-world deployments involving 118 participants, we examined whether greater awareness of time use improves focus. In our first deployment, we provided awareness information using meTime to both office workers and students. Exposure to meTime reduced use of social media, email, browsing and total time online. However, increased awareness did not affect time spent in productivity applications. A second educational deployment largely replicated these results and showed that meTime also reduced users’ perceptions of their ability to focus effectively. Changed perceptions were associated with higher class grades. We discuss practical and theoretical implications as well as design principles for time-use applications.
Neurotics Can’t Focus: An in situ Study of Online Multitasking in the Workplace
In HCI research, attention has focused on understanding external influences on workplace multitasking. We explore instead how multitasking might be influenced by individual factors: personality, stress, and sleep. Forty information workers’ online activity was tracked over two work weeks. The median duration of online screen focus was 40 seconds. The personality trait of Neuroticism was associated with shorter online focus duration and Impulsivity-Urgency was associated with longer online focus duration. Stress and sleep duration showed trends to be inversely associated with online focus. Shorter focus duration was associated with lower assessed productivity at day’s end. Factor analysis revealed a factor of lack of control which significantly predicts multitasking. Our results suggest that there could be a trait for distractibility where some individuals are susceptible to online attention shifting in the workplace. Our results have implications for information systems (e.g. educational systems, game design) where attention focus is key.
SESSION: Physical Disability and Assistive Technologies
An Intimate Laboratory?: Prostheses as a Tool for Experimenting with Identity and Normalcy
This paper is about the aspects of ability, selfhood, and normalcy embodied in people’s relationships with prostheses. Drawing on interviews with 14 individuals with upper-limb loss and diverse experiences with prostheses, we find people not only choose to use and not use prostheses throughout their lives but also form close and complex relationships with them. The design of “assistive” technology often focuses on enhancing function; however, we found that prostheses played important roles in people’s development of identity and sense of normalcy. Even when a prosthesis failed functionally, such as was the case with 3D-printed prostheses created by an online open-source maker community (e-NABLE), we found people still praised the design and initiative because of the positive impacts on popular culture, identity, and community building. This work surfaces crucial questions about the role of design interventions in identity production, the promise of maker communities for accelerating innovation, and a broader definition of “assistive” technology.
The Design of Assistive Location-based Technologies for People with Ambulatory Disabilities: A Formative Study
In this paper, we investigate how people with mobility impairments assess and evaluate accessibility in the built environment and the role of current and emerging location-based technologies therein. We conducted a three-part formative study with 20 mobility impaired participants: a semi-structured interview (Part 1), a participatory design activity (Part 2), and a design probe activity (Part 3). Parts 2 and 3 actively engaged our participants in exploring and designing the future of what we call assistive location-based technologies (ALTs): location-based technologies that specifically incorporate accessibility features to support navigating, searching, and exploring the physical world. Our Part 1 findings highlight how existing mapping tools provide accessibility benefits even though they are often not explicitly designed for such uses. Findings from Parts 2 and 3 help identify and uncover useful features of future ALTs. In particular, we synthesize 10 key features and 6 key data qualities. We conclude with ALT design recommendations.
Helping Hands: Requirements for a Prototyping Methodology for Upper-limb Prosthetics Users
This paper presents a case study of three participants with upper-limb amputations working with researchers to design prosthetic devices for specific tasks: playing the cello, operating a hand-cycle, and using a table knife. Our goal was to identify requirements for a design process that can engage the assistive technology user in rapidly prototyping assistive devices that fill needs not easily met by traditional assistive technology. Our study made use of 3D printing and other playful and practical prototyping materials. We discuss materials that support on-the-spot design and iteration, dimensions along which in-person iteration is most important (such as length and angle) and the value of a supportive social network for users who prototype their own assistive technology. From these findings we argue for the importance of extensions in supporting modularity, community engagement, and relatable prototyping materials in the iterative design of prosthetics.
Motivating Stroke Rehabilitation Through Music: A Feasibility Study Using Digital Musical Instruments in the Home
Digital approaches to physical rehabilitation are becoming increasingly common, and embedding these new technologies within a musical framework may be particularly motivating. The current feasibility study aimed to test whether digital musical instruments (DMIs) could aid in the self-management of stroke rehabilitation in the home, focusing on seated forward reach movements of the upper limb. Participants (n=3), all at least 11 months post-stroke, participated in 15 researcher-led music-making sessions over a 5-week intervention period. The sessions involved them ‘drumming’ to the beat of self-chosen tunes using bespoke digital drum pads that were synced wirelessly to an iPad app and triggered percussion sounds as feedback. They were encouraged to continue these exercises when the researcher was not present. The results showed significant levels of self-management and significant increases in functional measures with some evidence for transfer into tasks of daily living.
SESSION: Citizenry and the Science? Design as Inquiry and Participation
Everyday Food Science as a Design Space for Community Literacy and Habitual Sustainable Practice
Focusing on food as a platform for everyday science, this paper details our fieldwork with practitioners who routinely experiment with preserving, fermenting, brewing, pickling, foraging for, and healing with food. We engage with these at-home science initiatives as community-driven efforts to construct knowledge and envision alternatives to top-down modes of production. Our findings detail the motivations, challenges, and workarounds behind these practices, as well as participants’ hybrid lay-professional knowledge, and the iterative mechanisms by which their expertise is scaffolded. Our paper contributes to CHI’s amateur/citizen science research by examining how social, digital, and physical materials shape scientific literacy; and to sustainable HCI by presenting habitual practice as an alternative (bottom-up) form of food production and preservation.
You Put What, Where?: Hobbyist Use of Insertable Devices
The human body has emerged as more than a canvas for wearable electronic devices. Devices can also go within the body, such as internal medical devices. Within the last decade, individual hobbyists have begun voluntarily inserting non-medical devices in, through and underneath their skin. This paper investigates the current use of insertable devices. Through interviews we report on the types of devices people are inserting into their bodies. We classify the use of insertables and the reasons individuals choose insertables over more traditional wearable or luggable devices. These classifications facilitate understanding of insertables as a legitimate category of device for hardware designers that presents new challenges for interaction designers.
On Looking at the Vagina through Labella
Women’s understanding of their own intimate anatomy has been identified as critical to women’s reproductive health and sexual wellbeing. However, talking about it, seeking medical help when necessary, and examining oneself in order to ‘know’ oneself are complicated by socio-cultural constructions of the vagina, i.e. that it is something private, shameful and not to be talked about. In response to this, we designed Labella, an augmented system that supports intimate bodily knowledge and pelvic fitness in women. It combines a pair of underwear and a mobile phone as a tool for embodied intimate self-discovery. In this paper, we describe Labella and its evaluation with fourteen women, aged 25-63. We show how, through situated embodied perception, Labella empowers ‘looking’. We highlight how the simple act of augmented looking enables the construction of knowledge which ranges from establishing the ‘very basics’ through to a nuanced understanding of pelvic muscle structure. Finally, we highlight the role of awkwardness and humour in the design of interactions to overcome taboo.
Citizens for Science and Science for Citizens: The View from Participatory Design
The rise of citizen science as a form of public participation in research has engaged many disciplines and communities. This paper uses the lens of Participatory Design to contrast two different approaches to citizen science: one that puts citizens in the service of science and another that involves them in the production of knowledge. Through an empirical study of a diverse array of projects, we show how participation in citizen science often takes the more limited forms suggested by the former approach. Our analysis highlights the implications of limited participation and demonstrates how the CHI community is uniquely positioned to ameliorate these limitations.
To Sign Up, or not to Sign Up?: Maximizing Citizen Science Contribution Rates through Optional Registration
Many citizen science projects ask people to create an account before they participate – some require it. What effect does the registration process have on the number and quality of contributions? We present a controlled study comparing the effects of mandatory registration with an interface that enables people to participate without registering, but allows them to sign up to ‘claim’ contributions. We demonstrate that removing the requirement to register increases the number of visitors to the site contributing to the project by 62%, without reducing data quality. We also discover that contribution rates are the same for people who choose to register, and those who remain anonymous, indicating that the interface should cater for differences in participant motivation. The study provides evidence that to maximize contribution rates, projects should offer the option to create an account, but the process should not be a barrier to immediate contribution, nor should it be required.
SESSION: Evaluating Technological Application in Education
Facilitator, Functionary, Friend or Foe?: Studying the Role of iPads within Learning Activities Across a School Year
We present the findings from a longitudinal study of iPad use in a Primary school classroom. While tablet devices have found their way into classroom environments, we still lack in-depth and long-term studies of how they integrate into everyday classroom activities. Our findings illustrate in-classroom tablet use and the broad range of learning activities in subjects such as maths, languages, social sciences, and even physical education. Our observations expand current models on teaching and learning supported by tablet technology. Our findings are child-centred, focusing on three different roles that tablets can play as part of learning activities: Friend, Functionary, and Facilitator. This new perspective on in-classroom tablet use can facilitate critical discussions around the integration and impact of these devices in the educational context, from a design and educational point of view.
SESSION: Quantifying Efficiency of Input Methods
Modeling the Steering Time Difference between Narrowing and Widening Tunnels
The performance of trajectory-based tasks is modeled by the steering law, which predicts the required time from the index of difficulty (ID). This paper focuses on the fact that the time required to pass through a straight path with linearly varying width differs depending on the direction of the movement. In this study, we develop an expression for the relationship between the IDs of narrowing and widening paths. This expression can be used to predict the movement time needed to pass through in the opposite direction from only a few data points, after measuring the time needed in the other direction. In the experiment, the times for five IDs were predicted with high precision from the measured time for one ID, illustrating the effectiveness of the proposed method.
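For readers unfamiliar with the background this abstract relies on, the steering law and the index of difficulty for a tunnel of linearly varying width can be written out explicitly. This is the standard Accot-Zhai formulation from the prior literature, not the new expression the paper develops:

```latex
% Steering law: movement time T grows linearly with the index of
% difficulty ID, with empirically fitted constants a and b.
T = a + b \cdot \mathit{ID}

% For a straight tunnel of length A whose width varies linearly from
% W_1 at entry to W_2 at exit (W_1 \neq W_2), integrating 1/W(s)
% along the path gives:
\mathit{ID} = \int_0^{A} \frac{ds}{W(s)}
            = \frac{A}{W_2 - W_1}\,\ln\frac{W_2}{W_1}
```

Note that this nominal ID is identical for both movement directions, which is precisely why the observed time asymmetry between narrowing and widening tunnels requires the separate treatment the paper proposes.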
Modelling Error Rates in Temporal Pointing
We present a novel model to predict error rates in temporal pointing, in which a target appears and can be selected only within a limited time window. Unlike in spatial pointing, there is no movement to control in the temporal domain; the user can only determine when to launch the response. Although this task is common in interactions requiring temporal precision, rhythm, or synchrony, no previous HCI model predicts error rates as a function of task properties. Our model assumes that users have an implicit point of aim but their ability to elicit the input event at that time is hampered by variability in three processes: 1) an internal time-keeping process, 2) a response-execution stage, and 3) input processing in the computer. We derive a mathematical model with two parameters from these assumptions. High fit is shown for user performance with two task types, including a rapidly paced game. The model can explain previous findings showing that touchscreens are much worse in temporal pointing than physical input devices. It also has novel implications for design that extend beyond the conventional wisdom of minimising latency.
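One plausible formalization of the assumptions listed in this abstract, offered only as an illustrative sketch and not as the paper’s actual model: if the realized input time $t$ is normally distributed around an implicit aim point $\mu$, with standard deviation $\sigma$ aggregating the time-keeping, response-execution, and input-processing variability, then the error rate for a selection window of width $W$ centered at $0$ is the probability mass falling outside the window:

```latex
% Assume realized input time t ~ N(\mu, \sigma^2) and a selection
% window [-W/2, W/2]; \Phi is the standard normal CDF.
P(\text{error}) = 1 - \left[
    \Phi\!\left(\frac{W/2 - \mu}{\sigma}\right)
  - \Phi\!\left(\frac{-W/2 - \mu}{\sigma}\right)
\right]
```

The two free parameters $(\mu, \sigma)$ mirror the two-parameter structure the abstract describes, and the formulation predicts higher error rates for noisier input channels, consistent with the touchscreen finding.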
SESSION: Mobile Behaviors
Monetary Assessment of Battery Life on Smartphones
Research claims that users value the battery life of their smartphones, but no study to date has attempted to quantify battery value and how this value changes according to users’ current context and needs. Previous work has quantified the monetary value that smartphone users place on their data (e.g., location), but not on battery life. Here we present a field study and methodology for systematically measuring the monetary value of smartphone battery life, using a reverse second-price sealed-bid auction protocol. Our results show that the prices for the first and last 10% battery segments differ substantially. Our findings also quantify the tradeoffs that users consider in relation to battery, and provide a monetary model that can be used to measure the value of apps and enable fair ad-hoc sharing of smartphone resources.
Technology at the Table: Attitudes about Mobile Phone Use at Mealtimes
Mealtimes are a cherished part of everyday life around the world. Often centered on family, friends, or special occasions, sharing meals is a practice embedded with traditions and values. However, as mobile phone adoption becomes increasingly pervasive, tensions emerge about how appropriate it is to use personal devices while sharing a meal with others. Furthermore, while personal devices have been designed to support awareness for the individual user (e.g., notifications), little is known about how to support shared awareness of acceptability in social settings such as meals. In order to understand attitudes about mobile phone use during shared mealtimes, we conducted an online survey with 1,163 English-speaking participants. We find that attitudes about mobile phone use at meals differ depending on the particular phone activity and on who at the meal is engaged in that activity (children versus adults). We also show that three major factors impact participants’ attitudes: 1) their own mobile phone use; 2) their age; and 3) whether a child is present at the meal. We discuss the potential for incorporating social awareness features into mobile phone systems to ease tensions around conflicting mealtime behaviors and attitudes.
“I thought she would like to read it”: Exploring Sharing Behaviors in the Context of Declining Mobile Web Use
The use of applications on mobile devices has changed dramatically over the past few years. While web browsing was once a common activity, it is now reported that 86% of time on mobile phones is spent in apps other than the browser. We set out to understand how the mobile web currently fits into people’s lives and what web sessions look like. Finding a dramatic reduction in mobile web revisitation rates compared to previous work, and that a large number of sessions comprised single page views, we then studied how web content is shared with others in mobile messaging, the source of many single-page sessions. The HCI community has not heavily studied this sharing activity that many people perform daily. We conclude with design implications for new mobile applications, drawn from our two studies (287 participants combined) in which we analyzed actual logs of mobile web use and link-sharing behavior.
Forget-me-not: History-less Mobile Messaging
Text messaging has long been a popular activity, and today smartphone apps enable users to choose from a plethora of mobile messaging applications. While we know a lot about SMS practices, we know less about practices of messaging applications. In this paper, we take a first step to explore one ubiquitous aspect of mobile messaging — messaging history. We designed, built, and trialled a mobile messaging application without history named forget-me-not. The two-week trial showed that history-less messaging no longer supports chit-chat as seen in e.g. WhatsApp, but is still considered conversational and more ‘engaging’. Participants expressed being lenient and relaxed about what they wrote. Removing the history allowed us to gain insights into what uses history has in other mobile messaging applications, such as planning events, allowing for distractions, and maintaining multiple conversation threads.
SESSION: Touchscreen Interactions
Detecting Swipe Errors on Touchscreens using Grip Modulation
We show that when users make errors on mobile devices they make immediate and distinct physical responses that can be observed with standard sensors. We used three standard cognitive tasks (Flanker, Stroop and SART) to induce errors from 20 participants. Using simple low-resolution capacitive touch sensors placed around a standard mobile device and the built-in accelerometer, we demonstrate that errors can be predicted at low error rates from micro-adjustments to hand grip and movement in the period shortly after swiping the touchscreen. Specifically, when combining features derived from hand grip and movement we obtain a mean AUC of 0.96 (with false accept and reject rates both below 10%). Our results demonstrate that hand grip and movement provide strong and low latency evidence for mistakes. The ability to detect user errors in this way could be a valuable component in future interaction systems, allowing interfaces to make it easier for users to correct erroneous inputs.
Characterizing How Interface Complexity Affects Children’s Touchscreen Interactions
Most touchscreen devices are not designed specifically with children in mind, and their interfaces often do not optimize interaction for children. Prior work on children and touchscreen interaction has found important patterns, but has only focused on simplified, isolated interactions, whereas most interfaces are more visually complex. We examine how interface complexity might impact children’s touchscreen interactions. We collected touch and gesture data from 30 adults and 30 children (ages 5 to 10) to look for similarities, differences, and effects of interface complexity. Interface complexity affected some touch interactions, primarily related to visual salience, and it did not affect gesture recognition. We also report general differences between children and adults. We provide design recommendations that support the design of touchscreen interfaces specifically tailored towards children of this age.
Smart Touch: Improving Touch Accuracy for People with Motor Impairments with Template Matching
We present two contributions toward improving the accessibility of touch screens for people with motor impairments. First, we provide an exploration of the touch behaviors of 10 people with motor impairments, e.g., we describe how touching with the back or sides of the hand, with multiple fingers, or with knuckles creates varied multi-point touches. Second, we introduce Smart Touch, a novel template-matching technique for touch input that maps any number of arbitrary contact-areas to a user’s intended (x,y) target location. The result is that users with motor impairments can touch however their abilities allow, and Smart Touch will resolve their intended touch point. Smart Touch therefore allows users to touch targets in whichever ways are most comfortable and natural for them. In an experimental evaluation, we found that Smart Touch predicted (x,y) coordinates of the users’ intended target locations over three times closer to the intended target than the native Land-on and Lift-off techniques reported by the built-in touch sensors found in the Microsoft PixelSense interactive tabletop. This result is an important step toward improving touch accuracy for people with motor impairments and others for whom touch screen operation was previously impossible.
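To make the template-matching idea behind Smart Touch concrete, here is a minimal, hypothetical sketch of how arbitrary multi-point contacts could be resolved to an intended target. All function names, the centroid-normalized representation, and the nearest-point dissimilarity measure are illustrative assumptions for exposition, not the paper’s actual algorithm:

```python
import math

# Illustrative sketch (not the paper's algorithm): each template pairs a
# recorded multi-point contact pattern with the offset from that pattern's
# centroid to the user's intended (x, y) target, captured during calibration.

def centroid(points):
    """Mean position of a set of (x, y) contact points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def normalize(points):
    """Translate a contact pattern so its centroid sits at the origin."""
    cx, cy = centroid(points)
    return [(x - cx, y - cy) for x, y in points]

def dissimilarity(a, b):
    """Sum, over points in a, of the distance to the nearest point in b."""
    return sum(min(math.dist(p, q) for q in b) for p in a)

def resolve_touch(contacts, templates):
    """Map an arbitrary set of contact points to an intended (x, y) target.

    templates: list of (contact_pattern, target_offset_from_centroid).
    Picks the stored pattern most similar to the incoming contacts and
    applies its calibrated offset to the incoming centroid.
    """
    norm = normalize(contacts)
    best = min(templates, key=lambda t: dissimilarity(norm, normalize(t[0])))
    cx, cy = centroid(contacts)
    ox, oy = best[1]
    return (cx + ox, cy + oy)
```

The key property this sketch shares with the abstract: the user’s contact geometry can be arbitrary (knuckles, side of hand, multiple fingers), because only similarity to previously calibrated patterns matters, not any assumed fingertip shape.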
Indirect 2D Touch Panning: How Does It Affect Spatial Memory and Navigation Performance?
We present experimental work which explores the effect of touch indirectness on spatial memory and navigation performance in a 2D panning task. In this regard and based on the theory of embodied cognition, prior work has observed performance increases for direct touch input over indirect mouse input. As indirect touch systems gain in importance, we designed an experiment to systematically investigate the effect of spatial indirectness while maintaining the proprioceptive and kinesthetic cues provided by touch input. In an abstract search task, participants of our study navigated a 2D space and were asked to reproduce spatial item configurations in a recall task. Our results indicate that spatial memory performance is not decreased by a spatial separation of touch input gestures and visual display. Further, our results suggest that decreasing the size of the input surface in the indirect condition increases the navigation efficiency.
EyeSwipe: Dwell-free Text Entry Using Gaze Paths
Text entry using gaze-based interaction is a vital communication tool for people with motor impairments. Most solutions require the user to fixate on a key for a given dwell time to select it, thus limiting the typing speed. In this paper we introduce EyeSwipe, a dwell-time-free gaze-typing method. With EyeSwipe, the user gaze-types the first and last characters of a word using the novel selection mechanism “reverse crossing.” To gaze-type the characters in the middle of the word, the user only needs to glance at the vicinity of the respective keys. We compared the performance of EyeSwipe with that of a dwell-time-based virtual keyboard. EyeSwipe afforded statistically significantly higher typing rates and more comfortable interaction in experiments with ten participants who reached 11.7 words per minute (wpm) after 30 min typing with EyeSwipe.
SESSION: VR & Feedback
Annexing Reality: Enabling Opportunistic Use of Everyday Objects as Tangible Proxies in Augmented Reality
Advances in display and tracking technologies hold the promise of increasingly immersive augmented-reality experiences. Unfortunately, the on-demand generation of haptic experiences is lagging behind these advances in other feedback channels. We present Annexing Reality: a system that opportunistically annexes physical objects from a user’s current physical environment to provide the best-available haptic sensation for virtual objects. It allows content creators to specify, a priori, haptic experiences that adapt to the user’s current setting. The system continuously scans the user’s surroundings, selects physical objects that are similar to given virtual objects, and overlays the virtual models onto the selected physical ones, reducing the visual-haptic mismatch. We describe the developer’s experience with the Annexing Reality system and the techniques utilized in realizing it. We also present results of a developer study that validates the usability and utility of our method of defining haptic experiences.
Haptic Retargeting: Dynamic Repurposing of Passive Haptics for Enhanced Virtual Reality Experiences
Manipulating a virtual object with appropriate passive haptic cues provides a satisfying sense of presence in virtual reality. However, scaling such experiences to support multiple virtual objects is a challenge as each one needs to be accompanied with a precisely-located haptic proxy object. We propose a solution that overcomes this limitation by hacking human perception. We have created a framework for repurposing passive haptics, called haptic retargeting, that leverages the dominance of vision when our senses conflict. With haptic retargeting, a single physical prop can provide passive haptics for multiple virtual objects. We introduce three approaches for dynamically aligning physical and virtual objects: world manipulation, body manipulation and a hybrid technique which combines both world and body manipulation. Our study results indicate that all our haptic retargeting techniques improve the sense of presence when compared to typical wand-based 3D control of virtual objects. Furthermore, our hybrid haptic retargeting achieved the highest satisfaction and presence scores while limiting the visible side-effects during interaction.
HaptoClone (Haptic-Optical Clone) for Mutual Tele-Environment by Real-time 3D Image Transfer with Midair Force Feedback
In this paper, we propose a novel interactive system that mutually copies adjacent 3D environments optically and physically. The system realizes mutual user interactions through haptics without wearing any devices. A realistic volumetric image is displayed using a pair of micro-mirror array plates (MMAPs). The MMAP transmissively reflects the rays from an object, and a pair of them reconstructs the floating aerial image of the object. Our system can optically copy adjacent environments based on this technology. Haptic feedback is also given by using an airborne ultrasound tactile display (AUTD). Converged ultrasound can give force feedback in midair. Based on the optical characteristics of the MMAPs, the cloned image and the user share an identical coordinate system. When a user touches the transferred clone image, the system gives force feedback so that the user can feel the mechanical contact and reality of the floating image.
Dexmo: An Inexpensive and Lightweight Mechanical Exoskeleton for Motion Capture and Force Feedback in VR
We present Dexmo: an inexpensive and lightweight mechanical exoskeleton system for motion capture and force feedback in virtual reality applications. Dexmo combines multiple types of sensors, actuation units and link rod structures to provide users with a pleasant virtual reality experience. The device tracks the user’s motion and uniquely provides passive force feedback. In combination with a 3D graphics rendered environment, Dexmo provides the user with a realistic sensation of interaction when, for example, grasping an object. An initial evaluation with 20 participants demonstrates that the device works reliably and that the addition of force feedback resulted in a significant reduction in error rate. Informal comments by the participants were overwhelmingly positive.
SwiVRChair: A Motorized Swivel Chair to Nudge Users’ Orientation for 360 Degree Storytelling in Virtual Reality
We present SwiVRChair, a motorized swivel chair to nudge users’ orientation in 360 degree storytelling scenarios. Since rotating a scene in virtual reality (VR) leads to simulator sickness, storytellers currently have no way of controlling users’ attention. SwiVRChair allows creators of 360 degree VR movie content to rotate or block users’ movement, either to show certain content or to prevent users from seeing something. To enable this functionality, we modified a regular swivel chair using a 24V DC motor and an electromagnetic clutch. We developed two demo scenarios using both mechanisms (rotate and block) for the Samsung GearVR and conducted a user study (n=16) evaluating presence, enjoyment and simulator sickness for participants using SwiVRChair compared to self control (Foot Control). Users rated the experience using SwiVRChair as significantly more immersive and enjoyable while experiencing less simulator sickness.
SESSION: Gamification
Personality-targeted Gamification: A Survey Study on Personality Traits and Motivational Affordances
While motivational affordances are widely used to enhance user engagement in “gamified” apps, they are often employed en masse. Prior research offers little guidance about how individuals with different dispositions may react, positively or negatively, to specific affordances. In this paper, we present a survey study investigating the relationships among individuals’ personality traits and perceived preferences for various motivational affordances used in gamification. Our results show that extraverts tend to be motivated by Points, Levels, and Leaderboards; people with high levels of imagination/openness are less likely to be motivated by Avatars. Negative correlations were found between emotional stability (the inverse of neuroticism) and several motivational affordances, indicating a possible limitation of gamification as an approach for a large segment of the population. Our findings contribute to the HCI community, and in particular to designers of persuasive and gamified apps, by providing design suggestions for targeting specific audiences based on personality.
Gamer Style: Performance Factors in Gamified Simulation
Serious games and gamified simulations are increasingly being used to aid instruction in technical disciplines including the medical field. Assessments of player performance are important in understanding user profiles in order to establish serious games as a reliable, consistent method for increasing skills and competence in all trainees. In this study we used questionnaires, game characteristic metrics and EEG analysis to explore players’ performance in a bronchoscopy simulator. We found that players who performed better were younger, made fewer errors, were quicker and differed in spectral profile during game play. Our findings, while speculative, have implications for training regimes in which gamified simulations are employed. We make suggestions for game design and for tailoring training regimes to suit individual learning styles to enhance knowledge acquisition and retention.
“Don’t Whip Me With Your Games”: Investigating “Bottom-Up” Gamification
In this paper we investigate “bottom-up” gamification, i.e. providing users with the option to gamify an experience on their own. To this end, we review commonly used gamification elements in terms of their suitability for such an approach and present the results of an online questionnaire (N=75) complemented by semi-structured interviews with employees of a manufacturing company (N=8). In a twelve-day-long study (N=20) we investigated the usefulness of a task managing app implementing a “bottom-up” gamification concept. With these studies, we derived requirements “bottom-up” applications should fulfill. The study results reveal that people want to use such an approach and are open to the creation of their own gamified experience, thus suggesting that “bottom-up” can be an alternative to “top-down” gamification often used today.
‘Choose a Game’: Creation and Evaluation of a Prototype Tool to Support Therapists in Brain Injury Rehabilitation
Brain injury (BI) is recognized as a major health issue. It is common for therapists to include commercial-off-the-shelf (COTS) games in their therapies to help motivate patients who have had a BI to engage in rehabilitation tasks. In this paper, we present a prototype ‘Choose a Game’ tool that focuses on helping therapists select appropriate games that match their therapeutic goals and patient attributes. The tool leveraged a knowledge base that we created about COTS games use in BI therapy. We evaluated the prototype through user studies with 29 therapists at two rehabilitation hospitals. While further improvements are needed, the tool enabled therapists to choose from a wider range of games, and therapists were generally satisfied with the game recommendations and the tool’s user experience. This project is also a demonstration of a novel research model for investigating domains where technologies are rapidly proliferating for users with wide-ranging attributes, such as the domain of therapeutic gaming for BI rehabilitation.
SESSION: Displays and Shared Interactions
Negotiating for Space?: Collaborative Work Using a Wall Display with Mouse and Touch Input
Wall-sized displays support group work by allowing several people to work both separately and together. However, whether people interact directly through touch input or indirectly through mouse input can have profound effects on collaboration. We present a study that compares how groups collaborate using either multitouch or multiple mice on a wall display. Participants used both input methods to work on two tasks: a shared-goal task and a mixed-motive task. Results show differences between the two input methods in participants’ awareness during collaborative tasks. The results also help explain the physical constraints that touch input places on participants’ control of actions in collaborative tasks. We discuss how this influences collaboration. Results also show that touch input did not promote more equal participation than mouse input. We contrast the findings with earlier research on wall-display and tabletop collaboration.
An Actionable Approach to Understand Group Experience in Complex, Multi-surface Spaces
There is a steadily growing interest in the design of spaces in which multiple interactive surfaces are present and, in turn, in understanding their role in group activity. However, authentic activities in these multi-surface spaces can be complex. Groups commonly use digital and non-digital artefacts, tools and resources, in varied ways depending on their specific social and epistemic goals. Thus, designing for collaboration in such spaces can be very challenging. Importantly, there is still a lack of agreement on how to approach the analysis of groups’ experiences in these heterogeneous spaces. This paper presents an actionable approach that aims to address the complexity of understanding multi-user multi-surface systems. We provide a structure for applying different analytical tools in terms of four closely related dimensions of user activity: the setting, the tasks, the people and the runtime co-configuration. The applicability of our approach is illustrated with six types of analysis of group activity in a multi-surface design studio.
Shared Interaction on a Wall-Sized Display in a Data Manipulation Task
Wall-sized displays support small groups of users working together on large amounts of data. Observational studies of such settings have shown that users adopt a range of collaboration styles, from loosely to closely coupled. Shared interaction techniques, in which multiple users perform a command collaboratively, have also been introduced to support co-located collaborative work. In this paper, we operationalize five collaborative situations with increasing levels of coupling, and test the effects of providing shared interaction support for a data manipulation task in each situation. The results show the benefits of shared interaction for close collaboration: it encourages collaborative manipulation, it is more efficient and preferred by users, and it reduces physical navigation and fatigue. We also identify the time costs caused by disruption and communication in loose collaboration and analyze the trade-offs between parallelization and close collaboration. These findings inform the design of shared interaction techniques to support collaboration on wall-sized displays.
Creating Your Bubble: Personal Space On and Around Large Public Displays
We describe an empirical study that explores how users establish and use personal space around large public displays (LPDs). Our study complements field studies in this space by more fully characterizing interpersonal distances based on coupling and confirms the use of on-screen territories on vertical displays. Finally, we discuss implications for future research: limitations of proxemics and territoriality, how user range can augment existing theory, and the influence of display size on personal space.
Gaze-based Notetaking for Learning from Lecture Videos
Taking notes has been shown to be helpful for learning. This activity, however, is not well supported when learning from watching lecture videos. The conventional video interface does not allow users to quickly locate and annotate important content in the video as notes. Moreover, users sometimes need to manually pause the video while taking notes, which is often distracting. In this paper, we develop a gaze-based system to assist a user in notetaking while watching lecture videos. Our system has two features to support notetaking. First, our system integrates offline video analysis and online gaze analysis to automatically detect and highlight key content from the lecture video for notetaking. Second, our system provides adaptive video control that automatically reduces the video playback speed or pauses it while a user is taking notes, minimizing the user’s effort in controlling the video. Our study shows that our system enables users to take notes more easily and with better quality than the traditional video interface.
SESSION: Mental Health in Technology Design and Social Media
Discovering Shifts to Suicidal Ideation from Mental Health Content in Social Media
History of mental illness is a major factor behind suicide risk and ideation. However, research efforts toward characterizing and forecasting this risk are limited due to the paucity of information regarding suicidal ideation, exacerbated by the stigma of mental illness. This paper fills gaps in the literature by developing a statistical methodology to infer which individuals could undergo transitions from mental health discourse to suicidal ideation. We utilize semi-anonymous support communities on Reddit as unobtrusive data sources to infer the likelihood of these shifts. We develop language and interactional measures for this purpose, as well as a propensity score matching based statistical approach. Our approach allows us to derive distinct markers of shifts to suicidal ideation. These markers can be modeled in a prediction framework to identify individuals likely to engage in suicidal ideation in the future. We discuss societal and ethical implications of this research.
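The propensity score matching step can be illustrated with a minimal greedy sketch: each user who shifted to suicidal-ideation discourse is paired with the unmatched control user whose estimated propensity score is closest, within a caliper. The names and the greedy caliper strategy below are illustrative assumptions, not the authors' exact procedure:

```python
def match_by_propensity(treated, control, caliper=0.1):
    """Greedy 1:1 nearest-neighbour matching on propensity scores.

    `treated` and `control` map unit ids to estimated propensity
    scores (the modeled probability of belonging to the 'shifted'
    cohort, given observed covariates). Each treated unit is paired
    with the closest unmatched control whose score lies within
    `caliper`; treated units with no eligible control are dropped.
    """
    pairs, used = [], set()
    for t_id, t_score in sorted(treated.items(), key=lambda kv: kv[1]):
        best, best_gap = None, caliper
        for c_id, c_score in control.items():
            gap = abs(t_score - c_score)
            if c_id not in used and gap <= best_gap:
                best, best_gap = c_id, gap
        if best is not None:
            used.add(best)
            pairs.append((t_id, best))
    return pairs
```

The matched pairs then support comparing outcomes between cohorts as if treatment assignment were (approximately) random with respect to the observed covariates.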
Recovery Amid Pro-Anorexia: Analysis of Recovery in Social Media
Online communities can promote illness recovery and improve well-being in the cases of many kinds of illnesses. However, for a challenging mental health condition like anorexia, social media harbor both recovery communities and those that encourage dangerous behaviors. The effectiveness of such platforms in promoting recovery despite housing both communities is underexplored. Our work begins to fill this gap by developing a statistical framework using survival analysis and situating our results within the cognitive behavioral theory of anorexia. This model identifies content and participation measures that predict the likelihood of recovery. From our dataset of over 68M posts and 10K users that self-identify with anorexia, we find that recovery on Tumblr is protracted: only half of the population is estimated to exhibit signs of recovery after four years. We discuss the effectiveness of social media in improving well-being around anorexia, a unique health challenge, and emergent questions from this line of work.
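The survival-analysis framing can be sketched with a plain Kaplan-Meier estimator, where the "event" is the first observed sign of recovery and users who leave the platform without one are censored. This is an assumed, minimal version of the standard estimator, not the paper's full model:

```python
def kaplan_meier(durations, recovered):
    """Kaplan-Meier estimate of the 'not yet recovered' curve.

    `durations[i]` is how long user i was observed (e.g. years on
    the platform); `recovered[i]` is True if signs of recovery were
    seen at that time, False if the user was censored (left the
    platform without them). Returns (time, probability) pairs at
    each event time.
    """
    at_risk = len(durations)
    surv, curve = 1.0, []
    events = sorted(zip(durations, recovered))
    i = 0
    while i < len(events):
        t = events[i][0]
        d = n = 0  # events and total exits at time t
        while i < len(events) and events[i][0] == t:
            n += 1
            d += events[i][1]
            i += 1
        surv *= 1.0 - d / at_risk  # conditional survival at t
        at_risk -= n               # both events and censored leave
        if d:
            curve.append((t, surv))
    return curve
```

A "half the population recovers within four years" style finding corresponds to the time at which this curve first drops to 0.5.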
Health Technologies ‘In the Wild’: Experiences of Engagement with Computerised CBT
The widespread deployment of technology by professional health services will provide a substantial opportunity for studies that consider usage in naturalistic settings. Our study has documented experiences of engaging with technologies intended to support recovery from common mental health problems, often used as a part of a multi-year recovery process. In analyzing this material, we identify issues of broad interest to effective health technology design, and reflect on the challenge of studying engagement with health technologies over lengthy time periods. We also consider the importance of designing technologies that are sensitive to the needs of users experiencing chronic health problems, and discuss how the term sensitivity might be defined in a technology design context.
Challenges for Designing new Technology for Health and Wellbeing in a Complex Mental Healthcare Context
This paper describes the challenges and lessons learned in the experience-centered design (ECD) of the Spheres of Wellbeing, a technology to promote the mental health and wellbeing of a group of women suffering from significant mental health problems and living in a medium secure hospital unit. First, we describe how our relationship with mental health professionals at the hospital, and the aspirations for person-centric care that we shared with them, enabled us, in the design of the Spheres, to innovate outside traditional healthcare procedures. We then provide insights into the challenges presented by the particular care culture and existing services and practices in the secure hospital unit that were revealed through our technology deployment. In discussing these challenges, our design enquiry opens up a space to make sense of the experience of living with complex mental health conditions in highly constrained contexts, within which the deployment of the Spheres becomes an opportunity to think about wellbeing in similar settings.
SESSION: Visual Impairment and Technology
Haptic Wave: A Cross-Modal Interface for Visually Impaired Audio Producers
We present the Haptic Wave, a device that allows cross-modal mapping of digital audio to the haptic domain, intended for use by audio producers/engineers with visual impairments. We describe a series of participatory design activities adapted to non-sighted users, where the act of prototyping facilitates dialog. A series of workshops scoping user needs and testing a technology mock-up and a lo-fidelity prototype fed into the design of a final high-spec prototype. The Haptic Wave was tested in the laboratory, then deployed in real world settings in recording studios and audio production facilities. The cross-modal mapping is kinesthetic and allows the direct manipulation of sound without the translation of an existing visual interface. The research yields insight into working with users with visual impairments, and reframes them as experts in non-visual interfaces for all users.
“I Always Wanted to See the Night Sky”: Blind User Preferences for Sensory Substitution Devices
Sensory Substitution Devices (SSDs) convert visual information into another sensory channel (e.g. sound) to improve the everyday functioning of blind and visually impaired persons (BVIP). However, the range of possible functions and options for translating vision into sound is largely open-ended. To provide constraints on the design of this technology, we interviewed ten BVIPs who were briefly trained in the use of three novel devices that, collectively, showcase a large range of design permutations. The SSDs include the ‘Depth-vOICe,’ ‘Synaestheatre’ and ‘Creole’ that offer high spatial, temporal, and colour resolutions respectively via a variety of sound outputs (electronic tones, instruments, vocals). The participants identified a range of practical concerns in relation to the devices (e.g. curb detection, recognition, mental effort) but also highlighted experiential aspects. This included both curiosity about the visual world (e.g. understanding shades of colour, the shape of cars, seeing the night sky) and the desire for the substituting sound to be responsive to movement of the device and aesthetically engaging.
Linespace: A Sensemaking Platform for the Blind
For visually impaired users, making sense of spatial information is difficult as they have to scan and memorize content before being able to analyze it. Even worse, any update to the displayed content invalidates their spatial memory, which can force them to manually rescan the entire display. Making display contents persist, we argue, is thus the highest priority in designing a sensemaking system for the visually impaired. We present a tactile display system designed with this goal in mind. The foundation of our system is a large tactile display (140x100cm, 23x larger than Hyperbraille), which we achieve by using a 3D printer to print raised lines of filament. The system’s software then trades in this space in order to minimize screen updates. Instead of panning and zooming, for example, our system creates additional views, leaving display contents intact and thus supporting users in preserving their spatial memory. We illustrate our system and its design principles using four spatial applications as examples. We evaluated our system with six blind users. Participants responded favorably to the system and expressed, for example, that having multiple views at the same time was helpful. They also judged the increased expressiveness of lines over the more traditional dots as useful for encoding information.
Tangible Reels: Construction and Exploration of Tangible Maps by Visually Impaired Users
Maps are essential in everyday life, but inherently inaccessible to visually impaired users. They must be transcribed to non-editable tactile graphics, or rendered on very expensive shape changing displays. To tackle these issues, we developed a tangible tabletop interface that enables visually impaired users to build tangible maps on their own, using a new type of physical icon called Tangible Reels. Tangible Reels are composed of a sucker pad that ensures stability, with a retractable reel that renders digital lines tangible. In order to construct a map, audio instructions guide the user to precisely place Tangible Reels onto the table and create links between them. During subsequent exploration, the device provides the names of the points and lines that the user touches. A pre-study confirmed that Tangible Reels are stable and easy to manipulate, and that visually impaired users can understand maps that are built with them. A follow-up experiment validated that the designed system, including non-visual interactions, enables visually impaired participants to quickly build and explore maps of various complexities.
SESSION: What lies beyond? Design and Infrastructure through a Critical Lens
Breaking Down While Building Up: Design and Decline in Emerging Infrastructures
This paper asks what we can learn from the breakdown of systems in order to understand their development. Through ethnographic fieldwork around a large-scale infrastructure development project in the ocean sciences undergoing a scale-down and the threat of further defunding, I highlight four often overlooked components of innovation and development that have important implications for the HCI community. The first debunks the mythical liquidation and restart of Western development’s “fail fast, fail often” mantra by tracing the complexities of breaking down an infrastructure, highlighting that the end of a technology is entrenched in longer-lived social, political and organizational consequences. The second dives deeper into these social consequences, as formalized structures are broken down and new temporary and contingent working orders surface to fill their place. Third, I signal the critical consequences of the thoughtful practices of assessment and evaluation of both human and material resources that occur during the downturn of systems. Last, I discuss the deeply personal connections that amplify through processes of breaking down systems.
Logistics as Care and Control: An Investigation into the UNICEF Supply Division
This paper investigates emerging practices and infrastructures in global humanitarian relief to argue for logistics as an essential but often neglected component of ICTD and broader HCI work. Logistics — the artful coordination of human and material flows — leverages scholarship on “coordination,” “articulation” and “infrastructure” to provide insight into the complex role of new IT systems (and HCI as a field) in the global circulation of goods and relations. Drawing on fieldwork with the UNICEF Supply Division, we argue that contemporary logistics operates simultaneously as a form of care and control. We demonstrate that logisticians at Supply traverse messy and dynamic information and material infrastructures, and that effective logistical work must marry and bridge these worlds. Our work extends ICTD and postcolonial computing research by casting light on the nature, experience and ambivalence of the global flows that enable and support HCI work in development and postcolonial settings.
The Ins and Outs of HCI for Development
We present an empirical analysis of HCI for development (HCI4D), a growing research area aimed at understanding and designing technologies for under-served, under-resourced, and under-represented populations around the world. We first present findings from our survey of 259 HCI4D publications from the past six years and summarize how this research has evolved, with an overview of the geographies it covers, the technologies it targets, and its varied epistemological and methodological underpinnings. We then discuss qualitative findings from interviews we conducted with 11 experienced HCI4D researchers, reflecting on the ground covered so far, including computing and research trends, community-building efforts, and thoughts about ‘development’, as well as challenges that lie ahead and suggestions for future growth and diversification. We conclude by summarizing the contributions our paper makes to HCI researchers inside the HCI4D community as well as those outside of it, with the goal of enriching discussions on how HCI can further benefit populations around the world.
Design(ing) ‘Here’ and ‘There’: Tech Entrepreneurs, Global Markets, and Reflexivity in Design Processes
HCI shapes in important ways dominant notions of what counts as innovation and where (good) design is located. In this paper, we argue for the continuous expansion of the body of critical and reflexive work that asks both researcher and designer to reflect on their values of design in the world. Drawing from ethnographic research in Accra, Ghana and Shenzhen, China, we illustrate how design is as much about making artifacts as it is about producing national identity, reputation, and economic gain. Technology entrepreneurs take from and resist the discourse of their cities as emerging sites of Silicon-Valley type innovation. They render the narrative of “catching up with the west” overly simplistic, ahistorical and blind to situated practices of design. This view, we argue, is critical for interrogating our views of design especially as it becomes more central in the contemporary global economy.
SESSION: Design, Labour and the Invisible Perils of Crowdsourcing
“Why would anybody do this?”: Understanding Older Adults’ Motivations and Challenges in Crowd Work
Diversifying participation in crowd work can benefit both workers and requesters. Increasing numbers of older adults are online, but little is known about their awareness of, or engagement in, mainstream crowd work. Through an online survey with 505 seniors, we found that most have never heard of crowd work but would be motivated to complete tasks by earning money or working on interesting or stimulating tasks. We followed up the survey with interviews and observations of 14 older adults completing crowd work tasks. While our survey data suggests that financial incentives are encouraging, in-depth interviews reveal that a combination of personal and social incentives may be stronger drivers of participation, but only if older adults can overcome accessibility issues and understand the purpose of crowd work. This paper contributes insights into how crowdsourcing sites could better engage seniors and other users.
The Knowledge Accelerator: Big Picture Thinking in Small Pieces
Crowdsourcing offers a powerful new paradigm for online work. However, real world tasks are often interdependent, requiring a big picture view of the different pieces involved. Existing crowdsourcing approaches that support such tasks — ranging from Wikipedia to flash teams — are bottlenecked by relying on a small number of individuals to maintain the big picture. In this paper, we explore the idea that a computational system can scaffold an emerging interdependent, big picture view entirely through the small contributions of individuals, each of whom sees only a part of the whole. To investigate the viability, strengths, and weaknesses of this approach, we instantiate the idea in a prototype system for accomplishing distributed information synthesis and evaluate its output across a variety of topics. We also contribute a set of design patterns that may be informative for other systems aimed at supporting big picture thinking in small pieces.
Taking a HIT: Designing around Rejection, Mistrust, Risk, and Workers’ Experiences in Amazon Mechanical Turk
Online crowd labor markets often address issues of risk and mistrust between employers and employees from the employers’ perspective, but less often from that of employees. Based on 437 comments posted by crowd workers (Turkers) on the Amazon Mechanical Turk (AMT) participation agreement, we identified work rejection as a major risk that Turkers experience. Unfair rejections can result from poorly-designed tasks, unclear instructions, technical errors, and malicious Requesters. Because the AMT policy and platform provide little recourse to Turkers, they adopt strategies to minimize risk: avoiding new and known bad Requesters, sharing information with other Turkers, and choosing low-risk tasks. Through a series of ideas inspired by these findings, including notifying Turkers and Requesters of a broken task, returning rejected work to Turkers for repair, and providing collective dispute resolution mechanisms, we argue that making risk reduction and trust building first-class design goals can lead to solutions that improve outcomes around rejected work for all parties in online labor markets.
SESSION: HCI and Physiological Interactions
Framework for Electroencephalography-based Evaluation of User Experience
Measuring brain activity with electroencephalography (EEG) is mature enough to assess mental states. Combined with existing methods, such a tool can be used to strengthen the understanding of user experience. We contribute a set of methods to continuously estimate the user’s mental workload, attention and recognition of interaction errors during different interaction tasks. We validate these measures in a controlled virtual environment and show how they can be used to compare different interaction techniques or devices, here by comparing a keyboard and a touch-based interface. Thanks to such a framework, EEG becomes a promising method for improving the overall usability of complex computer systems.
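EEG-based workload and attention estimators typically start from spectral band power (e.g. theta at roughly 4-7 Hz and alpha at 8-12 Hz). As a dependency-free illustration of that first step (the paper's actual estimators are not reproduced here), band power can be computed with a direct DFT:

```python
import math

def band_power(signal, fs, lo, hi):
    """Average spectral power of `signal` (sampled at `fs` Hz)
    within the [lo, hi] Hz band, via a direct DFT.

    Slow (O(n^2)) but dependency-free; a real EEG pipeline would
    use an FFT (e.g. numpy.fft) and Welch averaging.
    """
    n = len(signal)
    power, count = 0.0, 0
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo <= freq <= hi:
            re = sum(x * math.cos(-2 * math.pi * k * i / n)
                     for i, x in enumerate(signal))
            im = sum(x * math.sin(-2 * math.pi * k * i / n)
                     for i, x in enumerate(signal))
            power += (re * re + im * im) / n
            count += 1
    return power / count if count else 0.0
```

For a 10 Hz test tone, the alpha-band power dominates the theta-band power, which is the kind of contrast workload indices are built on.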
Intelligent Agents and Networked Buttons Improve Free-Improvised Ensemble Music-Making on Touch-Screens
We present the results of two controlled studies of free-improvised ensemble music-making on touch-screens. In our system, updates to an interface of harmonically-selected pitches are broadcast to every touch-screen in response to either a performer pressing a GUI button, or to interventions from an intelligent agent. In our first study, analysis of survey results and performance data indicated significant effects of the button on performer preference, but of the agent on performance length. In the second follow-up study, a mixed-initiative interface, where the presence of the button was interlaced with agent interventions, was developed to leverage both approaches. Comparison of this mixed-initiative interface with the always-on button-plus-agent condition of the first study demonstrated significant preferences for the former. The different approaches were found to shape the creative interactions that take place. Overall, this research offers evidence that an intelligent agent and a networked GUI both improve aspects of improvised ensemble music-making.
SESSION: In-Air Gesture
M.Gesture: An Acceleration-Based Gesture Authoring System on Multiple Handheld and Wearable Devices
Gesture-based interaction is still underutilized in the mobile context despite the large amount of attention it has been given. Using accelerometers that are widely available in mobile devices, we developed M.Gesture, a software system that supports accelerometer-based gesture authoring on single or multiple mobile devices. The development was based on a formative study that showed users’ preferences for subtle, simple motions and synchronized, multi-device gestures. M.Gesture adopts an acceleration data space and interface components based on a mass-spring analogy, and combines the strengths of both demonstration-based and declarative approaches. Gesture declaration is done by specifying a mass-spring trajectory with planes in the acceleration space. For iterative gesture modification, multi-level feedback is provided as well. The results of evaluative studies showed good usability and higher recognition performance than dynamic time warping for simple gesture authoring. Finally, we discuss the benefits of applying a physical metaphor and a hybrid approach.
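The dynamic time warping baseline mentioned above can be sketched in a few lines. This is the textbook DTW distance on 1-D traces (e.g. accelerometer magnitudes of a gesture template versus a live gesture), not M.Gesture's own recognizer:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences.

    Fills a cumulative-cost table where cell (i, j) holds the best
    alignment cost of a[:i] against b[:j]; each step may repeat a
    sample in either sequence, which makes the match robust to
    differences in gesture speed.
    """
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch b
                                 cost[i][j - 1],      # stretch a
                                 cost[i - 1][j - 1])  # advance both
    return cost[n][m]
```

A template-matching recognizer would compare a live trace against each stored template and pick the gesture with the smallest DTW distance.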
Do That, There: An Interaction Technique for Addressing In-Air Gesture Systems
When users want to interact with an in-air gesture system, they must first address it. This involves finding where to gesture so that their actions can be sensed, and how to direct their input towards that system so that they do not also affect others or cause unwanted effects. This is an important problem which lacks a practical solution. We present an interaction technique which uses multimodal feedback to help users address in-air gesture systems. The feedback tells them how (“do that”) and where (“there”) to gesture, using light, audio and tactile displays. By doing that there, users can direct their input to the system they wish to interact with, in a place where their gestures can be sensed. We discuss the design of our technique and three experiments investigating its use, finding that users can “do that” well (93.2%-99.9%) while accurately (51mm-80mm) and quickly (3.7s) finding “there”.
EMPress: Practical Hand Gesture Classification with Wrist-Mounted EMG and Pressure Sensing
Practical wearable gesture tracking requires that sensors align with existing ergonomic device forms. We show that combining EMG and pressure data sensed only at the wrist can support accurate classification of hand gestures. A pilot study with unintended EMG electrode pressure variability led us to explore the approach in greater depth. The EMPress technique senses both finger movements and rotations around the wrist and forearm, covering a wide range of gestures, with an overall 10-fold cross validation classification accuracy of 96%. We show that EMG is especially suited to sensing finger movements, that pressure is suited to sensing wrist and forearm rotations, and that their combination is significantly more accurate for a range of gestures than either technique alone. The technique is well suited to existing wearable device forms such as smart watches that are already mounted on the wrist.
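The reported accuracy figure comes from 10-fold cross-validation, which can be sketched as follows. The `classify_fn` placeholder stands in for the EMPress classifier, and the fold-assignment scheme here is an illustrative assumption:

```python
def kfold_accuracy(samples, labels, classify_fn, k=10):
    """Mean k-fold cross-validation accuracy (assumes k <= len(samples)).

    `classify_fn(train_x, train_y, test_x)` returns predicted labels
    for `test_x`. Each fold holds out a disjoint subset for testing
    and trains on the rest, so every sample is tested exactly once.
    """
    n = len(samples)
    fold_acc = []
    for f in range(k):
        test_idx = set(range(f, n, k))  # every k-th sample, offset f
        train_x = [samples[i] for i in range(n) if i not in test_idx]
        train_y = [labels[i] for i in range(n) if i not in test_idx]
        test_x = [samples[i] for i in sorted(test_idx)]
        test_y = [labels[i] for i in sorted(test_idx)]
        preds = classify_fn(train_x, train_y, test_x)
        hits = sum(p == t for p, t in zip(preds, test_y))
        fold_acc.append(hits / len(test_y))
    return sum(fold_acc) / k
```

Averaging across held-out folds gives a less optimistic accuracy estimate than testing on the training data itself.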
Skeletons and Silhouettes: Comparing User Representations at a Gesture-based Large Display
Mid-air gestures offer a promising way to interact with large public displays. User representations are important to attract people to such displays, convey interactivity and provide meaningful gesture feedback. We evaluated two forms of user representation, an abstract skeleton and a silhouette, at a large public information display. Results from 56 days, with 190 sessions involving 483 detected people, indicate the silhouette attracted more passers-by to interact and, of these, more engaged in serious browsing interactions. By contrast, the skeleton representation had more playful interactions. Our work contributes to the understanding of the implications of these choices of user representation.
Proactive Sensing for Improving Hand Pose Estimation
We propose a novel sensing technique called proactive sensing. Proactive sensing continually repositions a camera-based sensor as a way to improve hand pose estimation. Our core contribution is a scheme that effectively learns how to move the sensor to improve pose estimation confidence while requiring no ground truth hand poses. We demonstrate this concept using a low-cost rapid swing arm system built around the state-of-the-art commercial sensing system Leap Motion. The results from our user study show that proactive sensing helps estimate users’ hand poses with higher confidence compared to both static and random sensing. We further present an online model update to improve performance for each user.
SESSION: Curation and Algorithms
First I “like” it, then I hide it: Folk Theories of Social Feeds
Many online platforms use curation algorithms that are opaque to the user. Recent work suggests that discovering a filtering algorithm’s existence in a curated feed influences user experience, but it remains unclear how users reason about the operation of these algorithms. In this qualitative laboratory study, researchers interviewed a diverse, non-probability sample of 40 Facebook users before, during, and after being presented alternative displays of Facebook’s News Feed curation algorithm’s output. Interviews revealed 10 “folk theories” of automated curation, some quite unexpected. Users who were given a probe into the algorithm’s operation via an interface that incorporated “seams,” visible hints disclosing aspects of automation operations, could quickly develop theories. Users made plans that depended on their theories. We conclude that foregrounding these automated processes may increase interface design complexity, but it may also add usability benefits.
Accounting for Taste: Ranking Curators and Content in Social Networks
Ranking users in social networks is a well-studied problem, typically solved by algorithms that leverage network structure to identify influential users and recommend people to follow. In the last decade, however, curation — users sharing and promoting content in a network — has become a central social activity, as platforms like Facebook, Twitter, Pinterest, and GitHub drive growth and engagement by connecting users through content and content to users. While existing algorithms reward users that are highly active with higher rankings, they fail to account for users’ curatorial taste. This paper introduces CuRank, an algorithm for ranking users and content in social networks by explicitly modeling three characteristics of a good curator: discerning taste, high activity, and timeliness. We evaluate CuRank on datasets from two popular social networks — GitHub and Vine — and demonstrate its efficacy at ranking content and identifying good curators.
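The three curator characteristics can be made concrete with a toy score. Everything here (the signal definitions, the exponential decay, the multiplicative combination) is an illustrative assumption, not the published CuRank model:

```python
# Hypothetical curator score in the spirit of CuRank's three
# characteristics: discerning taste, high activity, and timeliness.
import math

def curator_score(shares, half_life=30.0):
    """shares: list of (quality, age_in_days) for items the user promoted.
    Taste      = mean quality of promoted items,
    Activity   = log-damped share count,
    Timeliness = mean recency weight with exponential decay."""
    if not shares:
        return 0.0
    taste = sum(q for q, _ in shares) / len(shares)
    activity = math.log1p(len(shares))
    timeliness = sum(0.5 ** (age / half_life) for _, age in shares) / len(shares)
    return taste * activity * timeliness
```

The multiplicative form encodes that a good curator must score on all three axes: being merely prolific (high activity, low taste) or merely early (high timeliness, low activity) is not enough.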
How Much Information?: Effects of Transparency on Trust in an Algorithmic Interface
The rising prevalence of algorithmic interfaces, such as curated feeds in online news, raises new questions for designers, scholars, and critics of media. This work focuses on how transparent design of algorithmic interfaces can promote awareness and foster trust. A two-stage process of how transparency affects trust was hypothesized drawing on theories of information processing and procedural justice. In an online field experiment, three levels of system transparency were tested in the high-stakes context of peer assessment. Individuals whose expectations were violated (by receiving a lower grade than expected) trusted the system less, unless the grading algorithm was made more transparent through explanation. However, providing too much information eroded this trust. Attitudes of individuals whose expectations were met did not vary with transparency. Results are discussed in terms of a dual process model of attitude change and the depth of justification of perceived inconsistency. Designing for trust requires balanced interface transparency – not too little and not too much.
Communities Found by Users — not Algorithms: Comparing Human and Algorithmically Generated Communities
Many algorithms have been created to automatically detect community structures in social networks. These algorithms have been studied extensively from the perspective of optimisation. However, which community finding algorithm most closely matches the human notion of communities? In this paper, we conduct a user study to address this question. In our experiment, users collected their own Facebook network and manually annotated it, indicating their social communities. Given this annotation, we run state-of-the-art community finding algorithms on the network and use Normalised Mutual Information (NMI) to compare annotated communities with automatically detected ones. Our results show that the Infomap algorithm has the greatest similarity to user defined communities, with Girvan-Newman and Louvain algorithms also performing well.
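The NMI comparison at the heart of this evaluation can be computed directly from two label assignments. This sketch assumes hard, non-overlapping community labels and normalises by the geometric mean of the entropies (one of several conventions):

```python
# Normalised Mutual Information (NMI) between two community labelings.
import math
from collections import Counter

def nmi(labels_a, labels_b):
    """NMI between two same-length lists of hard community labels."""
    n = len(labels_a)
    ca, cb = Counter(labels_a), Counter(labels_b)
    joint = Counter(zip(labels_a, labels_b))
    # Mutual information between the two labelings
    mi = sum((nij / n) * math.log((nij * n) / (ca[i] * cb[j]))
             for (i, j), nij in joint.items())
    entropy = lambda c: -sum((m / n) * math.log(m / n) for m in c.values())
    ha, hb = entropy(ca), entropy(cb)
    if ha == 0.0 or hb == 0.0:          # a single-community labeling
        return 1.0 if ha == hb else 0.0
    return mi / math.sqrt(ha * hb)       # geometric-mean normalisation
```

NMI is 1 when the partitions are identical up to relabeling and 0 when they are statistically independent, which is what makes it suitable for comparing annotated and detected communities.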
Hashtag Drift: Tracing the Evolving Uses of Political Hashtags Over Time
Sociologists and political scientists have drawn attention to the importance of hashtags as tools for political organizing and protest online. Examining a set of political hashtags on Tumblr, we discover linguistic changes in the ways that the hashtags are used over time. As time passes, the community uses a more diverse vocabulary of hashtags in conjunction with the central political tag, suggesting that the meaning of the tag expands or proliferates — a phenomenon we dub hashtag drift. We also find evidence that this occurs not just on the level of the community but at the level of individual users who, as time progresses, use the hashtag in conjunction with a more diverse vocabulary of other tags. This study sheds light on the dynamics of political engagement through hashtag activism.
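One simple way to quantify a drifting, diversifying co-tag vocabulary is the Shannon entropy of the tags that co-occur with the central political tag in each time window; rising entropy over windows indicates a broader vocabulary. This metric is an illustrative assumption, not necessarily the paper's exact measure:

```python
# Entropy (in bits) of the tags co-occurring with a central hashtag.
import math
from collections import Counter

def cotag_entropy(posts, central_tag):
    """posts: iterable of tag sets. Returns the Shannon entropy of
    tags that co-occur with central_tag across those posts."""
    co = Counter(t for tags in posts if central_tag in tags
                   for t in tags if t != central_tag)
    total = sum(co.values())
    if total == 0:
        return 0.0
    return -sum((c / total) * math.log2(c / total) for c in co.values())
```

Computed per month, say, this yields a time series per hashtag whose upward trend would be one signature of hashtag drift, both community-wide and per user.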
SESSION: Contextual Awareness
The Impact of the Encoding View in Location-Based Reminders: Improving Prospective Remembering
Proliferation of small computing-enabled devices coupled with the richness of communication infrastructure has made location-based computing a reality. Location-based reminders (LBRs) provide alarms and notices to users based on their location. These are typically used to remind us to complete a to-do task at a particular location. The current design of LBRs fails to take advantage of prospective memory theory. We hypothesize that if the encoding stage provides a closer match to the retrieval stage, then location recognition and task recall in LBRs will improve at retrieval time. We conducted a between subjects experiment and measured performance as location recognition and task recall. We found strong evidence that a first-person view benefits prospective remembering the most. We close with a discussion of the implications for the design of the encoding stage for future LBR systems.
Technology and the Politics of Mobility: Evidence Generation in Accessible Transport Activism
Digital technologies offer the possibility of community empowerment via the reconfiguration of public services. This potential relies on actively involved citizens engaging with decision makers to pursue civic goals. In this paper we study one such group of involved citizens, examining the evidencing practices of a rare disease charity campaigning for accessible public transport. Through fieldwork and interviews, we highlight the ways in which staff and volunteers assembled and presented different forms of evidence, in doing so reframing what is conceived as ‘valid knowledge’. We note the challenges this group faced in capturing experiential knowledge around the accessibility barriers of public transport, and the trade-offs that are made when presenting evidence to policy and decision makers. We offer a number of design considerations for future HCI research, focusing on how digital technology might be configured more appropriately to support campaigning around the politics of mobility.
Supporting Opportunities for Context-Aware Social Matching: An Experience Sampling Study
Mobile social matching systems aim to bring people together in the physical world by recommending people nearby to each other. Going beyond simple similarity and proximity matching mechanisms, we explore a proposed framework of relational, social and personal context as predictors of match opportunities to map out the design space of opportunistic social matching systems. We contribute insights gained from a study combining Experience Sampling Method (ESM) with 85 students of a U.S. university and interviews with 15 of these participants. A generalized linear mixed model analysis (n=1704) showed that personal context (mood and busyness) as well as sociability of others nearby are the strongest predictors of contextual match interest. Participant interviews suggest operationalizing relational context using social network rarity and discoverable rarity, and incorporating skill level and learning/teaching needs for activity partnering. Based on these findings we propose passive context-awareness for opportunistic social matching.
Helping Computers Understand Geographically-Bound Activity Restrictions
The lack of certain types of geographic data prevents the development of location-aware technologies in a number of important domains. One such type of “unmapped” geographic data is space usage rules (SURs), which are defined as geographically-bound activity restrictions (e.g. “no dogs”, “no smoking”, “no fishing”, “no skateboarding”). Researchers in the area of human-computer interaction have recently begun to develop techniques for the automated mapping of SURs with the aim of supporting activity planning systems (e.g. one-touch “Can I Smoke Here?” apps, SUR-aware vacation planning tools). In this paper, we present a novel SUR mapping technique — SPtP — that outperforms state-of-the-art approaches by 30% for one of the most important components of the SUR mapping pipeline: associating a point observation of a SUR (e.g. a ‘no smoking’ sign) with the corresponding polygon in which the SUR applies (e.g. the nearby park or the entire campus on which the sign is located). This paper also contributes a series of new SUR benchmark datasets to help further research in this area.
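A generic building block for the point-to-polygon association step is a point-in-polygon test. This is not the SPtP algorithm itself, only the kind of geometric primitive such a pipeline rests on:

```python
# Ray-casting point-in-polygon test: cast a ray from the point toward
# +x and count how many polygon edges it crosses (odd = inside).

def point_in_polygon(px, py, polygon):
    """polygon: list of (x, y) vertices in order. True if (px, py)
    lies inside the polygon."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the horizontal line y = py?
        if (y1 > py) != (y2 > py):
            # x-coordinate where the edge crosses that line
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside
```

In the SUR setting the harder question, which SPtP addresses, is choosing *which* candidate polygon a sign's point observation should map to (the nearby park versus the entire campus), not merely testing containment.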
SESSION: Distance Still Matters
RAMPARTS: Supporting Sensemaking with Spatially-Aware Mobile Interactions
Synchronous colocated collaborative sensemaking requires that analysts share their information and insights with each other. The challenge is to know when is the right time to share what information without disrupting the present state of analysis. This is crucial in ad-hoc sensemaking sessions with mobile devices because small screen space limits information display. To address these tensions, we propose and evaluate RAMPARTS – a spatially aware sensemaking system for collaborative crime analysis that aims to support faster information sharing, clue-finding, and analysis. We compare RAMPARTS to an interactive tabletop and a paper-based method in a controlled laboratory study. We found that RAMPARTS significantly decreased task completion time compared to paper, without affecting cognitive load or task completion time adversely compared to an interactive tabletop. We conclude that designing for ad-hoc colocated sensemaking on mobile devices could benefit from spatial awareness. In particular, spatial awareness could be used to identify relevant information, support diverse alignment styles for visual comparison, and enable alternative rhythms of sensemaking.
Far but Near or Near but Far?: The Effects of Perceived Distance on the Relationship between Geographic Dispersion and Perceived Diversity
Geographic dispersion has been proposed as one means to promote cooperation and coordination in teams high in perceived diversity. However, research has found mixed support for this assertion. This study proposes that the inclusion of perceived distance helps to explain these mixed results. To test this assertion, we examined 121 teams: 62 collocated and 59 geographically dispersed. Results demonstrate that perceived distance explains when geographic dispersion benefits teams high in perceived diversity. Results also indicate that the type of perceived diversity matters (surface-level vs. deep-level diversity). This study contributes to our understanding of distance and diversity in teams.
Ritual Machines I & II: Making Technology at Home
Changing patterns of both work-related mobility and domestic arrangements mean that ‘mobile workers’ face challenges to support and engage in family life whilst travelling for work. Phatic devices offer some potential to provide connection at a distance alongside existing communications infrastructure. Through a bespoke design process, incorporating phases of design ethnography, critical technical practice and provotyping, we have developed Ritual Machines I and II as material explorations of mobile workers’ lives and practices. In doing this we sought to reflect upon the practices through which families accomplish mobile living, the values they place in technology for doing ‘family’ at a distance, and to draw insights into the potential roles of digital technology in supporting them. We frame the design of our phatic devices in discussion of processes of bespoke design, offer advice on supporting mobile workers when travelling and articulate the values of making a technology at home when designing for domestic and mobile settings.
Office Social: Presentation Interactivity for Nearby Devices
Slide presentations have long been stuck in a one-to-many paradigm, limiting audience engagement. Based on the concept of smartphone-based remote control of slide navigation, we present Office Social, a PowerPoint plugin and companion smartphone app that allows audience members qualified access to slides for personal review and, when the presenter enables it, public control over slide navigation. We studied the longitudinal use of Office Social across four meetings of a workgroup. We found that shared access and regulated control facilitated various forms of public and personal audience engagement. We discuss how enabling ad-hoc aggregation of co-proximate devices reduces ‘interaction costs’ and leads to both opportunities and challenges for presentation situations.
Gazed and Confused: Understanding and Designing Shared Gaze for Remote Collaboration
People utilize eye gaze as an important cue for monitoring attention and coordinating awareness. This study investigates how remote pairs make use of a graphical representation of their partner’s eye-gaze during a tightly-coupled collaborative task. Our results suggest that reproducing shared gaze in a remote collaboration setting makes pairs more accurate when referring to linguistically complex objects by facilitating the production of efficient forms of deictic references. We discuss how the availability of gaze influences coordination strategies and implications for the design of shared gaze in remote collaboration systems.
SESSION: Enabling End-Users and Designers
Using and Exploring Hierarchical Data in Spreadsheets
More and more data nowadays exist in hierarchical formats such as JSON due to the increasing popularity of web applications and web services. While many end-user systems support getting hierarchical data from databases without programming, they provide very little support for using hierarchical data beyond turning the data into a flat string or table. In this paper, we present a spreadsheet tool for using and exploring hierarchical datasets. We introduce novel interaction techniques and algorithms to manipulate and visualize hierarchical data in a spreadsheet using the data’s relative hierarchical relationships with the data in its adjacent columns. Our tool leverages the data’s structural information to support selecting, grouping, joining, sorting and filtering hierarchical data in spreadsheets. Our lab study showed that our tool helped spreadsheet users complete data exploration tasks nearly two times faster than using Excel and even outperform programmers in most tasks.
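The core manipulation, expanding hierarchical (JSON-like) data into flat spreadsheet rows where nested lists fan out into child rows that repeat their parent's values, can be illustrated in miniature. This is a hypothetical sketch of the general idea, not the paper's system, and it assumes lists contain nested objects:

```python
# Flatten one nested record into spreadsheet-style rows: nested dicts
# become dotted column names; lists of objects fan out into rows that
# repeat the parent's values.

def flatten(record, prefix=""):
    """Return a list of flat {column: value} rows for one nested record."""
    rows = [{}]
    for key, value in record.items():
        col = prefix + key
        if isinstance(value, dict):
            sub = flatten(value, col + ".")
            rows = [dict(r, **s) for r in rows for s in sub]
        elif isinstance(value, list):
            # Each list element becomes its own child row (cross product)
            sub = [x for v in value for x in flatten(v, col + ".")]
            rows = [dict(r, **s) for r in rows for s in sub]
        else:
            rows = [dict(r, **{col: value}) for r in rows]
    return rows
```

For example, a person with two pets flattens into two rows sharing the same `name` column, which is the relational view a spreadsheet user can then sort, filter, or group.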
Airways: Optimization-Based Planning of Quadrotor Trajectories according to High-Level User Goals
In this paper we propose a computational design tool that allows end-users to create advanced quadrotor trajectories with a variety of application scenarios in mind. Our algorithm allows novice users to create quadrotor-based use cases without requiring deep knowledge in either quadrotor control or the underlying constraints of the target domain. To achieve this goal we propose an optimization-based method that generates feasible trajectories which can be flown in the real world. Furthermore, the method incorporates high-level human objectives into the planning of flight trajectories. An easy-to-use 3D design tool allows for quick specification and editing of trajectories as well as for intuitive exploration of the resulting solution space. We demonstrate the utility of our approach in several real-world application scenarios, including aerial videography, robotic light-painting and drone racing.
SelPh: Progressive Learning and Support of Manual Photo Color Enhancement
Color enhancement is a very important aspect of photo editing. Even when photographers have tens or hundreds of photographs, they must enhance each photo one by one by manually tweaking sliders, such as brightness and contrast, in photo editing software, because automatic color enhancement is not always satisfactory for them. To support this repetitive manual task, we present self-reinforcing color enhancement, where the system implicitly and progressively learns the user’s preferences by training on their photo editing history. The more photos the user enhances, the more effectively the system supports the user. We present a working prototype system called SelPh, and then describe the algorithms used to perform the self-reinforcement. We conduct a user study to investigate how photographers would use a self-reinforcing system to enhance a collection of photos. The results indicate that the participants were satisfied with the proposed system and strongly agreed that the self-reinforcing approach is preferable to the traditional workflow.
A Live, Multiple-Representation Probabilistic Programming Environment for Novices
We present a live, multiple-representation novice environment for probabilistic programming based on the Infer.NET language. When compared to a text-only editor in a controlled experiment on 16 participants, our system showed a significant reduction in keystrokes during introductory probabilistic programming exercises, and subsequently, a significant improvement in program description and debugging tasks as measured by task time, keystrokes and deletions.
SESSION: Interventions to Design Theory
Dynamics, Multiplicity and Conceptual Blends in HCI
Discussions on what makes user interfaces “natural” or “intuitive” have led researchers to apply Fauconnier and Turner’s theory of Conceptual Blends to explain how users rely on familiar and real-world concepts when they learn to use new digital technologies — as a blend of experiences from the “physical” and the “digital” world. This pursuit has multiple challenges of which we address four: the continuous dynamic development of experiences; the multiplicity and complexity involved; the distinction between “real” and “virtual” experiences; and finally applying descriptive concepts predictively. Based on our background in activity theoretical HCI we discuss two cases to nuance the discussion of conceptual blends and HCI. We provide an understanding of conceptual blends beyond one-to-one static blends, and immediately recognizable concepts. We focus on multiplicity, dynamics and learning, and in that we provide a more advanced methodological scaffolding of analyses of conceptual blends, hence we propose that designers need to seed blends in design.
From Research Prototype to Research Product
Prototypes and prototyping have had a long and important history in the HCI community and have played a highly significant role in creating technology that is easier and more fulfilling to use. Yet, as focus in HCI is expanding to investigate complex matters of human relationships with technology over time in the intimate and contested contexts of everyday life, the notion of a ‘prototype’ may not be fully sufficient to support these kinds of inquiries. We propose the research product as an extension and evolution of the research prototype to support generative inquiries in this emerging research area. We articulate four interrelated qualities of research products: inquiry-driven, finish, fit, and independent. We draw on these qualities to describe and analyze five different yet related design research cases we have collectively conducted over the past six years. We conclude with a discussion of challenges and opportunities for crafting research products and the implications they suggest for future design-oriented HCI research.
Designing Media Architecture: Tools and Approaches for Addressing the Main Design Challenges
Media Architecture is reaching a level of maturity at which we can identify tools and approaches for addressing the main challenges for HCI practitioners working in this field. While previous influential contributions within Media Architecture have identified challenges for designers and offered case studies of specific approaches, here, we (1) provide guidance on how to tackle the domain-specific challenges of Media Architecture design — pertaining to the interface, integration, content, context, process, prototyping, and evaluation — on the basis of the development of numerous installations over the course of seven years, and thorough studies of related work, and (2) present five categories of tools and approaches — software tools, projection, 3D models, hardware prototyping, and evaluation tools — developed to address these challenges in practice, exemplified through six concrete examples from real-life cases.
SESSION: HCI and Gender
An Archive of Their Own: A Case Study of Feminist HCI and Values in Design
Rarely are computing systems developed entirely by members of the communities they serve, particularly when that community is underrepresented in computing. Archive of Our Own (AO3), a fan fiction archive with nearly 750,000 users and over 2 million individual works, was designed and coded primarily by women to meet the needs of the online fandom community. Their design decisions were informed by existing values and norms around issues such as accessibility, inclusivity, and identity. We conducted interviews with 28 users and developers, and with this data we detail the history and design of AO3 using the framework of feminist HCI and focusing on the successful incorporation of values into design. We conclude with considering examples of complexity in values in design work: the use of design to mitigate tensions in values and to influence value formation or change.
Finding Gender-Inclusiveness Software Issues with GenderMag: A Field Investigation
Gender inclusiveness in computing settings is receiving a lot of attention, but one potentially critical factor has mostly been overlooked — software itself. To help close this gap, we recently created GenderMag, a systematic inspection method to enable software practitioners to evaluate their software for issues of gender-inclusiveness. In this paper, we present the first real-world investigation of software practitioners’ ability to identify gender-inclusiveness issues in software they create/maintain using this method. Our investigation was a multiple-case field study of software teams at three major U.S. technology organizations. The results were that, using GenderMag to evaluate software, these software practitioners identified a surprisingly high number of gender-inclusiveness issues: 25% of the software features they evaluated had gender-inclusiveness issues.
HCI and Intimate Care as an Agenda for Change in Women’s Health
Designing for women’s healthcare remains an underexplored area of HCI, particularly outside informational systems for maternal health. Drawing on a case study of a body disruption – urinary incontinence in women – we illustrate the experience of women’s health both from the perspective of the patient and the therapist. We show how knowledge, esteem and agency play crucial roles in both remedial and preventative women’s care practices. In describing these challenges we deliberate on possible futures of women’s health that take advantage of the many advances in design and technology from across the spectrum of HCI research. We show how with some care and courage HCI has the potential to transform women’s experience within this setting.
A Feminist HCI Approach to Designing Postpartum Technologies: “When I first saw a breast pump I was wondering if it was a joke”
In recent years, the CHI community has begun to discuss how HCI research could improve the experience of motherhood. In this paper, we take up the challenge of designing for this complex life phase and present an analysis of data collected from a design process that included over 1,000 mother-submitted ideas to improve the breast pump, a technology that allows mothers around the world to collect and store their breast milk. In addition to presenting a range of ideas to improve this specific technology, we discuss environmental, legal, social, and emotional dimensions of the postpartum period that suggest opportunities for a range of additional supportive technologies. We close with insights linking our findings to ongoing discussions related to Feminist HCI theory, crowdsourcing, and participatory design.
SESSION: Complex Tasks and Learning in Crowdsourcing
Toward a Learning Science for Complex Crowdsourcing Tasks
We explore how crowdworkers can be trained to tackle complex crowdsourcing tasks. We are particularly interested in training novice workers to perform well on solving tasks in situations where the space of strategies is large and workers need to discover and try different strategies to be successful. In a first experiment, we perform a comparison of five different training strategies. For complex web search challenges, we show that providing expert examples is an effective form of training, surpassing other forms of training in nearly all measures of interest. However, such training relies on access to domain expertise, which may be expensive or lacking. Therefore, in a second experiment we study the feasibility of training workers in the absence of domain expertise. We show that having workers validate the work of their peer workers can be even more effective than having them review expert examples if we only present solutions filtered by a threshold length. The results suggest that crowdsourced solutions of peer workers may be harnessed in an automated training pipeline.
Learning From the Crowd: Observational Learning in Crowdsourcing Communities
Crowd work provides solutions to complex problems effectively, efficiently, and at low cost. Previous research showed that feedback, particularly correctness feedback, can help crowd workers improve their performance; yet such feedback, particularly when generated by experts, is costly and difficult to scale. In our research we investigate approaches to facilitating continuous observational learning in crowdsourcing communities. In a study conducted with workers on Amazon Mechanical Turk, we asked workers to complete a set of tasks identifying nutritional composition of different meals. We examined workers’ accuracy gains after being exposed to expert-generated feedback and to two types of peer-generated feedback: direct accuracy assessment with explanations of errors, and a comparison with solutions generated by other workers. The study further confirmed that expert-generated feedback is a powerful mechanism for facilitating learning and leads to significant gains in accuracy. However, the study also showed that comparing one’s own solutions with a variety of solutions suggested by others and their comparative frequencies leads to significant gains in accuracy. This solution is particularly attractive because of its low cost, minimal impact on time and cost of job completion, and high potential for adoption by a variety of crowdsourcing platforms.
Atelier: Repurposing Expert Crowdsourcing Tasks as Micro-internships
Expert crowdsourcing marketplaces have untapped potential to empower workers’ career and skill development. Currently, many workers cannot afford to invest the time and sacrifice the earnings required to learn a new skill, and a lack of experience makes it difficult to get job offers even if they do. In this paper, we seek to lower the threshold to skill development by repurposing existing tasks on the marketplace as mentored, paid, real-world work experiences, which we refer to as micro-internships. We instantiate this idea in Atelier, a micro-internship platform that connects crowd interns with crowd mentors. Atelier guides mentor-intern pairs to break down expert crowdsourcing tasks into milestones, review intermediate output, and problem-solve together. We conducted a field experiment comparing Atelier’s mentorship model to a non-mentored alternative on a real-world programming crowdsourcing task, finding that Atelier helped interns maintain forward progress and absorb best practices.
Supporting Collaborative Writing with Microtasks
This paper presents the MicroWriter, a system that decomposes the task of writing into three types of microtasks to produce a single report: 1) generating ideas, 2) labeling ideas to organize them, and 3) writing paragraphs given a few related ideas. Because each microtask can be completed individually with limited awareness of what has been already done and what others are doing, this decomposition can change the experience of collaborative writing. Prior work has used microtasking to support collaborative writing with unaffiliated crowd workers. To instead study its impact on collaboration among writers with context and investment in the writing project, we asked six groups of co-workers (or 19 people in total) to use the MicroWriter in a synchronous, collocated setting to write a report about a shared work goal. Our observations suggest ways that recent advances in microtasking and crowd work can be used to support collaborative writing within preexisting groups.
SESSION: Game and Design
Designing Brutal Multiplayer Video Games
Non-digital forms of play that allow players to direct brute force directly upon each other, such as martial arts, boxing and full contact team sports, are very popular. However, inter-player brutality has largely been unexplored as a feature of digital gaming. In this paper, we describe the design and study of two multiplayer games that encourage players to use brute force directly against other players. Balance of Power is a tug-of-war style game implemented with Xbox Kinect, while Bundle is a playground-inspired chasing game implemented with smartphones. Two groups of five participants (n=10) played both games while being filmed, and were subsequently interviewed. A thematic analysis identified five key components of the brutal multiplayer video game experience, which inform a set of seven design considerations. This work aims to inspire the design of engaging game experiences based on awareness and enjoyment of our own and others’ physicality.
Thighrim and Calf-Life: A Study of the Conversion of Off-the-Shelf Video Games into Exergames
Exergames are a fun and engaging way to participate in physical activity. Exergame users consistently require new content to maintain interest in the activity. One way to provide users with high quality content with minimal development work is to convert existing off-the-shelf digital games into exergames by using the game’s “modding” interface. To explore the potential of converted exergames for inspiring high exertion levels we performed a conversion on two popular games: Half-Life 2 and The Elder Scrolls V: Skyrim. The conversions were performed in two stages. The first stage mimics existing conversion techniques and a second stage provides added incentive for players to reach higher exertion levels. A study of 18 participants found that the resulting games support anti-sedentary levels of exertion while falling slightly below national recommendations for cardiovascular exercise. Adding exercise to the games did not affect players’ enjoyment.
SESSION: Crowdsourcing and Creation: Large-scale Ideas and Content Production
Enabling Designers to Foresee Which Colors Users Cannot See
Users frequently experience situations in which their ability to differentiate screen colors is affected by a diversity of situations, such as when bright sunlight causes glare, or when monitors are dimly lit. However, designers currently have no way of choosing colors that will be differentiable by users of various demographic backgrounds and abilities and in the wide range of situations where their designs may be viewed. Our goal is to provide designers with insight into the effect of real-world situational lighting conditions on people’s ability to differentiate colors in applications and imagery. We therefore developed an online color differentiation test that includes a survey of situational lighting conditions, verified our test in a lab study, and deployed it in an online environment where we collected data from around 30,000 participants. We then created ColorCheck, an image-processing tool that shows designers the proportion of the population they include (or exclude) by their color choices.
Scaffolding Community Documentary Film Making using Commissioning Templates
Crowdsourced video is now a viable tool with which broadcasters and communities alike can produce authentic, high quality video content. However, the literacy, language, skills and tools to produce a documentary through commissioning content are currently difficult to acquire. We explore opening up the documentary film commissioning process to community contributors by developing a framework which instructs, guides and informs non-professional contributors in capturing the content required for making videos. Through the results of an in-the-wild deployment we discuss how our framework scaffolds content creation, the capture of high quality footage and coordination amongst teams of contributors. We then discuss how this can inform community media creation in the future.
Comparing Different Sensemaking Approaches for Large-Scale Ideation
Large-scale idea generation platforms often expose ideators to previous ideas. However, research suggests people generate better ideas if they see abstracted solution paths (e.g., descriptions of solution approaches generated through human sensemaking) rather than being inundated with all prior ideas. Automated and semi-automated methods can also offer interpretations of earlier ideas. To benefit from sensemaking in practice with limited resources, ideation platform developers need to weigh the cost-quality tradeoffs of different methods for surfacing solution paths. To explore this, we conducted an online study where 245 participants generated ideas for two problems in one of five conditions: 1) no stimuli, 2) exposure to all prior ideas, or solution paths extracted from prior ideas using 3) a fully automated workflow, 4) a hybrid human-machine approach, and 5) a fully manual approach. Contrary to expectations, human-generated paths did not improve ideation (as measured by fluency and breadth of ideation) over simply showing all ideas. Machine-generated paths sometimes significantly improved fluency and breadth of ideation over no ideas (although at some cost to idea quality). These findings suggest that automated sensemaking can improve idea generation, but we need more research to understand the value of human sensemaking for crowd ideation.
Improving Comprehension of Numbers in the News
How many guns are there in the USA? What is the incidence of breast cancer? Is a billion dollar budget cut large or small? Advocates of scientific and civic literacy are concerned with improving how people estimate and comprehend risks, measurements, and frequencies, but relatively little progress has been made in this direction. In this article we describe and test a framework to help people comprehend numerical measurements in everyday settings through simple sentences, termed perspectives, that employ ratios, ranks, and unit changes to make them easier to understand. We use a crowdsourced system to generate perspectives for a wide range of numbers taken from online news articles. We then test the effectiveness of these perspectives in three randomized, online experiments involving over 3,200 participants. We find that perspective clauses substantially improve people’s ability to recall measurements they have read, estimate ones they have not, and detect errors in manipulated measurements. We see this as the first of many steps in leveraging digital platforms to improve numeracy among online readers.
SESSION: Embodied Interaction
Sketching Shape-changing Interfaces: Exploring Vocabulary, Metaphors Use, and Affordances
Shape-changing interfaces allow designers to create user interfaces that physically change shape. However, presently, we lack studies of how such interfaces are designed, as well as what high-level strategies, such as metaphors and affordances, designers use. This paper presents an analysis of sketches made by 21 participants designing either a shape-changing radio or a shape-changing mobile phone. The results exhibit a range of interesting design elements, and the analysis points to a need to further develop or revise existing vocabularies for sketching and analyzing movement. The sketches show a prevalent use of metaphors, such as communicating volume through big-is-on and small-is-off, as well as a lack of conventions. Furthermore, the affordances used were curiously asymmetrical compared to those offered by non-shape-changing interfaces. We conclude by offering implications on how our results can influence future research on shape-changing interfaces.
Understanding Affordance, System State, and Feedback in Shape-Changing Buttons
Research on shape-changing interfaces has explored various technologies, parameters for shape changes, and transformations between shapes. While much is known about how to implement these variations, it is unclear what affordances they provide, how users understand their relation to the underlying system state, and how feedback via shape change is perceived. We investigated this by studying how 15 participants perceived and used 13 shape-changing buttons. The buttons covered several aspects of affordance, system state, and feedback, including invite-to-touch movements, two styles of transition animation, and two actuation technologies. Participants explored and interacted with the buttons while thinking aloud. The results show that affordances are hard to communicate clearly with shape change; while some movements invited actions, others were seen as a malfunction. The best clue as to button state was provided by the position of the button in combination with vibration. Linear transition animation for changes in button state was the best received form of shape-change feedback. We also discuss how these findings can inform the design of shape-changing interfaces more generally.
Materiable: Rendering Dynamic Material Properties in Response to Direct Physical Touch with Shape Changing Interfaces
Shape changing interfaces give physical shapes to digital data so that users can feel and manipulate data with their hands and bodies. However, physical objects in our daily life not only have shape but also various material properties. In this paper, we propose an interaction technique to represent material properties using shape changing interfaces. Specifically, by integrating the multi-modal sensation techniques of haptics, our approach builds a perceptive model for the properties of deformable materials in response to direct manipulation. As a proof-of-concept prototype, we developed preliminary physics algorithms running on pin-based shape displays. The system can create computationally variable properties of deformable materials that are visually and physically perceivable. In our experiments, users identify three deformable material properties (flexibility, elasticity and viscosity) through direct touch interaction with the shape display and its dynamic movements. In this paper, we describe interaction techniques, our implementation, future applications and evaluation on how users differentiate between specific properties of our system. Our research shows that shape changing interfaces can go beyond simply displaying shape, allowing for rich embodied interaction and perceptions of rendered materials with the hands and body.
High-Low Split: Divergent Cognitive Construal Levels Triggered by Digital and Non-digital Platforms
The present research investigated whether digital and non-digital platforms activate differing default levels of cognitive construal. Two initial randomized experiments revealed that individuals who completed the same information processing task on a digital mobile device (a tablet or laptop computer) versus a non-digital platform (a physical print-out) exhibited a lower level of construal, one prioritizing immediate, concrete details over abstract, decontextualized interpretations. This pattern emerged both in digital platform participants’ greater preference for concrete versus abstract descriptions of behaviors as well as superior performance on detail-focused items (and inferior performance on inference-focused items) on a reading comprehension assessment. A pair of final studies found that the likelihood of correctly solving a problem-solving task requiring higher-level “gist” processing was: (1) higher for participants who processed the information for the task on a non-digital versus digital platform and (2) heightened for digital platform participants who had first completed an activity activating an abstract mindset, compared to (equivalent) performance levels exhibited by participants who had either completed no prior activity or completed an activity activating a concrete mindset.
ShapeCanvas: An Exploration of Shape-Changing Content Generation by Members of the Public
Shape-changing displays – visual output surfaces with physically-reconfigurable geometry – provide new challenges for content generation. Content design must incorporate visual elements, physical surface shape, react to user input, and adapt these parameters over time. The addition of the ‘shape channel’ significantly increases the complexity of content design, but provides a powerful platform for novel physical design, animations, and physicalizations. In this work we use ShapeCanvas, a 4×4 grid of large actuated pixels, combined with simple interactions, to explore novice user behavior and interactions for shape-change content design. We deployed ShapeCanvas in a café for two and a half days and observed users generate 21 physical animations. These were categorized into seven categories, with eight derived directly from people’s personal interests. This paper describes these experiences and the generated animations, and provides initial insights into shape-changing content design.
SESSION: Big Data and Local Society
Finding the Way to OSM Mapping Practices: Bounding Large Crisis Datasets for Qualitative Investigation
OpenStreetMap (OSM) is the most widely used volunteer geographic information system. Although it is increasingly relied upon during humanitarian response as the most up-to-date, accurate, or accessible map of affected areas, the behavior of the mappers who contribute to it is not well understood. In this paper, we explore the work practices and interactions of volunteer mappers operating in the high-tempo, high-volume context of disasters. To do this, we built upon and expanded prior network analysis techniques to select high-value portions of the vast OSM data for further qualitative analysis. We then performed detailed content analysis of the identified activity and, where possible, conducted interviews with the participants. This research allowed the identification of seven distinct mapping practices that can be classified according to dimensions of time, space, and interpersonal interaction. Our work represents a baseline for future research about how OSM crisis mapping practices have evolved over time.
Infrastructure in the Wild: What Mapping in Post-Earthquake Nepal Reveals about Infrastructural Emergence
Disasters and their impacts have unavoidable spatial characteristics. As such, maps are necessary and omnipresent features of the information landscapes that surround and support disaster response. Professional and volunteer GIS services are increasingly in demand to support map-based information visualization during crises. This paper investigates the work of mapmakers working on the response to the 2015 Nepal earthquakes. In comparison to prior events, we found significantly more collaboration and spatial data sharing took place between map producers working across humanitarian organizations and parts of the Nepal government. Collaboration between mapping practitioners was supported by a complex and emergent information infrastructure composed of social and technical elements, some of which were brought through experience with prior disaster events, and some which were shaped anew by the availability and acceptance of open data sources. Our research investigates these elements of the spatial information infrastructure in post-earthquake Nepal to consider infrastructural emergence.
Why and How Traffic Safety Cultures Matter when Designing Advisory Traffic Information Systems
With an increased number of both cars and drivers in the world, it is of great importance to design well-functioning driver support systems in order to reduce the number of accidents. Despite the fact that the growing markets can be found in Asia, most advisory traffic information systems (ATIS) are designed for, and adapted to, the western market and its predominant traffic safety cultures (TSCs). However, traffic safety cultures differ between different parts of the world, and this in turn affects how drivers respond to advisory traffic information. In our study, we designed an ATIS to accommodate two different traffic safety cultures. Our findings show that although drivers belonging to both TSCs drove more safely with our ATIS than without, they still responded very differently to it, using it to support their different driving strategies. This implies that the traffic safety culture of the driver cannot be ignored; ATIS designers need to study and understand the TSC they are designing for.
It’s Just My History Isn’t It?: Understanding Smart Journaling Practices
Smart journals are both an emerging class of lifelogging applications and novel digital possessions, which are used to create and curate a personal record of one’s life. Through an in-depth interview study of analogue and digital journaling practices, and by drawing on a wide range of research around ‘technologies of memory’, we address fundamental questions about how people manage and value digital records of the past. Appreciating journaling as deeply idiographic, we map a broad range of user practices and motivations and use this understanding to ground four design considerations: recognizing the motivation to account for one’s life; supporting the authoring of a unique perspective; and finding a place for passive tracking as a chronicle. Finally, we argue that smart journals signal a maturing orientation to issues of digital archiving.
SESSION: Touch Interaction
Expressy: Using a Wrist-worn Inertial Measurement Unit to Add Expressiveness to Touch-based Interactions
Expressiveness, which we define as the extent to which rich and complex intent can be conveyed through action, is a vital aspect of many human interactions. For instance, paint on canvas is said to be an expressive medium, because it affords the artist the ability to convey multifaceted emotional intent through intricate manipulations of a brush. To date, touch devices have failed to offer users a level of expressiveness in their interactions that rivals that experienced by the painter and those completing other skilled physical tasks. We investigate how data about hand movement — provided by a motion sensor, similar to those found in many smart watches or fitness trackers — can be used to expand the expressiveness of touch interactions. We begin by introducing a conceptual model that formalizes a design space of possible expressive touch interactions. We then describe and evaluate Expressy, an approach that uses a wrist-worn inertial measurement unit to detect and classify qualities of touch interaction that extend beyond those offered by today’s typical sensing hardware. We conclude by describing a number of sample applications, which demonstrate the enhanced, expressive interaction capabilities made possible by Expressy.
Partially-indirect Bimanual Input with Gaze, Pen, and Touch for Pan, Zoom, and Ink Interaction
Bimanual pen and touch UIs are mainly based on the direct manipulation paradigm. Alternatively, we propose partially-indirect bimanual input, where direct pen input is used with the dominant hand, and indirect-touch input with the non-dominant hand. As direct and indirect inputs do not overlap, users can interact in the same space without interference. We investigate two indirect-touch techniques combined with direct pen input: the first redirects touches to the user’s gaze position, and the second redirects touches to the pen position. In this paper, we present an empirical user study where we compare both partially-indirect techniques to direct pen and touch input in bimanual pan, zoom, and ink tasks. Our experimental results show that users are comparatively fast with the indirect techniques, but more accurate, as they can dynamically change the zoom-target during indirect zoom gestures. Further, our studies reveal that direct and indirect zoom gestures have distinct characteristics regarding spatial use, gestural use, and bimanual parallelism.
Hammer Time!: A Low-Cost, High Precision, High Accuracy Tool to Measure the Latency of Touchscreen Devices
We report on the Latency Hammer, a low-cost yet high-accuracy and high-precision automated tool that measures the interface latency of touchscreen devices. The Hammer directly measures latency by triggering a capacitive touch event on a device using an electrically actuated touch simulator, and a photo sensor to monitor the screen for a visual response. This allows us to measure the full end-to-end latency of a touchscreen system exactly as it would be experienced by a user. The Hammer does not require human interaction to perform a measurement, enabling the acquisition of large datasets. We present the operating principles of the Hammer, and discuss its design and construction; full design documents are available online. We also present a series of tools and equipment that were built to assess and validate the performance of the Hammer, and demonstrate that it provides reliable latency measurements.
Pre-Touch Sensing for Mobile Interaction
Touchscreens continue to advance including progress towards sensing fingers proximal to the display. We explore this emerging pre-touch modality via a self-capacitance touchscreen that can sense multiple fingers above a mobile device, as well as grip around the screen’s edges. This capability opens up many possibilities for mobile interaction. For example, using pre-touch in an anticipatory role affords an “ad-lib interface” that fades in a different UI, appropriate to the context, as the user approaches one-handed with a thumb, two-handed with an index finger, or even with a pinch or two thumbs. Or we can interpret pre-touch in a retroactive manner that leverages the approach trajectory to discern whether the user made contact with a ballistic vs. a finely-targeted motion. Pre-touch also enables hybrid touch + hover gestures, such as selecting an icon with the thumb while bringing a second finger into range to invoke a context menu at a convenient location. Collectively these techniques illustrate how pre-touch sensing offers an intriguing new back-channel for mobile interaction.
SESSION: Managing Design for Life Disruptions
Transition Resilience with ICTs: ‘Identity Awareness’ in Veteran Re-Integration
This paper reports on a qualitative interview study of ICT use amongst a population undergoing transition following a life disruption. We interviewed 13 veterans who were re-integrating into civil society. Veterans are unique in that they experience several transitions at once; that is, after returning home, they often suffer from PTSD, become homeless, change occupations, etc. Amongst other things, veterans often undergo identity crises caused by the lack of continuity between military and civilian social structures. We show how veterans are resilient through their uses of ICTs when navigating identity crises. We find that they use ICTs to develop identity awareness; that is, they connect with a human infrastructure through which they can develop a “big picture” understanding of unfamiliar rules and norms and receive support when navigating civil society. We discuss our findings and identify implications for design.
Digital Footprints and Changing Networks During Online Identity Transitions
Digital artifacts on social media can challenge individuals during identity transitions, particularly those who prefer to delete, separate from, or hide data that are representative of a past identity. This work investigates concerns and practices reported by transgender people who transitioned while active on Facebook. We analyze open-ended survey responses from 283 participants, highlighting types of data considered problematic when separating oneself from a past identity, and challenges and strategies people engage in when managing personal data in a networked environment. We find that people shape their digital footprints in two ways: by editing the self-presentational data that is representative of a prior identity, and by managing the configuration of people who have access to that self-presentation. We outline the challenging interplay between shifting identities, social networks, and the data that suture them together. We apply these results to a discussion of the complexities of managing and forgetting the digital past.
Legacy Contact: Designing and Implementing Post-mortem Stewardship at Facebook
Post-mortem profiles on social network sites serve as both an archive of the deceased person’s life and a gathering place for friends and loved ones. Many existing systems utilize inheritance as a model for post-mortem data management. However, the social and networked nature of personal data on social media, as well as the memorializing practices in which friends engage, indicate that other approaches are necessary. In this paper, we articulate the design choices made throughout the development of Legacy Contact, a post-mortem data management solution designed and deployed at Facebook. Building on the duties and responsibilities identified by Brubaker et al., we describe how Legacy Contact was designed to honor last requests, provide information surrounding death, preserve the memory of the deceased, and facilitate memorializing practices. We provide details around the design of the Legacy Contact selection process, the functionality provided to legacy contacts after accounts have been memorialized, and changes made to post-mortem profiles.
“PS. I Love You”: Understanding the Impact of Posthumous Digital Messages
A number of digital platforms and services have recently emerged that allow users to create posthumous forms of communication, effectively arranging for the delivery of messages from “beyond the grave”. Despite some evidence of interest and popularity of these services, little is known about how posthumous messages may impact the people who receive them. We present a qualitative study that explores the type of experiences potentially triggered upon receiving such messages. Our findings firstly suggest that posthumous messaging services have the potential to alter the relationship between the bereaved and the deceased, and secondly provide insight into how users make sense of this altered relationship. Through the inference of a set of design considerations for posthumous communication services, we reveal a number of conflicts that are not easily solvable through technological means alone, and which may serve as starting points for further research. Our work extends the growing body of research that is concerned with digital interactions related to death and dying.
SESSION: Civic Tech, Participation and Society
Data and the City
We consider how data is produced and used in cities. We draw on our experiences working with city authorities, along with twenty interviews across four cities to understand the role that data plays in city government. Following the development and deployment of innovative data-driven technology projects in the cities, we look in particular at collaborations around open and crowdsourced data, issues with the politicisation of data, and problems in innovating within the highly regulated public sphere. We discuss what this means for cities, citizens, innovators, and for visions of big data in the smart city as a whole.
Reflections on Deploying Distributed Consultation Technologies with Community Organisations
In recent years there has been an increased focus upon developing platforms for community decision-making, and an awareness of the importance of handing over civic platforms to community organisations to oversee the process of decision-making at a local level. In this paper, we detail fieldwork from working with two community organisations who used our distributed situated devices as part of consultation processes. We focus on some of the mundane and often-untold aspects of this type of work: how questions for consultations were formed, how locations for devices were determined, and the ways in which the data collected fed into decision-making processes. We highlight a number of challenges for HCI and civic technology research going forward, related to the role of the researcher, the messiness of decision making in communities, and the ability of community organisations to influence how citizens participate in democratic processes.
Re-Making Places: HCI, ‘Community Building’ and Change
We present insights from an extended engagement and design intervention at an urban regeneration site in SE London. We describe the process of designing a walking trail and system for recording and playing back place-specific stories for those living and working on the housing estate, and show how this is set within a wider context of urban renewal, social/affordable housing and “community building”. Like prior work, the research reveals the frictions that arise in participatory engagements with heterogeneous actors. Here we illustrate how material interventions can re-arrange existing spatial configurations, making productive the plurality of accounts intrinsic in community life. Through this, we provide an orientation to HCI and design interventions that are concerned with civic engagement and participation in processes of making places.
Data, Design and Civics: An Exploratory Study of Civic Tech
Civic technology, or civic tech, encompasses a rich body of work, inside and outside HCI, around how we shape technology for, and in turn how technology shapes, how we govern, organize, serve, and identify matters of concern for communities. This study builds on previous work by investigating how civic leaders in a large US city conceptualize civic tech, in particular, how they approach the intersection of data, design and civics. We encountered a range of overlapping voices, from providers, to connectors, to volunteers of civic services and resources. Through this account, we identified different conceptions and expectations of data, design and civics, as well as several shared issues around pressing problems and strategic aspirations. Reflecting on this set of issues produced guiding questions, in particular about the current and possible roles for design, to advance civic tech.
SESSION: Players’ Motivations in Games
Fostering Intrinsic Motivation through Avatar Identification in Digital Games
Fostering intrinsic motivation with interactive applications can increase the enjoyment that people experience when using technology, but can also translate into more invested effort. We propose that identifying with an avatar in a game will increase the intrinsic motivation of the player. We analyzed data from 126 participants playing a custom endless runner game and show that similarity identification, embodied identification, and wishful identification increase autonomy, immersion, invested effort, enjoyment, and positive affect. We also show that greater identification translates into motivated behaviour as operationalized by the time that players spent in an unending version of the endless runner. Important for the design of games for entertainment and serious purposes, we discuss how identification with an avatar can be facilitated to cultivate intrinsic motivation within and beyond games.
Negative Emotion, Positive Experience?: Emotionally Moving Moments in Digital Games
Emotions are key to the player experience (PX) and interest in the potential of games to provide unique emotional, sometimes uncomfortable experiences is growing. Yet there has been little empirical investigation of what game experiences players consider emotionally moving, their causes and effects, and whether players find these experiences rewarding at all. We analyzed 121 players’ accounts of emotionally moving game experiences in terms of the feelings and thoughts they evoked, different PX constructs, as well as game-related and personal factors contributing to these. We found that most players enjoyed and appreciated experiencing negatively valenced emotions, such as sadness. Emotions were evoked by a variety of interactive and non-interactive game aspects, such as in-game loss, character attachment and (lack of) agency, but also personal memories, and were often accompanied by (self-)reflection. Our findings highlight the potential of games to provide emotionally rewarding and thought-provoking experiences, as well as outline opportunities for future research and design of such experiences. They also showcase that negative affect may contribute to enjoyment, thereby extending our notion of positive player experience.
The Effects of Social Exclusion on Play Experience and Hostile Cognitions in Digital Games
The social nature of multiplayer games provides compelling play experiences that are dynamic, unpredictable, and satisfying; however, playing digital games with others can result in feeling socially excluded. There are several known harmful effects of ostracism, including on cognition and the interpretation of social information. To investigate the effects of social exclusion in the context of a multiplayer game, we developed and validated a social exclusion paradigm that we embedded in an online game. Called Operator Challenge, our paradigm influenced feelings of social exclusion and access to hostile cognitions (measured through a word-completion task). In addition, the degree of experienced belonging predicted player enjoyment, effort, and the number of hostile words completed; however, the experience measures did not mediate the relationship between belonging and access to hostile cognitions. Our work facilitates understanding the causes and effects of exclusion, which is important for the study of player experience in multiplayer games.
Designing Closeness to Increase Gamers’ Performance
Designers often make use of social comparisons to motivate people to perform better. In this paper, we present the concept of closeness to comparison to improve the efficacy of social comparison feedback. Specifically, we test two design strategies related to closeness: (1) comparing users to a target described as a similarly experienced player and (2) adjusting the visual representation of performance so player scores appear closer to the comparison target. We evaluate the effects of these strategies for social comparison on player performance in an online game. In a controlled experiment with 425 participants, both feedback techniques improved game performance, but only for experienced players. We conclude with design implications for helping designers create social comparisons that motivate higher game performance.
SESSION: Workplace Social Performance
What is Your Organization ‘Like’?: A Study of Liking Activity in the Enterprise
The ‘like’ button, introduced by Facebook several years ago, has become one of the most prominent icons of social media. Like other popular social media features on the web, it has recently been adopted by enterprises. In this paper, we present a first comprehensive study of liking activity in the enterprise. We studied the logs of an enterprise social media platform within a large global organization over a period of seven months, in which 393,720 ‘likes’ were performed. In addition, we conducted a survey of 571 users of the platform’s ‘like’ button. Our evaluation combines quantitative and qualitative analysis to inspect what employees like, why they use the ‘like’ button, and to whom they give their ‘likes’.
Find an Expert: Designing Expert Selection Interfaces for Formal Help-Giving
A critical aspect of formal help-giving tasks in the enterprise is finding the right expert. We built and evaluated a tool, Find an Expert, to examine what the most important expert selection criteria are for help-seekers and how to represent them in expert selection interfaces for formal help-giving tasks. We observed users’ expert selection decisions and found that the diversity of topic expertise and the amount of related experience were significant factors in helping users decide which expert to contact. Through self-reported data from users, we found that in addition to expertise and experience, expert accessibility indicators, like online availability and language proficiency, were considered important criteria for selecting experts. Finally, publicly-displayed crowdsourced ratings of experts, while deemed useful indicators of expert quality by help-seekers, raised concerns for experts. We conclude with suggestions regarding the design of expert selection interfaces for formal help-giving tasks.
The Role of ICT in Office Work Breaks
Break activities — deliberate and unexpected — are common throughout the working day, playing an important role in the wellbeing of workers. This paper investigates the role of increasingly pervasive ICT in creating new opportunities for breaks at work, what impact the technology has on management of boundaries at work, and the effects these changes have on personal wellbeing. We present a study of the routines of office-workers, where we used images from participants’ work-days to prompt and contextualize interviews with them. Analysis of coded photographs and interview data makes three contributions: an account of ubiquitous ICT creating new forms of micro-breaks, including the opportunity to employ previously wasted time; a description of the ways in which staff increasingly bring “home to work”; and a discussion of the emergence of “screen guilt”. We evaluate our findings in relation to previous studies, and offer three research implications and questions for future work in this domain.
Let’s Stitch Me and You Together!: Designing a Photo Co-creation Activity to Stimulate Playfulness in the Workplace
We present a photo co-creation activity, called “Stitched Groupies,” in a photo-taking and sharing platform deployed inside IBM. “Stitched Groupies” allows employees to take and combine photos with peers asynchronously across physical boundaries. In a 25-day exploratory field study with 50 users taking 68 half-photos (of which 52 were completed by others), we categorized themes such as Spliced Faces, Composed Scenes, Body Modifications, Inanimate Objects and Doppelgangers. Our results suggest that photo co-creation can stimulate playfulness and fun in the workplace.
SESSION: Patients’ Participation in Online and Offline Settings
The Quantified Patient in the Doctor’s Office: Challenges & Opportunities
While the Quantified Self and personal informatics fields have focused on the individual’s use of self-logged data about themselves, the same kinds of data could, in theory, be used to improve diagnosis and care planning. In this paper, we seek to understand both the opportunities and bottlenecks in the use of self-logged data for differential diagnosis and care planning during patient visits to both primary and secondary care. We first conducted a literature review to identify potential factors influencing the use of self-logged data in clinical settings. This informed the design of our experiment, in which we applied a vignette-based role-play approach with general practitioners and hospital specialists in the US and UK, to elicit reflections on and insights about using patient self-logged data. Our analysis reveals multiple opportunities for the use of self-logged data in the differential diagnosis workflow, identifying capture, representational, and interpretational challenges that are potentially preventing self-logged data from being effectively interpreted and applied by clinicians to derive a patient’s prognosis and plan of care.
Breaking the Sound Barrier: Designing for Patient Participation in Audiological Consultations
This paper explores how interactive technology can help overcome barriers to active patient participation in audiological consultations involving hearing aid tuning. We describe the design and evaluation of a prototype sound simulator intended to trigger reflection in patients regarding their hearing experiences, and help guide the tuning process. The prototype was tested in twelve consultations. Our findings suggest that it helped facilitate patient participation during the tuning process by: (1) encouraging an iterative, patient-driven approach; (2) stimulating context-specific feedback and follow-up questions; (3) helping patients make sense of medical information and treatment actions; (4) offering patient control over the process pace and what situations to optimize for; and (5) promoting reflections on daily hearing aid use. Post-consultation interviews revealed that the prototype was perceived as useful in several ways. Our results highlight the benefit of flexible designs that can be appropriated to fit the spontaneous needs of patients and audiologists.
Who’s the Doctor?: Physicians’ Perception of Internet Informed Patients in India
Internet health information seeking can potentially alter physician-patient interactions, which in turn can influence healthcare delivery. Investigating physicians’ perceptions about internet-informed patients is important for understanding this phenomenon in countries like India, where this is a relatively recent trend. We conducted a qualitative study to this effect, conceptualizing internet health information access as a disintermediation process, and examining this phenomenon through the dimensions of meanings ascribed, power dynamics and social norms. We found that physicians’ perceptions about internet-informed patients and their interactions with these patients were largely adversarial. However, some physicians viewed the phenomenon as inevitable. They developed methods that leveraged patients’ internet access for the purpose of increasing patient awareness and self-efficacy. We conceptualize this new role of physicians as apomediation, and present recommendations for design and implementation of health information platforms in countries such as India, where power dynamics form a salient part of physician-patient interactions.
“Not Just a Receiver”: Understanding Patient Behavior in the Hospital Environment
Patient engagement leads to better health outcomes and experiences of health care. However, existing patient engagement systems in the hospital environment focus on the passive receipt of information by patients rather than the active contribution of the patient or caregiver as a partner in their care. Through interviews with hospitalized patients and their caregivers, we identify ways that patients and caregivers actively participate in their care. We describe the different roles patients and caregivers assume in interacting with their hospital care team. We then discuss how systems designed to support patient engagement in the hospital setting can promote active participation and help patients achieve better outcomes.
SESSION: User Experience and Performance
The Impact of Causal Attributions on System Evaluation in Usability Tests
Causal Attribution research deals with the explanations people find in situations of success and failure for why things happened the way they did, and the extent of control they feel they have over the situation. Attributing success and failure differently has an impact on our emotions, our motivation, and our behavior. However, so far research on computer-related attributions has not answered the question of whether different attribution patterns influence system evaluation in usability tests. This question formed the basis for our investigation. Two standardized questionnaires were used to measure users’ attribution patterns and users’ system evaluations. The usability tests were conducted in our laboratory with N=51 participants. Overall, our results suggest that there are notable influences of users’ attribution patterns on their evaluation of system quality, especially in situations of success.
Personality of Interaction: Expressing Brand Personalities Through Interaction Aesthetics
Practicing designers must usually relate to branding in some manner. A designed artifact must support the brand in a constructive way and help establish positive brand experiences, which in turn have strategic value for the brand’s institution. While there is obvious application of visual branding knowledge to the visual form of interactive artifacts, interviews with expert practitioners reveal a lack of systematic means to craft an interaction aesthetic to support a brand. Our empirical study relates attributes of interactive experience to that of ‘brand personality’, a common way of quantifying how a brand should be perceived. We show that particular attributes of interactivity, such as whether an interaction has a continuous rather than discrete flow, are related to particular brand traits. Our empirical results establish a clear commercial significance for deeper, systematic ways of analyzing and critiquing interactive experiences.
Somaesthetic Appreciation Design
We propose a strong concept we name Somaesthetic Appreciation, based on three different enquiries. First, our own autobiographical design enquiry, which used Feldenkrais as a resource in our design process and brought out the Soma Carpet and Breathing Light applications. Second, bringing in others to experience our systems, engaging with and qualitatively analysing their experiences of our applications. In our third enquiry, we try to pin down what characterises and sets Somaesthetic Appreciation designs apart by comparing them with and analysing others’ design inquiries, as well as grounding them in somaesthetic theories. We propose that Somaesthetic Appreciation designs share four qualities: a subtleness in how they encourage and spur bodily inquiry through their choice of interaction modalities; an intimate correspondence, i.e., feedback and interactions that follow the rhythm of the body; a distinct manner of making space, shutting out the outside world both metaphorically and literally to allow users to turn their attention inwards; and a reliance on articulation of bodily experiences to encourage learning and increased somatic awareness.
SESSION: Microtasks and Crowdsourcing
Chain Reactions: The Impact of Order on Microtask Chains
Microtasks are small units of work designed to be completed individually, eventually contributing to a larger goal. Although microtasks can be performed in isolation, in practice people often complete a chain of microtasks within a single session. Through a series of crowd-based studies, we look at how various microtasks can be chained together to improve efficiency and minimize mental demand, focusing on the writing domain. We find that participants completed low-complexity microtasks faster when they were preceded by the same type of microtask, whereas they found high-complexity microtasks less mentally demanding when preceded by microtasks on the same content. Furthermore, participants were faster at starting high-complexity microtasks after completing lower-complexity microtasks, but completion time and quality were not affected. These findings provide insight into how microtasks can be ordered to optimize transitions from one microtask to another.
How One Microtask Affects Another
Microtask platforms are becoming commonplace tools for performing human research, producing gold-standard data, and annotating large datasets. These platforms connect requesters (researchers or companies) with large populations (crowds) of workers, who perform small tasks, typically taking less than five minutes each. A topic of ongoing research concerns the design of tasks that elicit high quality annotations. Here we identify a seemingly banal feature of nearly all crowdsourcing workflows that profoundly impacts workers’ responses. Microtask assignments typically consist of a sequence of tasks sharing a common format (e.g., circle galaxies in an image). Using image-labeling, a canonical microtask format, we show that earlier tasks can have a strong influence on responses to later tasks, shifting the distribution of future responses by 30-50% (total variational distance). Specifically, prior tasks influence the content that workers focus on, as well as the richness and specialization of responses. We call this phenomenon intertask effects. We compare intertask effects to framing, effected by stating the requester’s research interest, and find that intertask effects are on par or stronger. If uncontrolled, intertask effects could be a source of systematic bias, but our results suggest that, with appropriate task design, they might be leveraged to hone worker focus and acuity, helping to elicit reproducible, expert-level judgments. Intertask effects are a crucial aspect of human computation that should be considered in the design of any crowdsourced study.
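The 30-50% shift reported above is measured in total variational distance between the response distributions of worker groups given different prior tasks. As a hedged illustration only (this is not the authors’ code, and the labels below are made up), the metric can be computed from two sets of categorical responses like this:

```python
# Illustrative sketch: total variational distance between two empirical
# label distributions, the metric used to quantify intertask effects.

from collections import Counter

def total_variational_distance(labels_a, labels_b):
    """TVD = 1/2 * sum over the label support of |P_a(x) - P_b(x)|."""
    p, q = Counter(labels_a), Counter(labels_b)
    n_a, n_b = len(labels_a), len(labels_b)
    support = set(p) | set(q)
    return 0.5 * sum(abs(p[x] / n_a - q[x] / n_b) for x in support)

# Hypothetical responses from two worker groups primed by different prior tasks:
group_1 = ["galaxy", "galaxy", "star", "galaxy"]
group_2 = ["star", "star", "galaxy", "star"]
print(total_variational_distance(group_1, group_2))  # 0.5
```

A TVD of 0 means the two groups answered identically in distribution; 1 means their responses never overlap, so a 0.3-0.5 shift is a substantial bias.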
Embracing Error to Enable Rapid Crowdsourcing
Microtask crowdsourcing has enabled dataset advances in social science and machine learning, but existing crowdsourcing schemes are too expensive to scale up with the expanding volume of data. To scale and widen the applicability of crowdsourcing, we present a technique that produces extremely rapid judgments for binary and categorical labels. Rather than punishing all errors, which causes workers to proceed slowly and deliberately, our technique speeds up workers’ judgments to the point where errors are acceptable and even expected. We demonstrate that it is possible to rectify these errors by randomizing task order and modeling response latency. We evaluate our technique on a breadth of common labeling tasks such as image verification, word similarity, sentiment analysis and topic classification. Where prior work typically achieves a 0.25x to 1x speedup over fixed majority vote, our approach often achieves an order of magnitude (10x) speedup.
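The baseline the paper compares against is fixed majority vote over redundant worker judgments. A minimal sketch of that baseline (not the paper’s latency-modeling method, and with hypothetical labels) looks like this:

```python
# Illustrative baseline: aggregate noisy categorical labels from several
# workers by taking the most common answer (fixed majority vote).

from collections import Counter

def majority_vote(worker_labels):
    """Return the most frequent label among redundant worker judgments."""
    return Counter(worker_labels).most_common(1)[0][0]

# Hypothetical judgments from five workers on one image-verification task:
print(majority_vote(["dog", "dog", "cat", "dog", "cat"]))  # dog
```

The paper’s contribution is to replace slow, deliberate judgments feeding such a vote with rapid, error-tolerant ones, then recover accuracy by randomizing task order and modeling response latency.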
Alloy: Clustering with Crowds and Computation
Crowdsourced clustering approaches present a promising way to harness deep semantic knowledge for clustering complex information. However, existing approaches have difficulties supporting the global context needed for workers to generate meaningful categories, and are costly because all items require human judgments. We introduce Alloy, a hybrid approach that combines the richness of human judgments with the power of machine algorithms. Alloy supports greater global context through a new “sample and search” crowd pattern which changes the crowd’s task from classifying a fixed subset of items to actively sampling and querying the entire dataset. It also improves efficiency through a two phase process in which crowds provide examples to help a machine cluster the head of the distribution, then classify low-confidence examples in the tail. To accomplish this, Alloy introduces a modular “cast and gather” approach which leverages a machine learning backbone to stitch together different types of judgment tasks.
SESSION: Software and Programming Tools
Towards Providing On-Demand Expert Support for Software Developers
Software development is an expert task that requires complex reasoning and the ability to recall language or API-specific details. In practice, developers often seek support from IDE tools, Web resources, or other developers to help fill in gaps in their knowledge on-demand. In this paper, we present two studies that seek to inform the design of future systems that use remote experts to support developers on demand. The first explores what types of questions developers would ask a hypothetical assistant capable of answering any question they pose. The second study explores the interactions between developers and remote experts in supporting roles. Our results suggest eight key system features needed for on-demand remote developer assistants to be effective, which has implications for future human-powered development tools.
The Social Side of Software Platform Ecosystems
Software ecosystems as a paradigm for large-scale software development encompass a complex mix of technical, business, and social aspects. While significant research has been conducted to understand both the technical and business aspects, the social aspects of software ecosystems are less well understood. To close this gap, this paper presents the results of an empirical study aimed at understanding the influence of social aspects on developers’ participation in software ecosystems. We conducted 25 interviews with mobile software developers and an online survey with 83 respondents from the mobile software development community. Our results point out a complex social system based on continued interaction and mutual support between different actors, including developers, friends, end users, developers from large companies, and online communities. These findings highlight the importance of social aspects in the sustainability of software ecosystems both during the initial adoption phase as well as for long-term permanence of developers.
Tales of Software Updates: The Process of Updating Software
Updates alter the way software functions by fixing bugs, changing features, and modifying the user interface. Sometimes changes are welcome, even anticipated, and sometimes they are unwanted, leading users to avoid potentially unwanted updates. If users delay or do not install updates, it can have serious security implications for their computer. Updates are one of the primary mechanisms for correcting discovered vulnerabilities; when a user does not update, they remain vulnerable to an increasing number of attacks. In this work, we detail the process users go through when updating their software, including both the positive and negative issues they experience. We asked 307 survey respondents to provide two contrasting software update stories. Using content analysis, we analysed the stories and found that users go through six stages while updating: awareness, deciding to update, preparation, installation, troubleshooting, and post state. We further detail the issues respondents experienced during each stage and their impact on respondents’ willingness to update.
Trigger-Action Programming in the Wild: An Analysis of 200,000 IFTTT Recipes
While researchers have long investigated end-user programming using a trigger-action (if-then) model, the website IFTTT is among the first instances of this paradigm being used on a large scale. To understand what IFTTT users are creating, we scraped the 224,590 programs shared publicly on IFTTT as of September 2015 and are releasing this dataset to spur future research. We characterize aspects of these programs and the IFTTT ecosystem over time. We find a large number of users are crafting a diverse set of end-user programs—over 100,000 different users have shared programs. These programs represent a very broad array of connections that appear to fill gaps in functionality, yet users often duplicate others’ programs.
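The trigger-action model the paper analyzes pairs a single condition with a single action. As an illustrative sketch only (IFTTT’s actual recipe format and channel APIs are not shown, and the recipe below is hypothetical), the model can be captured in a few lines:

```python
# Minimal sketch of the trigger-action (if-then) end-user programming model.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Recipe:
    name: str
    trigger: Callable[[Dict], bool]  # predicate over an incoming event
    action: Callable[[Dict], str]    # effect to run when the trigger fires

def run_recipes(recipes: List[Recipe], event: Dict) -> List[str]:
    """Fire every recipe whose trigger matches the event."""
    return [r.action(event) for r in recipes if r.trigger(event)]

# Hypothetical recipe: "IF a photo is posted THEN save it to cloud storage".
save_photos = Recipe(
    name="archive-photos",
    trigger=lambda e: e.get("type") == "photo_posted",
    action=lambda e: f"saved {e['url']}",
)

print(run_recipes([save_photos], {"type": "photo_posted", "url": "img.jpg"}))
# ['saved img.jpg']
```

Even this toy version shows why duplicated programs are common: two users connecting the same trigger and action produce functionally identical recipes.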
Using Runtime Traces to Improve Documentation and Unit Test Authoring for Dynamic Languages
Documentation and unit tests increase software maintainability, but real world software projects rarely have adequate coverage. We hypothesize that, in part, this is because existing authoring tools require developers to adjust their workflows significantly. To study whether improved interaction design could affect unit testing and documentation practice, we created an authoring support tool called Vesta. The main insight guiding Vesta’s interaction design is that developers frequently manually test the software they are building. We propose leveraging runtime information from these manual executions. Because developers naturally exercise the part of the code on which they are currently working, this information will be highly relevant to appropriate documentation and testing tasks. In a complex coding task, nearly all documentation created using Vesta was accurate, compared to only 60% of documentation created without Vesta, and Vesta was able to generate significant portions of all tests, even those written manually by developers without Vesta.
SESSION: Did You Feel the Vibration?: Haptic Feedback Everywhere
Cross-Field Aerial Haptics: Rendering Haptic Feedback in Air with Light and Acoustic Fields
We present a new method of rendering aerial haptic images that uses femtosecond-laser light fields and ultrasonic acoustic fields. In conventional research, a single physical quantity has been used to render aerial haptic images. In contrast, our method combines multiple fields (light and acoustic fields) at the same time. While these fields have no direct interference, combining them provides benefits such as multi-resolution haptic images and a synergistic effect on haptic perception. We conducted user studies with laser haptics and ultrasonic haptics separately and tested their superposition. The results showed that the acoustic field affects the tactile perception of the laser haptics. We explored augmented reality/virtual reality (AR/VR) applications that provide haptic feedback through the combination of these two methods. We believe that the results of this study contribute to the exploration of laser haptic displays and expand the expression of aerial haptic displays based on other principles.
HapTurk: Crowdsourcing Affective Ratings of Vibrotactile Icons
Vibrotactile (VT) display is becoming a standard component of informative user experience, where notifications and feedback must convey information eyes-free. However, effective design is hindered by incomplete understanding of relevant perceptual qualities, together with the need for user feedback to be accessed in-situ. To access evaluation streamlining now common in visual design, we introduce proxy modalities as a way to crowdsource VT sensations by reliably communicating high-level features through a crowd-accessible channel. We investigate two proxy modalities to represent a high-fidelity tactor: a new VT visualization, and low-fidelity vibratory translations playable on commodity smartphones. We translated 10 high-fidelity vibrations into both modalities, and in two user studies found that both proxy modalities can communicate affective features, and are consistent when deployed remotely over Mechanical Turk. We analyze fit of features to modalities, and suggest future improvements.
ActiVibe: Design and Evaluation of Vibrations for Progress Monitoring
Smartwatches and activity trackers are becoming prevalent, providing information about health and fitness, and offering personalized progress monitoring. These wearable devices often offer multimodal feedback with embedded visual, audio, and vibrotactile displays. Vibrations are particularly useful when providing discreet feedback, without users having to look at a display or anyone else noticing, thus preserving the flow of the primary activity. Yet, current use of vibrations is limited to basic patterns, since representing more complex information with a single actuator is challenging. Moreover, it is unclear how much the user’s current physical activity may interfere with their understanding of the vibrations. We address both issues through the design and evaluation of ActiVibe, a set of vibrotactile icons designed to represent progress through the values 1 to 10. We demonstrate a recognition rate of over 96% in a laboratory setting using a commercial smartwatch. ActiVibe was also evaluated in situ with 22 participants for a 28-day period. We show that the recognition rate is 88.7% in the wild and give a list of factors that affect the recognition, as well as provide design guidelines for communicating progress via vibrations.
Motion Guidance Sleeve: Guiding the Forearm Rotation through External Artificial Muscles
Online fitness videos have made exercising at home both possible and popular. However, it is not easy to notice the details of motions merely by watching training videos. We propose a new type of motion guidance system that simulates the way the human body moves as driven by muscle contractions. We have designed external artificial muscles on a sleeve to create a pulling sensation that can guide the forearm’s pronation (internal rotation) and supination (external rotation). The sleeve uses stepper motors to provide pulling force, with fishing lines and elastic bands that imitate muscle contraction, driving the forearm to rotate instinctively. We present two preliminary experiments. The first shows that this system can effectively guide the forearm to rotate in the correct direction. The second shows that users can be guided to a targeted angle by utilizing a tactile cue. We also report users’ feedback from the experiments and provide design recommendations and directions for future research.
GauntLev: A Wearable to Manipulate Free-floating Objects
A tool able to generate remote forces would allow us to handle dangerous or fragile materials without contact or occlusions. Acoustic levitation is a suitable technology since it can trap particles in air or water. However, no approach has tried to endow humans with an intertwined way of controlling it. Previously, the acoustic elements were static, had to surround the particles and only translation was possible. Here, we present the basic manoeuvres that can be performed when levitators are attached to our moving hands. A Gauntlet of Levitation and a Sonic Screwdriver are presented with their manoeuvres for capturing, moving, transferring and combining particles. Manoeuvres can be performed manually or assisted by a computer for repeating patterns, stabilization and enhanced accuracy or speed. The presented prototypes still have limited forces but symbolize a milestone in our expectations of future technology.
SESSION: Designing for Attention and Multitasking
Spatio-Temporal Modeling and Prediction of Visual Attention in Graphical User Interfaces
We present a computational model to predict users’ spatio-temporal visual attention on WIMP-style (windows, icons, menus, pointer) graphical user interfaces. Like existing models of bottom-up visual attention in computer vision, our model does not require any eye tracking equipment. Instead, it predicts attention solely using information available to the interface, specifically users’ mouse and keyboard input as well as the UI components they interact with. To study our model in a principled way, we further introduce a method to synthesize user interface layouts that are functionally equivalent to real-world interfaces, such as from Gmail, Facebook, or GitHub. We first quantitatively analyze attention allocation and its correlation with user input and UI components using ground-truth gaze, mouse, and keyboard data of 18 participants performing a text editing task. We then show that our model predicts attention maps more accurately than state-of-the-art methods. Our results underline the significant potential of spatio-temporal attention modeling for user interface evaluation, optimization, or even simulation.
Now Check Your Input: Brief Task Lockouts Encourage Checking, Longer Lockouts Encourage Task Switching
Data-entry is a common activity that is usually performed accurately. When errors do occur though, people are poor at spotting them even if they are told to check their input. We considered whether making people pause for a brief moment before confirming their input would make them more likely to check it. We ran a lab experiment to test this idea. We found that task lockouts encouraged checking. Longer lockout durations made checking more likely. We ran a second experiment on a crowdsourcing platform to find out whether lockouts would still be effective in a less controlled setting. We discovered that longer lockouts induced workers to switch to other activities. This made the lockouts less effective. To be useful in practice, the duration of lockouts needs to be carefully calibrated. If lockouts are too brief they will not encourage checking. If they are too long they will induce switching.
Getting Users’ Attention in Web Apps in Likable, Minimally Annoying Ways
Web applications often need to present the user new information in the context of their current activity. Designers rely on a range of UI elements and visual techniques to present the new content to users, such as pop-ups, message icons, and marquees. Web designers need to select which technique to use depending on the centrality of the information and how quickly they need a reaction. However, designers often rely on intuition and anecdotes rather than empirical evidence to drive their decision-making as to which presentation technique to use. This work represents an attempt to quantify these presentation style decisions. We present a large (n=1505) user study that compares 15 visual attention-grabbing techniques with respect to reaction time, noticeability, annoyance, likability, and recall. We recommend glowing shadows and message icons with badges, and suggest possibilities for future work.
Window Shopping: A Study of Desktop Window Switching
Desktop users frequently open and switch between multiple windows. Here we present an experiment comparing three window switching interfaces: the Cards interface spreads windows out like a vertical stack of cards with the most recent window at the front; the Mosaic interface places each window in a grid ordered by recency; and the Exposé interface provides a map-like overview based on the relative size and position of windows. Experimental results suggest that the Mosaic interface scales, enabling faster window selection than the Cards interface and less erroneous window selection than the Exposé interface.
SESSION: Politics on Social Media
Constructing the Visual Online Political Self: An Analysis of Instagram Use by the Scottish Electorate
This paper presents an investigation of how the Scottish electorate utilised photo-sharing on social media as a means of participation in the democratic process and for political self-expression in the periods immediately prior to two recent major democratic votes: the 2014 Scottish independence referendum, and the 2015 UK general election. We extend previous HCI literature on the growing use of social media in a political context and contribute specifically on understanding the emergent use of visual media by citizens when engaging with political issues and democratic process. Through a qualitative analysis of images shared on the platform Instagram, we demonstrate that the Scottish electorate did indeed use image-sharing for political self-expression — posting a variety of visual content, representative of a diversity of political opinion. We conclude that users utilised Instagram as a platform to craft and present their “political selves”. We raise questions for future research around power and inequality on such platforms as well as their capability of providing a persistent forum for debate.
#Snowden: Understanding Biases Introduced by Behavioral Differences of Opinion Groups on Social Media
We present a study of 10-month Twitter discussions on the controversial topic of Edward Snowden. We demonstrate how behavioral differences of opinion groups can distort the presence of opinions on a social media platform. By studying the differences between a numerical minority (anti-Snowden) and a majority (pro-Snowden) group, we found that the minority group engaged in a “shared audiencing” practice with more persistent production of original tweets, focusing increasingly on inter-personal interactions with like-minded others. The majority group engaged in a “gatewatching” practice by disseminating information from the group, and over time shifted further from making original comments to retweeting others’. The findings show consistency with previous social science research on how social environment shapes majority and minority group behaviors. We also highlight that they can be further distorted by the collective use of social media design features such as the “retweet” button, by introducing the concept of “amplification” to measure how a design feature biases the voice of an opinion group. Our work presents a warning not to oversimplify analysis of social media data for inferring social opinions.
ICT Use by Prominent Activists in Republika Srpska
Bosnia-Herzegovina and its administrative unit or “entity”, Republika Srpska are divided, transitional post-war societies. The aim of this paper is to present a preliminary analysis of regional activists’ use of information and communication technology (ICT) and to identify improvement potential. Empirical investigations of social media use and qualitative interviews with the country’s activists indicate strong interest in ICT. Benefits for the use of ICT by activists include more efficient access to their target group, easier information sharing with the general population, and quicker reaction to spontaneous “offline” activities. Simultaneously, the data point to problems such as limited budgets and know-how, intensive outsourcing practices, and a significant lack of awareness regarding data security. Activists see improvement potential in areas of training on content optimization, campaign management, ICT use and maintenance, security, and privacy. Additionally, there is potential to improve upon the sustainability of activists’ work and patterns related to their ICT outsourcing.
Gender and Ideology in the Spread of Anti-Abortion Policy
In the past few years an unprecedented wave of anti-abortion policies was introduced and enacted in state governments in the U.S., affecting millions of constituents. We study this rapid spread of policy change as a function of the underlying ideology of constituents. We examine over 200,000 public messages posted on Twitter surrounding abortion in the year 2013, a year that saw 82 new anti-abortion policies enacted. From these posts, we characterize people’s expressions of opinion on abortion and show how these expressions align with policy change on these issues. We detail a number of ideological differences between constituents in states enacting anti versus pro-abortion policies, such as a tension between the moral values of purity versus fairness, and a differing emphasis on the fetus versus the pregnant woman. We also find significant differences in how males versus females discuss the issue of abortion, including greater emphasis on health and religion by males. Using these measures to characterize states, we can construct models to explain the spread of abortion policy from state to state and project which types of abortion policies a state will introduce. Models defining state similarity using our Twitter-based measures improved policy projection accuracy by 7.32% and 12.02% on average over geographic and poll-based ideological similarity, respectively. Additionally, models constructed from the expressions of male-only constituents perform better than models from the expressions of female-only constituents, suggesting that the ideology of men is more aligned with the recent spread of anti-abortion legislation than that of women.
SESSION: Gesture Elicitation and Interaction
Between-Subjects Elicitation Studies: Formalization and Tool Support
Elicitation studies, where users supply proposals meant to effect system commands, have become a popular method for system designers. But the method to date has assumed a within-subjects procedure and statistics. Despite the benefits of examining the relative agreement of independent groups (e.g., men versus women, children versus adults, novices versus experts, etc.), the lack of appropriate tools for between-subjects agreement rate analysis has so far prevented such comparative investigations. In this work, we expand the elicitation method to between-subjects designs. We introduce a new measure for evaluating coagreement between groups and a new statistical test for agreement rate analysis that reports the exact p-value to evaluate the significance of the difference between agreement rates calculated for independent groups. We show the usefulness of our tools by re-examining previously published gesture elicitation data, for which we discuss significant differences in agreement for technical and non-technical participants, men and women, and different acquisition technologies. Our new tools will enable practitioners to properly analyze user-elicited data resulting from complex experimental designs with multiple independent groups and, consequently, will help them understand agreement data and verify hypotheses about agreement at more sophisticated levels of analysis.
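As background for readers, the within-subjects agreement rate that this work extends is commonly computed, per referent, as the proportion of participant pairs who proposed the same sign (the Vatavu–Wobbrock formulation). A minimal sketch — the function name and the sample proposals are illustrative, not taken from the paper:

```python
from collections import Counter

def agreement_rate(proposals):
    """Agreement rate AR(r) for one referent: the fraction of
    participant pairs whose proposals are identical."""
    n = len(proposals)
    if n < 2:
        return 1.0  # a single proposal trivially agrees with itself
    counts = Counter(proposals)
    # number of agreeing ordered pairs over all ordered pairs
    return sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

# e.g. four participants proposed "swipe", two proposed "pinch":
ar = agreement_rate(["swipe"] * 4 + ["pinch"] * 2)
# (4*3 + 2*1) / (6*5) = 14/30 ≈ 0.467
```

The paper's contribution is precisely what this sketch lacks: a coagreement measure and an exact significance test for comparing such rates between independent groups.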
User Elicitation on Single-hand Microgestures
Gestural interaction has become increasingly popular, as enabling technologies continue to transition from research to retail. The mobility of miniaturized (and invisible) technologies introduces new uses for gesture recognition. This paper investigates single-hand microgestures (SHMGs), detailed gestures in a small interaction space. SHMGs are suitable for the mobile and discreet nature of interactions for ubiquitous computing. However, there has been a lack of end-user input in the design of such gestures. We performed a user-elicitation study with 16 participants to determine their preferred gestures for a set of referents. We contribute an analysis of 1,632 gestures, the resulting gesture set, and prevalent conceptual themes amongst the elicited gestures. These themes provide a set of guidelines for gesture designers, while informing the designs of future studies. With the increase in hand-tracking and electronic devices in our surroundings, we see this as a starting point for designing gestures suitable to portable ubiquitous computing.
PathSync: Multi-User Gestural Interaction with Touchless Rhythmic Path Mimicry
In this paper, we present PathSync, a novel, distal and multi-user mid-air gestural technique based on the principle of rhythmic path mimicry; by replicating the movement of a screen-represented pattern with their hand, users can intuitively interact with digital objects quickly, and with a high level of accuracy. We present three studies that each contribute (1) improvements to how correlation is calculated in path-mimicry techniques necessary for touchless interaction, (2) a validation of its efficiency in comparison to existing techniques, and (3) a demonstration of its intuitiveness and multi-user capacity ‘in the wild’. Our studies consequently demonstrate PathSync’s potential as an immediately legitimate alternative to existing techniques, with key advantages for public display and multi-user applications.
Machine Learning of Personal Gesture Variation in Music Conducting
This note presents a system that learns expressive and idiosyncratic gesture variations for gesture-based interaction. The system is used as an interaction technique in a music conducting scenario where gesture variations drive music articulation. A simple model based on Gaussian Mixture Modeling is used to allow the user to configure the system by providing variation examples. The system performance and the influence of user musical expertise are evaluated in a user study, which shows that the model is able to learn idiosyncratic variations that allow users to control articulation, with better performance for users with musical expertise.
Fingers of a Hand Oscillate Together: Phase Synchronisation of Tremor in Hover Touch Sensing
When using non-contact finger tracking, fingers can be classified as to which hand they belong to by analysing the phase relation of physiological tremor. In this paper, we show how 3D capacitive sensors can pick up muscle tremor in fingers above a device. We develop a signal processing pipeline based on nonlinear phase synchronisation that can reliably group fingers to hands and experimentally validate our technique. This allows significant new gestural capabilities for 3D finger sensing without additional hardware.
SESSION: Supporting Player Performance
The Mimesis Effect: The Effect of Roles on Player Choice in Interactive Narrative Role-Playing Games
We present a study that investigates the heretofore unexplored relationship between a player’s sense of her narrative role in an interactive narrative role-playing game and the options she selects when faced with choice structures during gameplay. By manipulating a player’s knowledge over her role, and examining in-game options she preferred in choice structures, we discovered what we term the Mimesis Effect: when players were explicitly given a role, we found a significant relationship between their role and their in-game actions; participants role-play even if not instructed to, exhibiting a preference for actions consistent with their role. Further, when players were not explicitly given a role, participants still role-played — they were consistent with an implicit role — but did not agree on which role to implicitly be consistent with. We discuss our findings and broader implications of our work to both game development and games research.
Scaffolding Player Location Awareness through Audio Cues in First-Person Shooters
Digital games require players to learn various skills, which is often accomplished through play itself. In multiplayer games, novices can feel overwhelmed if competing against better players, and can fail to improve, which may lead to unsatisfying play and missed social play opportunities. To help novices learn the requisite skills, we first determined how experts accomplish an important task in multiplayer FPS games — locating their opponent. After determining that an understanding of audio cues and how to leverage them was critical, we designed and evaluated two systems for introducing this skill of locating opponents through audio cues — a training system, and a modified game interface. We found that both systems improved accuracy and confidence, but that the training system led to more audio cues being recognized. Our work may help people of disparate skill play together, by scaffolding novices to learn and use a strategy commonly employed by experts.
How Disclosing Skill Assistance Affects Play Experience in a Multiplayer First-Person Shooter Game
In social play settings, it can be difficult for people with different skill levels to play a game together. Player balancing that provides skill assistance for the weaker player can allow for enjoyable play experiences; however, previous research (and conventional wisdom) has suggested that skill assistance should be kept hidden to avoid perceptions of unfairness. We carried out a study to test how disclosing skill assistance affects player experience. We found — surprisingly — that disclosing assistance did not harm play experience; players were more influenced by the benefits of equalized performance resulting from assistance than by their knowledge of the assist. We introduce the idea of attribution biases to help explain why awareness was not harmful — people tend to take credit for their successes, but attribute failures externally. We discuss how game designers can incorporate skill assistance to build multiplayer games that improve experiences for a wide range of players.
Using an International Gaming Tournament to Study Individual Differences in MOBA Expertise and Cognitive Skills
In this study we evaluated a novel approach for examining the link between gaming expertise and cognitive skills, and the value of recruiting and running participants at a MOBA gaming tournament. Participants completed a set of cognitive tasks that measured spatial working and long term (location) memory, basic cognitive processing, and gaming experience. Comparable reliability on the working memory task and results in line with previous research on the location memory task indicated the data collected was valid and reliable. We observed a significant relation between gaming experience and response time on the location memory task. We argue that conducting gaming research at a tournament is a valid way of collecting data for a gaming expertise study while providing a range of gaming expertise that may not be available when recruiting at college campuses. Furthermore, our results extend previous gaming research that suggests that individual differences in gaming experience are correlated with the speed of recalling spatial information from long term memory.
SESSION: End-User Programming
Crossed Wires: Investigating the Problems of End-User Developers in a Physical Computing Task
Considerable research has focused on the problems that end users face when programming software, in order to help them overcome their difficulties, but there is little research into the problems that arise in physical computing when end users construct circuits and program them. In an empirical study, we observed end-user developers as they connected a temperature sensor to an Arduino microcontroller and visualized its readings using LEDs. We investigated how many problems participants encountered, the problem locations, and whether they were overcome. We show that most fatal faults were due to incorrect circuit construction, and that often problems were wrongly diagnosed as program bugs. Whereas there are development environments that help end users create and debug software, there is currently little analogous support for physical computing tasks. Our work is a first step towards building appropriate tools that support end-user developers in overcoming obstacles when constructing physical computing artifacts.
LondonTube: Overcoming Hidden Dependencies in Cloud-Mobile-Web Programming
Many disciplines, including health science, increasingly demand custom applications that synthesize cloud, mobile and web functionality. But creating even simple apps is difficult. Why? In this paper, guided by Cognitive Dimensions, we explore the design space for relevant programming notations and supporting tools, and we pinpoint what we hypothesize to be specific obstacles in the creation of cloud-mobile-web apps. Among these is the prevalence of hidden dependencies within code of apps. Based on this analysis, we propose a new notation called LondonTube aimed at making these hidden dependencies visible, thereby helping health scientists to create apps for themselves. A study showed that LondonTube reduced the time to create a cloud-mobile-web app by a factor of over 20, and it reduced questions about hidden dependencies.
Foraging Among an Overabundance of Similar Variants
Foraging among too many variants of the same artifact can be problematic when many of these variants are similar. This situation, which is largely overlooked in the literature, is commonplace in several types of creative tasks, one of which is exploratory programming. In this paper, we investigate how novice programmers forage through similar variants. Based on our results, we propose a refinement to Information Foraging Theory (IFT) to include constructs about variation foraging behavior, and propose refinements to computational models of IFT to better account for foraging among variants.
Chronicler: Interactive Exploration of Source Code History
Exploring source code history is an important task for software maintenance. Traditionally, source code history is navigated on the granularity of individual files. This is not fine-grained enough to support users in exploring the evolution of individual code elements. We suggest considering the history of individual elements within the tree structure inherent to source code. A history graph created from these trees then enables new ways to explore events of interest defined by structural changes in the source code. We present Tree Flow, a visualization of these structural changes designed to enable users to choose the appropriate level of detail for the task at hand. In a user study, we show that both Chronicler and the history-aware timeline, two prototype systems combining history graph navigation with a traditional source code view, outperform the more traditional history navigation on a file basis, and users strongly prefer Chronicler for the exploration of source code history.
SESSION: Health Support
AugKey: Increasing Foveal Throughput in Eye Typing with Augmented Keys
Eye-typing is an important tool for people with physical disabilities and, for some, it is their main form of communication. By observing expert typists using physical keyboards, we notice that visual throughput is considerably reduced in current eye-typing solutions. We propose AugKey to improve throughput by augmenting keys with a prefix, to allow continuous text inspection, and suffixes to speed up typing with word prediction. AugKey limits the visual information to the foveal region to minimize eye movements (i.e., reduce eye work). We have applied AugKey to a dwell-time keyboard and compared its performance against two conditions without augmented feedback: a keyboard with word prediction and one without. Results show that AugKey can be about 28% faster than no word prediction and 20% faster than traditional word prediction, with a smaller workload index.
“Counting on the Group”: Reconciling Online and Offline Social Support among Older Informal Caregivers
Awareness of the huge amount of work faced by relatives in caring for a person suffering from a loss of autonomy has led to research focusing on ways to ease the burden on informal caregivers. Among them, services and devices aimed at providing social support and fighting the isolation that may be caused by the caregiving tasks appear important. However, little is known about the social support informal caregivers actually value and look for in practice. To fill this gap, we conducted a multi-sited study, focusing on older informal caregivers, because they are numerous and have less experience with technology. Our study highlights that being part of a group is a key element in helping informal caregivers to feel that they are not alone, continue leisure activities, learn from others and sustain participation in organized activities. Through this understanding, we discuss design opportunities in a sociotechnical approach complementing online and offline social support.
A Sociotechnical Mechanism for Online Support Provision
Social support can significantly improve health outcomes for individuals living with disease, and online forums have emerged as an important vehicle for social support. Whereas research has focused on the delivery and use of social support, little is known about how these communities are sustained. We describe one sociotechnical mechanism that enables sustainable communities to provide social support to a large number of people. We focus upon thirteen disease-specific discussion forums hosted by the WebMD online health community. In these forums, small, densely connected cores of members who maintain strong relationships generate the majority of support for others. Through content analysis we find they provide informational support to a large number of more itinerant members, but provide one another with community support. Based on these observations, we describe a sociotechnical mechanism of online support that is distinct from non-support oriented communities, and has implications for the design of self-sustaining online support systems.
HaptiColor: Interpolating Color Information as Haptic Feedback to Assist the Colorblind
Most existing colorblind aids help their users to distinguish and recognize colors but not compare them. We present HaptiColor, an assistive wristband that encodes discrete color information into spatiotemporal vibrations to support colorblind users to recognize and compare colors. We ran three experiments: the first found the optimal number and placement of motors around the wrist-worn prototype, and the second tested the optimal way to represent discrete points between the vibration motors. Results suggested that using three vibration motors and pulses of varying duration to encode proximity information in spatiotemporal patterns is the optimal solution. Finally, we evaluated the HaptiColor prototype and encodings with six colorblind participants. Our results show that the participants were able to easily understand the encodings and perform color comparison tasks accurately (94.4% to 100%).
SESSION: Participating in Well-Being and Family
Shared Language and the Design of Home Healthcare Technology
Words and language are central to most human communication. This paper explores the importance of language for the participatory design of smart home technologies for healthcare. We argue that to effectively involve a broad range of users in the design of new technologies, it is important to actively develop a shared language that is accessible to and owned by all stakeholders, and that facilitates productive dialogues among them. Our discussion is grounded firstly in work with end users, in which problematic language emerged as a key barrier to participation and effective design. Three specific categories of language barriers are identified: jargon, ambiguity, and emotive words. Building on this we undertook a workshop and focus group, respectively involving researchers developing smart health technologies and users, where the focus was on generating a shared language. We discuss this process, including examples that emerged of alternative terminology and specific strategies for creating a shared language.
Children’s Perspectives on Ethical Issues Surrounding Their Past Involvement on a Participatory Design Team
Participatory Design (PD) gives users a voice in the design of technologies they are meant to use. When PD methods are adapted for research with children, design teams need to address additional issues of ethical accountability (e.g., adult-child power relations). While researchers have taken measures to ensure ethical accountability in PD research with children, to our knowledge there has been no work examining how former child design partners view ethical issues surrounding their participation. In this work we ask: How do children view ethical issues around their role on Participatory Design teams? We present findings from surveys and interviews with 12 former child design partners. Findings, identified by the former participants themselves, outline: (i) balancing attribution and anonymity, (ii) promoting ongoing consent and dissent, and (iii) cultivating a balanced design partnership. From these findings we recommend practices for researchers and designers of children’s technologies that align with participant views.
The Evolution of Engagements and Social Bonds During Child-Parent Co-design
Partnering with parents and children in the design process can be important for producing technologies that take into consideration the rich context of family life. However, to date, few studies have examined the actual process of designing with families and their children. Without understanding the process, we risk making poor design choices in interactive experiences that should take into account important family dynamics. The purpose of this investigation is to understand how parent-child relationships in families shape co-design processes and how they are reshaped through co-design. We document the evolutionary process and outcomes that exist in co-design partnerships between researchers and families. We found that parents’ engagement patterns shifted more slowly than children’s from observing and facilitating to design partnering practices. Our analysis suggests the importance of establishing and nurturing social bonds among parents, children, and researchers in the co-design process.
Youth Advocacy in SNAs: Challenges for Addressing Health Disparities
Social networking applications (SNAs) have been touted as promising platforms for activism: they provide a platform by which voices can be heard and collective action mobilized. Yet, little work has studied the suitability of existing SNAs for enabling youth advocacy efforts. We conducted an intensive 5-week qualitative study with 10th graders to understand how existing SNAs support and inhibit youth advocacy. We contribute to the field of Human-Computer Interaction (HCI) by explicating several themes regarding the barriers youth face when using SNAs for advocacy, features in existing SNAs that are not suitable for youth advocacy, and the peer pressure youth perceive when advocating for serious issues in these environments. We conclude with recommendations for how existing SNA features could be reformed to better support youth advocacy.
ThoughtCloud: Exploring the Role of Feedback Technologies in Care Organisations
ThoughtCloud is a lightweight, situated, digital feedback system designed to allow voluntary and community sector care organisations to gather feedback and opinions from those who use their services. In this paper we describe the design and development of ThoughtCloud and its evaluation through a series of deployments with two organisations. Using the system, organisations were able to pose questions about the activities that they provide and gather data in the form of ratings, video or audio messages. We conducted observations of ThoughtCloud in use, analysed feedback received, and conducted interviews with those who ‘commissioned’ feedback around the value of comments received about their organisation. Our findings highlight how simple, easily deployable digital systems can support new feedback processes within care organisations and provide opportunities for understanding the personal journeys and experiences of vulnerable individuals who use these care services.
SESSION: Input Technology
Make It Big!: The Effect of Font Size and Line Spacing on Online Readability
We report from an eye-tracking experiment with 104 participants who performed reading tasks on the most popular text-heavy website of the Web: Wikipedia. Using a hybrid-measures design, we compared objective and subjective readability and comprehension of the articles for font sizes ranging from 10 to 26 points, and line spacings ranging from 0.8 to 1.8 (font: Arial). Our findings provide evidence that readability, measured via mean fixation duration, increased significantly with font size. Further, comprehension questions had significantly more correct responses for font sizes 18 and 26. For line spacing, we found marginal effects, suggesting that the two tested extremes (0.8 and 1.8) impair readability. These findings provide evidence that text-heavy websites should use fonts of size 18 or larger and use default line spacing when the goal is to make a web page easy to read and comprehend. Our results significantly differ from previous recommendations, presumably, because this is the first work to cover font sizes beyond 14 points.
Fitts’ Law and the Effects of Input Mapping and Stiffness on Flexible Display Interactions
In this paper, we report on an investigation of Fitts’ law using flexible displays. Participants performed a one-dimensional targeting task as described by the ISO 9241-9 standard. In the experiment, we compared two methods of bend input: position control and rate control of a cursor. Participants performed the task with three levels of device stiffness. Results show that bend input is highly correlated with Fitts’ law for both position and rate control. Position control produced significantly higher throughput values than rate control. Our experiment also revealed that, when the amount of force applied was controlled, device stiffness did not have a significant effect on performance.
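For readers unfamiliar with the metric, throughput in ISO 9241-9-style studies is commonly derived from the Shannon formulation of Fitts’ law. A minimal sketch — function names and sample values are illustrative, and refinements such as effective target width are omitted:

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time):
    """Throughput in bits/s: index of difficulty over movement time (s)."""
    return index_of_difficulty(distance, width) / movement_time

# e.g. a 240 mm movement to a 30 mm target completed in 1.2 s:
id_bits = index_of_difficulty(240, 30)  # log2(9) ≈ 3.17 bits
tp = throughput(240, 30, 1.2)           # ≈ 2.64 bits/s
```

Higher throughput for position control over rate control, as reported above, means more bits of targeting difficulty conveyed per second of movement.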
SESSION: Comprehension through Visualization
Towards Understanding Human Similarity Perception in the Analysis of Large Sets of Scatter Plots
We present a study aimed at understanding how human observers judge scatter plot similarity when presented with a large set of iconic scatter plot representations. The work we present involves 18 participants with a scientific background in a similarity perception study. The study asks participants to group a carefully selected set of plots according to their subjective perceptual judgement of similarity, and it integrates the results into a consensus similarity grouping. We then use this consensus grouping to generate insights on similarity perception. The main output of this work is a list of concepts we derive to describe major perceptual features, and a description of how these concepts relate and rank. We also evaluate scagnostics (scatter plot diagnostics), a popular and established set of scatter plot descriptors, and show that they do not reliably reproduce our participants’ judgements. Finally, we discuss the major implications of this study and how these results can be used for future research.
Telling Stories about Dynamic Networks with Graph Comics
In this paper, we explore graph comics as a medium to communicate changes in dynamic networks. While previous research has focused on visualizing dynamic networks for data exploration, we want to see if we can take advantage of the visual expressiveness and familiarity of comics to present and explain temporal changes in networks to an audience. To understand the potential of comics as a storytelling medium, we first created a variety of comics during a 3 month structured design process, involving domain experts from public education and neuroscience. This process led to the definition of 8 design factors for creating graph comics, for each of which we propose design solutions. Results from a qualitative study suggest that a general audience is quickly able to understand complex temporal changes through graph comics, when provided with minimal textual annotations and no training.
SESSION: Haptic Sensation Meets Screens
Direct Manipulation in Tactile Displays
Tactile displays have predominantly been used for information transfer using patterns or as assistive feedback for interactions. With recent advances in hardware for conveying increasingly rich tactile information that mirrors visual information, and the increasing viability of wearables that remain in constant contact with the skin, there is a compelling argument for exploring tactile interactions as rich as visual displays. Direct Manipulation underlies much of the advances in visual interactions. In this work, we introduce the concept of a Direct Manipulation-enabled Tactile display (DMT). We define the concepts of a tactile screen, tactile pixel, tactile pointer, and tactile target which enable tactile pointing, selection and drag & drop. We build a proof of concept tactile display and study its precision limits. We further develop a performance model for DMTs based on a tactile target acquisition study. Finally, we study user performance in a real-world DMT menu application. The results show that users are able to use the application with relative ease and speed.
HapThimble: A Wearable Haptic Device towards Usable Virtual Touch Screen
A virtual touch screen concept using an optical see-through head-mounted display has been suggested. With a virtual touch screen, the user’s direct-touch interactions are allowed in much the same way as a conventional touch screen, but the absence of haptic feedback and physical constraint leads to poor user performance. To overcome this issue, we developed a wearable haptic device, called HapThimble. It provides various types of haptic feedback (tactile, pseudo-force, and vibrotactile) to the user’s fingertip and mimics physical buttons based on force-penetration depth curves. We conducted three experiments with HapThimble. The first experiment confirmed that HapThimble could increase a user’s performance when conducting clicking and dragging tasks. The second experiment revealed that users could differentiate between six types of haptic feedback, rendered based on different force-penetration depth curves obtained using HapThimble. Last, we conducted a test to investigate the similarity between the physical buttons and the mimicked haptic buttons and obtained a 90.3% success rate.
Haptic Edge Display for Mobile Tactile Interaction
Current mobile devices do not leverage the rich haptic channel of information that our hands can sense, and instead focus primarily on touch based graphical interfaces. Our goal is to enrich the user experience of these devices through bi-directional haptic and tactile interactions (display and control) around the edge of hand-held devices. We propose a novel type of haptic interface, a Haptic Edge Display, consisting of actuated pins on the side of a display, to form a linear array of tactile pixels (taxels). These taxels are implemented using small piezoelectric actuators, which can be made cheaply and have ideal characteristics for mobile devices. We developed two prototype Haptic Edge Displays, one with 24 actuated pins (3.75mm in pitch) and a second with 40 pins (2.5mm in pitch). This paper describes several novel haptic interactions for the Haptic Edge Display including dynamic physical affordances, shape display, non-dominant hand interactions, and also in-pocket “pull” style haptic notifications. In a laboratory experiment we investigated the limits of human perception for Haptic Edge Displays, measuring the just-noticeable difference for pin width and height changes for both in-hand and simulated in-pocket conditions.
Tactile Presentation to the Back of a Smartphone with Simultaneous Screen Operation
The most common method of presenting tactile stimuli to touch screens has been to directly attach a tactile display to the screens. This requires a transparent tactile display so that the view is not obstructed. In contrast, transparency is not required if the tactile stimuli are presented on the back of the device. However, stimulating the entire palm is not appropriate because touch screens are typically used by only one finger. To overcome these limitations, we propose a new method in which tactile feedback is delivered to a single finger on the back of a touch screen. We used an electro-tactile display because it is small and dense. The tactile display presents touch stimuli as mirror images of the shapes on the touch screen. By comparing cases in which the device was operated by one or two hands, we found that shape discrimination is possible using this method.
SESSION: Smartphone Authentication
Free-Form Gesture Authentication in the Wild
Free-form gesture passwords have been introduced as an alternative mobile authentication method. Text passwords are not very suitable for mobile interaction, and methods such as PINs and grid patterns sacrifice security for usability. However, little is known about how free-form gestures perform in the wild. We present the first field study (N=91) of mobile authentication using free-form gestures, with text passwords as a baseline. Our study leveraged Experience Sampling Methodology to increase ecological validity while maintaining control of the experiment. We found that, with gesture passwords, participants generated new passwords and authenticated faster with comparable memorability while being more willing to retry. Our analysis of the gesture password dataset indicated biases in user-chosen distribution tending towards common shapes. Our findings provide useful insights towards understanding mobile device authentication and gesture-based authentication.
SnapApp: Reducing Authentication Overhead with a Time-Constrained Fast Unlock Option
We present SnapApp, a novel unlock concept for mobile devices that reduces authentication overhead with a time-constrained quick-access option. SnapApp provides two unlock methods at once: While PIN entry enables full access to the device, users can also bypass authentication with a short sliding gesture (“Snap”). This grants access for a limited amount of time (e.g. 30 seconds). The device then automatically locks itself upon expiration. Our concept further explores limiting the possible number of Snaps in a row, and configuring blacklists for app use during short access (e.g. to exclude banking apps). We discuss opportunities and challenges of this concept based on a 30-day field study with 18 participants, including data logging and experience sampling methods. Snaps significantly reduced unlock times, and our app was perceived to offer a good tradeoff. Conceptual challenges include, for example, supporting users in configuring their blacklists.
Do Users’ Perceptions of Password Security Match Reality?
Although many users create predictable passwords, the extent to which users realize these passwords are predictable is not well understood. We investigate the relationship between users’ perceptions of the strength of specific passwords and their actual strength. In this 165-participant online study, we ask participants to rate the comparative security of carefully juxtaposed pairs of passwords, as well as the security and memorability of both existing passwords and common password-creation strategies. Participants had serious misconceptions about the impact of basing passwords on common phrases and including digits and keyboard patterns in passwords. However, in most other cases, participants’ perceptions of what characteristics make a password secure were consistent with the performance of current password-cracking tools. We find large variance in participants’ understanding of how passwords may be attacked, potentially explaining why users nonetheless make predictable passwords. We conclude with design directions for helping users make better passwords.
On-Demand Biometrics: Fast Cross-Device Authentication
We explore the use of a new way to log into a web service, such as email or social media. Using on-demand biometrics, users sign in from a browser on a computer using just their name, which sends a request to their phone for approval. Users approve this request by authenticating on their phone using their fingerprint, which completes the login in the browser. On-demand biometrics thus replace passwords or temporary access codes found in two-step verification with the ease of use of biometrics. We present the results of an interview study on the use of on-demand biometrics with a live login backend. Participants perceived our system as convenient and fast to use and also expressed their trust in fingerprint authentication to keep their accounts safe. We motivate the design of on-demand biometrics, present an analysis of participants’ use and responses around general account security and authentication, and conclude with implications for designing fast and easy cross-device authentication.
SESSION: Shape Changing Displays
TableHop: An Actuated Fabric Display Using Transparent Electrodes
We present TableHop, a tabletop display that provides controlled self-actuated deformation and vibro-tactile feedback to an elastic fabric surface while retaining the ability for high-resolution visual projection. The surface is made of a highly stretchable pure spandex fabric that is electrostatically actuated using electrodes mounted on its top or underside. It uses transparent indium tin oxide electrodes and high-voltage modulation to create controlled surface deformations. Our setup actuates pixels and creates deformations in the fabric up to +/- 5 mm. Since the electrodes are transparent, the fabric surface functions as a diffuser for rear-projected visual images, and avoids occlusion by users or actuators. Users can touch and interact with the fabric to experience expressive interactions as with any fabric based shape-changing interface. By using frequency modulation in the high-voltage circuit, it can also create localized tactile sensations on the user’s fingertip when touching the surface. We provide simulation and experimental results for the shape of the deformation and frequency of the vibration of the surface. These results can be used to build prototypes of different sizes and form-factors. We present a working prototype of TableHop that has a 30×40 cm² surface area and uses a grid of 3×3 transparent electrodes. It uses a maximum of 9.46 mW and can create tactile vibrations of up to 20 Hz. TableHop can be scaled to make large interactive surfaces and integrated with other objects and devices. TableHop will improve user interaction experience on 2.5D deformable displays.
An Evaluation of Shape Changes for Conveying Emotions
In this paper, we explore how shape changing interfaces might be used to communicate emotions. We present two studies, one that investigates which shapes users might create with a 2D flexible surface, and one that studies the efficacy of the resulting shapes in conveying a set of basic emotions. Results suggest that shape parameters are correlated to the positive or negative character of an emotion, while parameters related to movement are correlated with arousal level. In several cases, symbolic shape expressions based on clear visual metaphors were used. Results from our second experiment suggest participants were able to recognize emotions from a given shape with good accuracy, to within 28% of the dimensions of the Circumplex Model. We conclude that shape and shape changes of a 2D flexible surface indeed appear able to convey emotions in a way that is worthy of future exploration.
Emergeables: Deformable Displays for Continuous Eyes-Free Mobile Interaction
In this paper we present the concept of Emergeables — mobile surfaces that can deform or ‘morph’ to provide fully-actuated, tangible controls. Our goal in this work is to provide the flexibility of graphical touchscreens, coupled with the affordance and tactile benefits offered by physical widgets. In contrast to previous research in the area of deformable displays, our work focuses on continuous controls (e.g., dials or sliders), and strives for fully-dynamic positioning, providing versatile widgets that can change shape and location depending on the user’s needs. We describe the design and implementation of two prototype emergeables built to demonstrate the concept, and present an in-depth evaluation that compares both with a touchscreen alternative. The results show the strong potential of emergeables for on-demand, eyes-free control of continuous parameters, particularly when comparing the accuracy and usability of a high-resolution emergeable to a standard GUI approach. We conclude with a discussion of the level of resolution that is necessary for future emergeables, and suggest how high-resolution versions might be achieved.
DefSense: Computational Design of Customized Deformable Input Devices
We present a novel optimization-based algorithm for the design and fabrication of customized, deformable input devices, capable of continuously sensing their deformation. We propose to embed piezoresistive sensing elements into flexible 3D printed objects. These sensing elements are then utilized to recover rich and natural user interactions at runtime. Designing such objects is a challenging and hard problem if attempted manually for all but the simplest geometries and deformations. Our method simultaneously optimizes the internal routing of the sensing elements and computes a mapping from low-level sensor readings to user-specified outputs in order to minimize reconstruction error. We demonstrate the power and flexibility of the approach by designing and fabricating a set of flexible input devices. Our results indicate that the optimization-based design greatly outperforms manual routings in terms of reconstruction accuracy and thus interaction fidelity.
SESSION: Fat Fingers, Small Watches
WatchWriter: Tap and Gesture Typing on a Smartwatch Miniature Keyboard with Statistical Decoding
We present WatchWriter, a finger operated keyboard that supports both touch and gesture typing with statistical decoding on a smartwatch. Just like on modern smartphones, users type one letter per tap or one word per gesture stroke on WatchWriter but in a much smaller spatial scale. WatchWriter demonstrates that human motor control adaptability, coupled with modern statistical decoding and error correction technologies developed for smartphones, can enable a surprisingly effective typing performance despite the small watch size. In a user performance experiment entirely run on a smartwatch, 36 participants reached a speed of 22-24 WPM with near zero error rate.
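The statistical decoding the abstract refers to is commonly framed as a noisy-channel problem: score each dictionary word by how likely the observed tap points are under a spatial noise model centred on each intended key, combined with a word prior. The sketch below illustrates that idea; the key coordinates, noise scale, and tiny lexicon are all illustrative assumptions, not WatchWriter’s actual model.

```python
# Hypothetical tap-decoding sketch: a 2D Gaussian spatial model per key
# plus a unigram word prior. All constants below are assumptions.
from math import log

KEY_POS = {"c": (2.5, 2), "a": (0.5, 1), "t": (4.5, 0), "r": (3.5, 0)}
LEXICON = {"cat": 0.6, "car": 0.4}   # word -> prior probability
SIGMA = 0.7                          # tap-noise std dev, in key widths

def log_score(taps, word):
    """Log prior plus Gaussian log-likelihood of the taps given the word."""
    score = log(LEXICON[word])
    for (tx, ty), ch in zip(taps, word):
        kx, ky = KEY_POS[ch]
        score += -((tx - kx) ** 2 + (ty - ky) ** 2) / (2 * SIGMA ** 2)
    return score

def decode(taps):
    """Return the lexicon word that best explains the tap sequence."""
    return max(LEXICON, key=lambda w: log_score(taps, w))
```

In this framing, noisy taps need not land on the intended keys at all: the decoder trades off spatial evidence against the language prior, which is what allows effective typing at smartwatch scale.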
Exploring Non-touchscreen Gestures for Smartwatches
Although smartwatches are gaining popularity among mainstream consumers, the input space is limited due to their small form factor. The goal of this work is to explore how to design non-touchscreen gestures to extend the input space of smartwatches. We conducted an elicitation study eliciting gestures for 31 smartwatch tasks. From this study, we demonstrate that a consensus exists among the participants on the mapping of gesture to command and use this consensus to specify a user-defined gesture set. Using gestures collected during our study, we define a taxonomy describing the mapping and physical characteristics of the gestures. Lastly, we provide insights to inform the design of non-touchscreen gestures for smartwatch interaction.
WearWrite: Crowd-Assisted Writing from Smartwatches
The physical constraints of smartwatches limit the range and complexity of tasks that can be completed. Despite interface improvements on smartwatches, the promise of enabling productive work remains largely unrealized. This paper presents WearWrite, a system that enables users to write documents from their smartwatches by leveraging a crowd to help translate their ideas into text. WearWrite users dictate tasks, respond to questions, and receive notifications of major edits on their watch. Using a dynamic task queue, the crowd receives tasks issued by the watch user and generic tasks from the system. In a week-long study with seven smartwatch users supported by approximately 29 crowd workers each, we validate that it is possible to manage the crowd writing process from a watch. Watch users captured new ideas as they came to mind and managed a crowd during spare moments while going about their daily routine. WearWrite represents a new approach to getting work done from wearables using the crowd.
Serendipity: Finger Gesture Recognition using an Off-the-Shelf Smartwatch
Previous work on muscle activity sensing has leveraged specialized sensors such as electromyography and force sensitive resistors. While these sensors show great potential for detecting finger/hand gestures, they require additional hardware that adds to the cost and user discomfort. Past research has utilized sensors on commercial devices, focusing on recognizing gross hand gestures. In this work we present Serendipity, a new technique for recognizing unremarkable and fine-motor finger gestures using integrated motion sensors (accelerometer and gyroscope) in off-the-shelf smartwatches. Our system demonstrates the potential to distinguish 5 fine-motor gestures like pinching, tapping and rubbing fingers with an average f1-score of 87%. Our work is the first to explore the feasibility of using solely motion sensors on everyday wearable devices to detect fine-grained gestures. This promising technology can be deployed today on current smartwatches and has the potential to be applied to cross-device interactions, or as a tool for research in fields involving finger and hand motion.
B2B-Swipe: Swipe Gesture for Rectangular Smartwatches from a Bezel to a Bezel
We present B2B-Swipe, a single-finger swipe gesture for a rectangular smartwatch that starts at a bezel and ends at a bezel to enrich input vocabulary. There are 16 possible B2B-Swipes because a rectangular smartwatch has four bezels. Moreover, B2B-Swipe can be implemented with a single-touch screen with no additional hardware. Our study shows that B2B-Swipe can co-exist with Bezel Swipe and Flick, with an error rate of 3.7% under the sighted condition and 8.0% under the eyes-free condition. Furthermore, B2B-Swipe is potentially accurate (i.e., the error rates were 0% and 0.6% under the sighted and eyes-free conditions) if the system uses only B2B-Swipes for touch gestures.
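Since a rectangular smartwatch has four bezels, labeling a swipe by its (start bezel, end bezel) pair yields the 16 classes the abstract describes. A minimal classification sketch, assuming an illustrative edge-margin threshold and screen coordinates (not the paper’s implementation):

```python
# Sketch of bezel-to-bezel swipe classification on a rectangular screen.
# The margin width is an illustrative assumption.
MARGIN = 20  # px; touches within this band of an edge count as "at a bezel"

def bezel(x, y, width, height, margin=MARGIN):
    """Return which bezel a touch point lies on, or None if interior."""
    if x <= margin:
        return "left"
    if x >= width - margin:
        return "right"
    if y <= margin:
        return "top"
    if y >= height - margin:
        return "bottom"
    return None

def classify_swipe(start, end, width, height):
    """Label a swipe by its (start bezel, end bezel) pair; a B2B-Swipe
    requires both endpoints to lie on a bezel."""
    b1 = bezel(*start, width, height)
    b2 = bezel(*end, width, height)
    if b1 is None or b2 is None:
        return None  # not a bezel-to-bezel gesture
    return (b1, b2)
```

Because only the touch-down and touch-up positions are inspected, such a scheme needs no hardware beyond a single-touch screen, consistent with the abstract’s claim.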
SESSION: Online Communities – Identities and Behaviors
Anonymity, Intimacy and Self-Disclosure in Social Media
Self-disclosure is rewarding and provides significant benefits for individuals, but it also involves risks, especially in social media settings. We conducted an online experiment to study the relationship between content intimacy and willingness to self-disclose in social media, and how identification (real name vs. anonymous) and audience type (social ties vs. people nearby) moderate that relationship. Content intimacy is known to regulate self-disclosure in face-to-face communication: people self-disclose less as content intimacy increases. We show that such regulation persists in online social media settings. Further, although anonymity and an audience of social ties are both known to increase self-disclosure, it is unclear whether they (1) increase the self-disclosure baseline for content of all intimacy levels, or (2) weaken intimacy’s regulation effect, making people more willing to disclose intimate content. We show that intimacy always regulates self-disclosure, regardless of settings. We also show that anonymity mainly increases the self-disclosure baseline and (sometimes) weakens the regulation. On the other hand, an audience of social ties increases the baseline but strengthens the regulation. Finally, we demonstrate that anonymity has a more salient effect on content of negative valence. The results are critical to understanding the dynamics and opportunities of self-disclosure in social media services that vary in levels of identification and types of audience.
Look Before You Leap: Improving the Users’ Ability to Detect Fraud in Electronic Marketplaces
Reputation systems in current electronic marketplaces can easily be manipulated by malicious sellers in order to appear more reputable than appropriate. We conducted a controlled experiment with 40 UK and 41 German participants on their ability to detect malicious behavior by means of an eBay-like feedback profile versus a novel interface involving an interactive visualization of reputation data. The results show that participants using the new interface could better detect and understand malicious behavior in three out of four attacks (the overall detection accuracy 77% in the new vs. 56% in the old interface). Moreover, with the new interface, only 7% of the users decided to buy from the malicious seller (the options being to buy from one of the available sellers or to abstain from buying), as opposed to 30% in the old interface condition.
SESSION: Affording Collective Action in Social Media
Mediating the Undercurrents: Using Social Media to Sustain a Social Movement
While studies of social movements have mostly examined prevalent public discourses, undercurrents (the backstage practices consisting of meaning-making processes, narratives, and situated work) have received less attention. Through a qualitative interview study with sixteen participants, we examine the role of social media in supporting the undercurrents of the Umbrella Movement in Hong Kong. Interviews focused on an intense period of the movement exemplified by sit-in activities inspired by Occupy Wall Street in the USA. Whereas the use of Facebook for public discourse was similar to what has been reported in other studies, we found that an ecology of social media tools such as Facebook, WhatsApp, Telegram, and Google Docs mediated undercurrents that served to ground the public discourse of the movement. We discuss how the undercurrents sustained and developed public discourses in concrete ways.
Designing Cyberbullying Mitigation and Prevention Solutions through Participatory Design With Teenagers
While social media platforms enable individuals to easily communicate and share experiences, they have also emerged as a tool for cyberbullying. Teenagers represent an especially vulnerable population for negative emotional responses to cyberbullying. At the same time, attempts to mitigate or prevent cyberbullying from occurring in these networked spaces have largely failed because of the complexity and nuance with which young people bully others online. To address challenges related to designing for cyberbullying intervention and mitigation, we detail findings from participatory design work with two groups of high school students in spring 2015. Over the course of five design sessions spanning five weeks, participants shared their experiences with cyberbullying and iteratively designed potential solutions. We provide an in-depth discussion of the range of cyberbullying mitigation solutions participants designed. We focus on challenges participants identified in designing for cyberbullying support and prevention and present a set of five potential cyberbullying mitigation solutions based on the results of the design sessions.
Understanding Social Media Disclosures of Sexual Abuse Through the Lenses of Support Seeking and Anonymity
Support seeking in stigmatized contexts is useful when the discloser receives the desired response, but it also entails social risks. Thus, people do not always disclose or seek support when they need it. One such stigmatized context for support seeking is sexual abuse. In this paper, we use mixed methods to understand abuse-related posts on reddit. First, we take a qualitative approach to understand post content. Then we use quantitative methods to investigate the use of “throwaway” accounts, which provide greater anonymity, and report on factors associated with support seeking and first-time disclosures. In addition to significant linguistic differences between throwaway and identified accounts, we find that those using throwaway accounts are significantly more likely to engage in seeking support. We also find that men are significantly more likely to use throwaway accounts when posting about sexual abuse. Results suggest that subreddit moderators and members who wish to provide support pay attention to throwaway accounts, and we discuss the importance of context-specific anonymity in support seeking.
Dear Diary: Teens Reflect on Their Weekly Online Risk Experiences
In our study, 68 teens spend two months reflecting on their weekly online experiences and report 207 separate risk events involving information breaches, online harassment, sexual solicitations, and exposure to explicit content. We conduct a structured, qualitative analysis to characterize the salient dimensions of their risk experiences, such as severity, level of agency, coping strategies, and whether the teens felt like the situation had been resolved. Overall, we found that teens can potentially benefit from lower risk online situations, which allow them to develop crucial interpersonal skills, such as boundary setting, conflict resolution, and empathy. We can also use the dimensions of risk described in this paper to identify potentially harmful risk trajectories before they become high-risk situations. Our end goal is to find a way to empower and protect teens so that they can benefit from online engagement.
SESSION: Designing New Player Experiences
Contextual Autonomy Support in Video Game Play: A Grounded Theory
Autonomy experience constitutes a core part of the intrinsic motivation of playing games. While research has explored how autonomy is afforded by a game’s design, little is known about the role of the social context of play. Particularly, engaging with serious games or gamified applications is often obligatory, which may thwart autonomy. To tease out contextual factors that affect autonomy, we conducted a qualitative interview study that compared game-play experience in leisure and work contexts. We found that leisure contexts, particularly solitary play, support autonomy through a time and space shielded from outer demands, the license to (dis)engage with and configure the situation to fit one’s spontaneous interests, and a lack of social and material consequence. Thwarted autonomy occurs both in leisure and work contexts when players’ spontaneous interests mismatch socially demanded gameplay. We discuss implications for entertainment and applied gaming.
Sensation: Measuring the Effects of a Human-to-Human Social Touch Based Controller on the Player Experience
We observe increasing interest in the use of full-body interaction in games. However, human-to-human social touch interaction has not been implemented as a sophisticated gaming apparatus. To address this, we designed the Sensation, a device for detecting touch patterns between players, and introduce the game, Shape Destroy, which is a collaborative game designed to be played with social touch. To understand whether social touch meaningfully contributes to the overall player experience in collaborative games, we conducted a user study with 30 participants. Participants played the same game using i) the Sensation and ii) a gamepad, and completed a set of questionnaires aimed at measuring immersion levels. The collected data and our observations indicated significant increases in general, shared, ludic, and affective involvement. Thus, human-to-human touch can be considered a promising control method for collaborative physical games.
“I Love All the Bits”: The Materiality of Boardgames
This paper presents findings from a study of boardgamers which stress the importance of the materiality of modern boardgames. It demonstrates that materiality is one of four significant factors in the player experience of tabletop gaming and describes four domains of materiality in boardgaming settings. Further, building on understanding of non-use in HCI, it presents boardgames as a unique situation of parallel use, in which users simultaneously engage with a single game in both digital and material, non-digital environments.
Destructive Games: Creating Value by Destroying Valuable Physical Objects
While personal fabrication tools, such as laser cutters and milling machines, are intended for construction, we are exploring their use for destruction. We present a series of games that result in valuable physical objects being destroyed: objects owned by the players. Interestingly, we found that we can design these games to be desirable to play, despite the loss of the object, by instead producing social value. As part of a user study, twelve students played a destructive game in which a laser cutter cut up their own money bills. Surprisingly, 8 out of 12 participants would play again. They shared their post-game stories with us.
SESSION: Usability and User Burden
Understanding the Relationship between Frustration and the Severity of Usability Problems: What can Psychophysiological Data (Not) Tell Us?
Frustration is used as a criterion for identifying usability problems (UPs) and for rating their severity in a few of the existing severity scales, but it is not operationalized. No research has systematically examined how frustration varies with the severity of UPs. We aimed to address these issues with a hybrid approach, using the Self-Assessment Manikin, comments elicited with Cued-Recall Debrief, galvanic skin responses (GSR) and gaze data. Two empirical studies involving a search task with a website known to have UPs were conducted to substantiate findings and improve on the methodological framework, which could facilitate usability evaluation practice. Results showed no correlation between GSR peaks and severity ratings, but GSR peaks were correlated with frustration scores, a metric we developed. The Peak-End rule was partially verified. The evaluator effect was a limitation, as it confounded the severity ratings of UPs. Future work aims to control this effect and to develop a multifaceted severity scale.
Developing and Validating the User Burden Scale: A Tool for Assessing User Burden in Computing Systems
Computing systems that place a high level of burden on their users can have a negative effect on initial adoption, retention, and overall user experience. Through an iterative process, we have developed a model for user burden that consists of six constructs: 1) difficulty of use, 2) physical, 3) time and social, 4) mental and emotional, 5) privacy, and 6) financial. If researchers and practitioners have an understanding of the overall level of burden a system may place on its users, they can have a better sense of whether and where to target future design efforts that can reduce those burdens. To help assist with understanding and measuring user burden, we have also developed and validated a measure of user burden in computing systems called the User Burden Scale (UBS), which is a 20-item scale with 6 individual sub-scales representing each construct. This paper presents the process we followed to develop and validate this scale for use in evaluating user burden in computing systems. Results indicate that the User Burden Scale has good overall inter-item reliability, convergent validity with similar scales, and concurrent validity when comparing systems abandoned vs. those still in use.
COGCAM: Contact-free Measurement of Cognitive Stress During Computer Tasks with a Digital Camera
Contact-free camera-based measurement of cognitive stress opens up new possibilities for human-computer interaction with applications in remote learning, stress monitoring, and optimization of workload for user experience. The autonomic nervous system controls the inter-beat intervals of the heart and breathing patterns, and these signals change under cognitive stress. We built a participant-independent cognitive stress recognition model based on photoplethysmographic signals measured remotely at a distance of 3 meters. We tested the model on naturalistic responses from 10 individuals completing randomized-order computer-based tasks (ball control and card sorting). The system successfully detected increased stress during the tasks, which were consistent with self-report measures. Changes in heart rate variability were more discriminative indicators of cognitive stress than were heart rate and breathing rate.
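The abstract names inter-beat intervals, heart rate, and heart rate variability (HRV) as the underlying signals. As an illustration only, the following sketch computes mean heart rate and RMSSD, a standard time-domain HRV index; the paper’s actual feature set is not specified here.

```python
# Standard-formula sketch: heart rate and RMSSD from inter-beat
# intervals (IBIs) in milliseconds. Not the paper's actual pipeline.
from math import sqrt

def heart_rate_bpm(ibis_ms):
    """Mean heart rate in beats per minute from IBIs in milliseconds."""
    mean_ibi = sum(ibis_ms) / len(ibis_ms)
    return 60_000 / mean_ibi

def rmssd(ibis_ms):
    """Root mean square of successive IBI differences; reduced HRV is a
    typical correlate of increased cognitive stress."""
    diffs = [b - a for a, b in zip(ibis_ms, ibis_ms[1:])]
    return sqrt(sum(d * d for d in diffs) / len(diffs))
```

Indices like RMSSD depend on beat-to-beat differences rather than the mean rate, which is consistent with the finding that HRV discriminated stress better than heart rate alone.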
When Bad Feels Good: Assistance Failures and Interface Preferences
User interfaces often attempt to assist users by automating elements of interaction, but these attempts will periodically fail, impeding user performance. To understand the design implications of correct and incorrect assistance, we conducted an experiment in which subjects chose their preferred of two interfaces, neutral and snapping, for a series of 10 drag-and-drop tasks. With neutral the dragged object moved pixel-by-pixel, and with snapping the object snapped to a grid. Snapping trials were engineered to provide controlled levels of objective performance gains and losses with respect to neutral: gains were achieved when the target was aligned with the grid, and losses were achieved through misalignment, which required subjects to drop the object, hold a key, and complete the task using a finer movement resolution. Results showed a significant preference for the snapping interface, even when losses impaired performance.
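The two drag behaviours being compared reduce to a small difference in how the drop position is computed, sketched below with an illustrative grid size (the study’s exact parameters are not given here):

```python
# Illustrative sketch of the study's two conditions. GRID is an assumption.
GRID = 32  # px

def neutral_drop(x, y):
    """Neutral condition: pixel-by-pixel, position is used as-is."""
    return (x, y)

def snapping_drop(x, y, grid=GRID):
    """Snapping condition: round the drop position to the nearest grid
    intersection. Targets aligned with the grid yield a gain; misaligned
    targets force the user into a fallback fine-movement mode."""
    snap = lambda v: round(v / grid) * grid
    return (snap(x), snap(y))
```

The experimental manipulation follows directly: placing the target on or off the grid intersections controls whether snapping helps or hurts.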
Using fNIRS in Usability Testing: Understanding the Effect of Web Form Layout on Mental Workload
Amongst the many tasks in our lives, we encounter web forms on a regular basis, whether they are mundane like registering for a website, or complex and important like tax returns. There are many aspects of usability, but one concern for user interfaces is to reduce mental workload and error rates. Whilst most assessment of mental workload relies on subjective and retrospective reporting by users, we examine the potential of functional Near Infrared Spectroscopy (fNIRS) as a tool for objectively and concurrently measuring mental workload during usability testing. We use this technology to evaluate the design of three different form layouts for a car insurance claim process, and show that a form divided into subforms increases mental workload, contrary to our expectations. We conclude that fNIRS is highly suitable for objectively examining mental workload during usability testing, and will therefore be able to provide more detailed insight than summative retrospective assessments. Further, for the fNIRS community, we show that the technology can easily move beyond typical psychology tasks, and be used for more natural study tasks.
SESSION: Reflection on UX Design
Stereotypes and Politics: Reflections on Personas
Using personas in requirements analysis and software development is becoming more and more common. The potential and problems of this method of user representation are the subject of controversy in HCI research. While personas might help focus on the audience, prioritize, challenge assumptions, and prevent self-referential design, the success of the method depends on how and on what basis the persona descriptions are developed, perceived, and employed. Personas run the risk of reinscribing existing stereotypes and following more of an I-methodological than a user-centered approach. This paper gives an overview of the academic discourse regarding the benefits and downfalls of the persona method. A semi-structured interview study examined how usability experts perceive and navigate the controversies of this discourse. The qualitative analysis showed that conflicting paradigms are embedded in the legitimization practices of HCI in the political realities of computer science and corporate settings, leading to contradictions and compromises.
Pushing the Limits of Design Fiction: The Case For Fictional Research Papers
This paper considers how design fictions in the form of ‘imaginary abstracts’ can be extended into complete ‘fictional papers’. Imaginary abstracts are a type of design fiction usually included within the content of ‘real’ research papers; they comprise brief accounts of fictional problem frames, prototypes, user studies and findings. Design fiction abstracts have been proposed as a means to move beyond solutionism and explore the potential societal value and consequences of new HCI concepts. In this paper we contrast the properties of imaginary abstracts with the properties of a published paper that presents fictional research, Game of Drones. Game of Drones extends the notion of imaginary abstracts: rather than including fictional abstracts within a ‘non-fiction’ research paper, it is fiction from start to finish (except for the concluding paragraph, where the fictional nature of the paper is revealed). We review the scope of design fiction in HCI research before contrasting the properties of imaginary abstracts with those of our example fictional research paper. We argue that there are clear merits and weaknesses to both approaches, but that, when used tactfully and carefully, fictional research papers may further empower HCI’s burgeoning design discourse with compelling new methods.
“It’s More of a Mindset Than a Method”: UX Practitioners’ Conception of Design Methods
There has been increasing interest in the work practices of user experience (UX) designers, particularly in relation to approaches that support adoption of human-centered principles in corporate environments. This paper addresses the ways in which UX designers conceive of methods that support their practice, and the methods they consider necessary as a baseline competency for beginning user experience designers. Interviews were conducted with practitioners in a range of companies, with differing levels of expertise and educational backgrounds represented. Interviewees were asked about their use of design methods in practice and the methods they considered core to their practice; in addition, they were asked what set of methods would be vital for beginning designers joining their company. Based on these interviews, I evaluate practitioner conceptions of design methods, proposing an appropriation-oriented mindset that drives the use of tool knowledge, supporting designers’ practice in a variety of corporate contexts. Opportunities are considered for future research in the study of UX practice and training of students in human-computer interaction programs.
Why Design Method Development is Not Always Carried Out as User-Centered Design
In a series of interviews and observations conducted over the past two years, we examined how designers have created, adopted, and evolved design methods in practice. These studies have led us to question the processes used and assumptions held by those who have been involved in developing new design methods. Our studies have shown that even though user-centered design is advocated by most researchers and practitioners, when it comes to their own way of developing design methods for others, it is not done using a user-centered approach. However, we found interesting differences among the three categories of interviewees: practitioners, researchers, and practitioner/researchers.
SESSION: Display and Visualizations
‘A bit like British Weather, I suppose’: Design and Evaluation of the Temperature Calendar
In this paper we present the design and evaluation of the Temperature Calendar — a visualization of temperature variation within a workplace over the course of the past week. This highlights deviation from organizational temperature policy, and aims to bring staff “into the loop” of understanding and managing heating, and so reduce energy waste. The display was deployed for three weeks in five public libraries. Analysis of interaction logs, questionnaires and interviews shows that staff used the displays to understand heating in their buildings, and took action reflecting this new understanding. Bringing together our results, we discuss design implications for workplace displays, and an analysis of carbon emissions generated in constructing and operating our design. More generally, the findings helped us to reflect on the role of policy on energy consumption, and the potential for the HCI community to engage with its application, as well as its definition or modification.
iVoLVER: Interactive Visual Language for Visualization Extraction and Reconstruction
We present the design and implementation of iVoLVER, a tool that allows users to create visualizations without textual programming. iVoLVER is designed to enable flexible acquisition of many types of data (text, colors, shapes, quantities, dates) from multiple source types (bitmap charts, webpages, photographs, SVGs, CSV files) and, within the same canvas, supports transformation of that data through simple widgets to construct interactive animated visuals. Aside from the tool, which is web-based and designed for pen and touch, we contribute the design of the interactive visual language and widgets for extraction, transformation, and representation of data. We demonstrate the flexibility and expressive power of the tool through a set of scenarios, and discuss some of the challenges encountered and how the tool fits within the current infovis tool landscape.
SESSION: Reward me! Motivating and Incentivising Crowdsourcing
Novices Who Focused or Experts Who Didn’t?
Crowd feedback services offer a new method for acquiring feedback during design. A key problem is that the services only return the feedback without any cues about the people who provided it. In this paper, we investigate two cues of a feedback provider — the effort invested in a feedback task and expertise in the domain. First, we tested how positive and negative cues of a provider’s effort and expertise affected perceived quality of the feedback. Results showed both cues affected perceived quality, but primarily when the cues were negative. The results also showed that effort cues affected perceived quality as much as expertise. In a second study, we explored the use of behavioral data for modeling effort for feedback tasks. For a binary classification, the models achieved up to 92% accuracy relative to human raters. This result validates the feasibility of implementing effort cues in crowd services. The contributions of this work will enable increased transparency in crowd feedback services, benefiting both designers and feedback providers.
Curiosity Killed the Cat, but Makes Crowdwork Better
Crowdsourcing systems are designed to elicit help from humans to accomplish tasks that are still difficult for computers. How to motivate workers to stay longer and/or perform better in crowdsourcing systems is a critical question for designers. Previous work has explored different motivational frameworks, both extrinsic and intrinsic. In this work, we examine the potential for curiosity as a new type of intrinsic motivational driver to incentivize crowd workers. We design crowdsourcing task interfaces that explicitly incorporate mechanisms to induce curiosity and conduct a set of experiments on Amazon’s Mechanical Turk. Our experiment results show that curiosity interventions improve worker retention without degrading performance, and that the magnitude of the effects is influenced by both personal characteristics of the worker and the nature of the task.
Pay It Backward: Per-Task Payments on Crowdsourcing Platforms Reduce Productivity
Paid crowdsourcing marketplaces have gained popularity by using piecework, or payment for each microtask, to incentivize workers. This norm has remained relatively unchallenged. In this paper, we ask: is the pay-per-task method the right one? We draw on behavioral economic research to examine whether payment in bulk after every ten tasks, saving money via coupons instead of earning money, or material goods rather than money will increase the number of completed tasks. We perform a twenty-day, between-subjects field experiment (N=300) on a mobile crowdsourcing application and measure how often workers responded to a task notification to fill out a short survey under each incentive condition. Task completion rates increased when paying in bulk after ten tasks: doing so increased the odds of a response by 1.4x, translating into 8% more tasks through that single intervention. Payment with coupons instead of money produced a small negative effect on task completion rates. Material goods were the most robust to decreasing participation over time.
Investigating the Impact of ‘Emphasis Frames’ and Social Loafing on Player Motivation and Performance in a Crowdsourcing Game
With an increasing reliance on crowdsourcing games as data-gathering tools, it is imperative to understand how to motivate and sustain high levels of voluntary contribution. To this end, the present work directly compared the impact of various “emphasis frames,” highlighting distinct intrinsic motivational factors, used to describe an online game in which players provide descriptive metadata “tags” for digitized images. An initial study showed that, compared to frames emphasizing personal enjoyment or altruistic motivations, a frame emphasizing a “growing community of players” solicited significantly fewer contributions. A second study tested the hypothesis that this lower level of contribution resulted from social loafing (the tendency to exert less effort in collective tasks in which contributions are anonymous and pooled). Results revealed that, compared to a no-frame control condition, a frame emphasizing the preponderance of other players reduced contribution levels and game replay likelihood, whereas a frame emphasizing the scarcity of fellow players increased contribution and replay levels. Various strategies for counteracting social loafing in crowdsourcing contexts are discussed.
SESSION: Making Interfaces Work for Each Individual
We Need Numbers!: Heuristic Evaluation during Demonstrations (HED) for Measuring Usability in IT System Procurement
We introduce a new usability inspection method called HED (heuristic evaluation during demonstrations) for measuring and comparing the usability of competing complex IT systems in public procurement. The method enhances traditional heuristic evaluation by using user scenarios and demonstrations to take the use context and a comprehensive view of the system into account, and to reveal missing functionality. HED also quantifies the results in a comparable way. We present findings from a real-life validation of the method in a large-scale procurement project of a healthcare and social welfare information system. We analyze and compare the performance of HED to other usability evaluation methods used in procurement. Based on the analysis, HED can be used to evaluate the level of usability of an IT system during procurement correctly, comprehensively and efficiently.
Interface Design Optimization as a Multi-Armed Bandit Problem
“Multi-armed bandits” offer a new paradigm for the AI-assisted design of user interfaces. To help designers understand the potential, we present the results of two experimental comparisons between bandit algorithms and random assignment. Our studies are intended to show designers how bandit algorithms are able to rapidly explore an experimental design space and automatically select the optimal design configuration. Our present focus is on the optimization of a game design space. The results of our experiments show that bandits can make data-driven design more efficient and accessible to interface designers, but that human participation is essential to ensure that AI systems optimize for the right metric. Based on our results, we introduce several design lessons that help keep human design judgment in the loop. We also consider the future of human-technology teamwork in AI-assisted design and scientific inquiry. Finally, as bandits deploy fewer low-performing conditions than typical experiments, we discuss ethical implications for bandits in large-scale experiments in education.
Anchored Customization: Anchoring Settings to the Application Interface to Afford Customization
The settings panel is the standard customization mechanism used in software applications today, yet it has undergone minimal design improvement since its introduction in the 1980s. Entirely disconnected from the application UI, these panels require users to rely on often-cryptic text labels to identify the settings they want to change. We propose the Anchored Customization approach, which anchors settings to conceptually related elements of the application UI. Our Customization Layer prototype instantiates this approach: users can see which UI elements are customizable, and access their associated settings. We designed three variants of Customization Layer based on multi-layered interfaces, and implemented these variants on top of a popular web application for task management, Wunderlist. Two experiments (Mechanical Turk and face-to-face) with a total of 60 participants showed that the two minimalist variants were 35% faster than Wunderlist’s settings panel. Our approach provides significant benefits for users while requiring little extra work from designers and developers of applications.
Heterogeneity in Customization of Recommender Systems By Users with Homogenous Preferences
Recommender systems must find items that match the heterogeneous preferences of their users. Customizable recommenders allow users to directly manipulate the system’s algorithm in order to help it match those preferences. However, customizing may demand a certain degree of skill, and new users in particular may struggle to effectively customize the system. In user studies of two different systems, I show that there is considerable heterogeneity in the way that new users will try to customize a recommender, even within groups of users with similar underlying preferences. Furthermore, I show that this heterogeneity persists beyond the first few interactions with the recommender. System designs should consider this heterogeneity so that new users can both receive good recommendations in their early interactions and learn how to effectively customize the system for their preferences.
SESSION: Everyday Objects as Interaction Surfaces
TouchTokens: Guiding Touch Patterns with Passive Tokens
TouchTokens make it possible to easily build interfaces that combine tangible and gestural input using passive tokens and a regular multi-touch surface. The tokens constrain users’ grasp, and thus, the relative spatial configuration of fingers on the surface, theoretically making it possible to design algorithms that can recognize the resulting touch patterns. We performed a formative user study to collect and analyze touch patterns with tokens of varying shape and size. The analysis of this pattern collection showed that individual users have a consistent grasp for each token, but that this grasp is user-dependent and that different grasp strategies can lead to confounding patterns. We thus designed a second set of tokens featuring notches that constrain users’ grasp. Our recognition algorithm can classify the resulting patterns with a high level of accuracy (>95%) without any training, enabling application designers to associate rich touch input vocabularies with command triggers and parameter controls.
Designing a Willing-to-Use-in-Public Hand Gestural Interaction Technique for Smart Glasses
Smart glasses suffer from obtrusive or cumbersome interaction techniques. Studies show that people are not willing to publicly use, for example, voice control or mid-air gestures in front of the face. Some techniques also hamper the high degree of freedom that the glasses afford. In this paper, we derive design principles for socially acceptable, yet versatile, interaction techniques for smart glasses based on a survey of related work. We propose an exemplary design, based on a haptic glove integrated with smart glasses, as an embodiment of the design principles. The design is further refined into three interaction scenarios: text entry, scrolling, and point-and-select. Through a user study conducted in a public space we show that the interaction technique is considered unobtrusive and socially acceptable. Furthermore, the performance of the technique in text entry is comparable to state-of-the-art techniques. We conclude by reflecting on the advantages of the proposed design.
Project Jacquard: Interactive Digital Textiles at Scale
Project Jacquard presents manufacturing technologies that enable deploying invisible ubiquitous interactivity at scale. We propose novel interactive textile materials that can be manufactured inexpensively using existing textile weaving technology and equipment.
The development of touch-sensitive textiles begins with the design and engineering of a new highly conductive yarn. The yarns and textiles can be produced by standard textile manufacturing processes and can be dyed to any color, made with a number of materials, and designed to a variety of thicknesses and textures to be consistent with garment designers’ needs.
We describe the development of yarn, textiles, garments, and user interactivity; we present the opportunities and challenges of creating a manufacturable interactive textile for wearable computing.
GaussMarbles: Spherical Magnetic Tangibles for Interacting with Portable Physical Constraints
This work develops a system of spherical magnetic tangibles, GaussMarbles, that exploits the unique affordances of spherical tangibles for interacting with portable physical constraints. The proposed design of each magnetic sphere includes a magnetic polyhedron in the center. The magnetic polyhedron provides bi-polar magnetic fields, which are expanded in equal dihedral angles as robust features for tracking, allowing an analog Hall-sensor grid to resolve the near-surface 3D position accurately in real-time. Possible interactions between the magnetic spheres and portable physical constraints in various levels of embodiment were explored using several example applications.
GaussRFID: Reinventing Physical Toys Using Magnetic RFID Development Kits
We present GaussRFID, a hybrid RFID and magnetic-field tag sensing system that supports interactivity when embedded in retrofitted or new physical objects. The system consists of two major components – GaussTag, a magnetic-RFID tag that is combined with a magnetic unit and an RFID tag, and GaussStage, which is a tag reader that is combined with an analog Hall-sensor grid and an RFID reader. A GaussStage recognizes the ID, 3D position, and partial 3D orientation of a GaussTag near the sensing platform, and provides simple interfaces for involving physical constraints, displays and actuators in tangible interaction designs. The results of a two-day toy-hacking workshop reveal that all six groups of 31 participants successfully modified physical toys to interact with computers using the GaussRFID system.
SESSION: Fingers and Technology
The Flat Finger: Exploring Area Touches on Smartwatches
Smartwatches are an emerging device category featuring highly limited input and display surfaces. We explore how touch contact areas, such as lines generated by flat fingers, can be used to increase input expressivity in these diminutive systems in three ways. Firstly, we present four design themes that emerged from an ideation workshop in which five designers proposed concepts for smartwatch touch area interaction. Secondly, we describe a sensor unit and study that captured user performance with 31 area touches and contrasted this against standard targeting performance. Finally, we describe three demonstration applications that instantiate ideas from the workshop and deploy the most reliably and rapidly produced area touches. We report generally positive user reactions to these demonstrators: the area touch interactions were perceived as quick, convenient and easy to learn and remember. Together this work characterizes how designers can use area touches in watch UIs, which area touches are most appropriate and how users respond to this interaction style.
The Performance and Preference of Different Fingers and Chords for Pointing, Dragging, and Object Transformation
The development of robust methods to identify which finger is causing each touch point, called “finger identification,” will open up a new input space where interaction designers can associate system actions to different fingers. However, relatively little is known about the performance of specific fingers as single touch points or when used together in a “chord.” We present empirical results for accuracy, throughput, and subjective preference gathered in five experiments with 48 participants exploring all 10 fingers and 7 two-finger chords. Based on these results, we develop design guidelines for reasonable target sizes for specific fingers and two-finger chords, and a relative ranking of the suitability of fingers and two-finger chords for common multi-touch tasks. Our work contributes new knowledge regarding specific finger and chord performance and can inform the design of future interaction techniques and interfaces utilizing finger identification.
How We Type: Movement Strategies and Performance in Everyday Typing
This paper revisits the present understanding of typing, which originates mostly from studies of trained typists using the ten-finger touch typing system. Our goal is to characterise the majority of present-day users who are untrained and employ diverse, self-taught techniques. In a transcription task, we compare self-taught typists and those who took a touch typing course. We report several differences in performance, gaze deployment and movement strategies. The most surprising finding is that self-taught typists can achieve performance levels comparable with touch typists, even when using fewer fingers. Motion capture data exposes three predictors of high performance: 1) unambiguous mapping (a letter is consistently pressed by the same finger), 2) active preparation of upcoming keystrokes, and 3) minimal global hand motion. We release an extensive dataset on everyday typing behavior.
Finger-Aware Shortcuts
We evaluate and demonstrate finger, hand, and posture identification as keyboard shortcuts. By detecting the hand and finger used to press a key, and open or closed hand postures, a key press can have multiple command mappings. A formative study reveals performance and preference patterns when using different fingers and postures to press a key. The results are used to develop a computer vision algorithm to identify fingers and hands on a keyboard captured by a built-in laptop camera and reflector. This algorithm is built into a background service to enable system-wide finger-aware shortcut keys in any application. A controlled experiment uses the service to compare the performance of Finger-Aware Shortcuts with existing methods. The results show Finger-Aware Shortcuts are comparable with a common class of shortcuts using multiple modifier keys. Finally, application demonstrations illustrate different use cases and mappings for Finger-Aware Shortcuts and extend the idea to two-handed key presses, continuous parameter control, and menu selection.
SESSION: Privacy over Time and Relationships
Autonomous and Interdependent: Collaborative Privacy Management on Social Networking Sites
Although information sharing on social networking sites (SNSs) usually involves multiple stakeholders, limited attention has been paid so far to conceptualizing users’ information practices as a collaborative process. To fill this gap in the literature, we develop a survey study to examine collaborative privacy management strategies involving co-owners of shared content. By conducting two online surveys (N = 304, 427) with different samples, our findings show how individuals protect online privacy collaboratively and how their autonomous decision making regarding privacy management is shaped by the interdependent use of SNSs with their social connections. We discuss theoretical implications to privacy research and suggest design guidelines for better supporting users’ needs for collaborating with their social ties to achieve collective privacy goals.
“We’re on the Same Page”: A Usability Study of Secure Email Using Pairs of Novice Users
Secure email is increasingly being touted as usable by novice users, with a push for adoption based on recent concerns about government surveillance. To determine whether secure email is ready for grassroots adoption, we employ a laboratory user study that recruits pairs of novice users to install and use several of the latest systems to exchange secure messages. We present both quantitative and qualitative results from 25 pairs of novice users as they use Pwm, Tutanota, and Virtru. Participants report being more at ease with this type of study and better able to cope with mistakes since both participants are “on the same page”. We find that users prefer integrated solutions over depot-based solutions, and that tutorials are important in helping first-time users. Hiding the details of how a secure email system provides security can lead to a lack of trust in the system. Participants expressed a desire to use secure email, but few wanted to use it regularly and most were unsure of when they might use it.
Enhancing Lifelogging Privacy by Detecting Screens
Low-cost, lightweight wearable cameras let us record (or ‘lifelog’) our lives from a ‘first-person’ perspective for purposes ranging from fun to therapy. But they also capture private information that people may not want to be recorded, especially if images are stored in the cloud or visible to other people. For example, recent studies suggest that computer screens may be lifeloggers’ single greatest privacy concern, because many people spend a considerable amount of time in front of devices that display private information. In this paper, we investigate using computer vision to automatically detect computer screens in photo lifelogs. We evaluate our approach on an existing in-situ dataset of 36 people who wore cameras for a week, and show that our technique could help manage privacy in the upcoming era of wearable cameras.
Sharing Steps in the Workplace: Changing Privacy Concerns Over Time
Personal health technologies are increasingly introduced in workplace settings. Yet little is known about workplace implementations of activity tracker use and the kind of experiences and concerns employees might have when engaging with these technologies in practice. We report on an observational study of a Danish workplace participating in a step counting campaign. We find that concerns of employees who choose to participate and those who choose not to differ. Moreover, privacy concerns of participants develop and change over time. Our findings challenge the assumption that consumers are becoming more comfortable with perceived risks associated with wearable technologies, instead showing how users can be initially influenced by the strong positive rhetoric surrounding these devices, only to be surprised by the necessity to renegotiate boundaries of disclosure in practice.
You Can’t Watch This!: Privacy-Respectful Photo Browsing on Smartphones
We present an approach to protect photos on smartphones from unwanted observations by distorting them in a way that makes it hard or impossible for an onlooker who is not familiar with the photographs to recognize their content. At the same time, due to the chosen way of distortion, device owners who know the original images have no problem recognizing their photos. We report the results of a user study (n=18) that showed very high usability properties for all tested graphical filters (only 11 out of 216 distorted photos were not correctly identified by their owners). At the same time, two of the filters significantly reduced the observability of the image contents.
SESSION: Supporting Player Social Experiences
Revisiting Computer-Mediated Intimacy: In-Game Marriage and Dyadic Gameplay in Audition
Existing studies in the field of HCI and CSCW have pointed to the significance of investigating computer-mediated intimacy, bringing together concerns from ubiquitous computing, affective technologies, and experience design. However, existing conceptualizations of intimacy in collaborative online systems are largely based on empirical studies of systems that have similar social dynamics and user groups, which could lead to a bias in investigating intimacy. Using Audition, a dance battle Multiplayer Online Game with a popular marriage system, as our field site, we focus on dyadic intimacy in a non-violent online social space that has many young non-Caucasian and female users. We contribute to both confirming and further advancing existing theories of computer-mediated intimacy using this new dataset. We also suggest promising future directions for exploring subjective intimate experiences in a scientifically defensible way.
Ping to Win?: Non-Verbal Communication and Team Performance in Competitive Online Multiplayer Games
Non-verbal communication plays a large role in online competitive multiplayer games, as team members attempt to coordinate with each other without distraction to achieve victory. Some games enable this communication through “pings,” alerts that are easy to activate and provide auditory and visual cues for teammates. In this paper, we review the literature on gestures and non-verbal communication and, through an empirical analysis of 84,489 players across 10,293 matches in the popular game League of Legends, illustrate ping use in multiplayer games and test the impact of ping actions on team performance. We show that the number of pings depends on player role and in-game activity, and that pings have a positive but concave relationship with player performance. These findings demonstrate the importance of non-verbal communication and interruption on the performance of virtual team members. We conclude by discussing the implications of these results for theorizing and designing sociotechnical systems that rely on users to engage in synchronous, collaborative work in shared visual spaces.
The Proficiency-Congruency Dilemma: Virtual Team Design and Performance in Multiplayer Online Games
Multiplayer online battle arena games provide an excellent opportunity to study team performance. When designing a team, players must negotiate a proficiency-congruency dilemma between selecting roles that best match their experience and roles that best complement the existing roles on the team. We adopt a mixed-methods approach to explore how players negotiate this dilemma. Using data from League of Legends, we define a similarity space to operationalize team design constructs about role proficiency, generality, and congruency. We collect publicly available data from 3.36 million players to test the influence of these constructs on team performance. We also conduct focus groups with novice and elite players to understand how players’ team design practices vary with expertise. We find that the two factors, player proficiency and team congruency, both increase team performance, with the former having a stronger impact. We also find that elite players are better at balancing the two factors than the novice players. These findings have implications for players, designers, and theorists about how to recommend team designs that jointly prioritize individuals’ expertise and teams’ compatibility.
Design and Evaluation of a Multi-Player Mobile Game for Icebreaking Activity
In collaboration between strangers, group formation and familiarization often take a lot of time. To facilitate this, icebreaking activities are commonly utilized, aiming at a positive and relaxing social atmosphere. To explore how interactive technology could serve as a tool in such social activity, we developed Who’s Next, a multiplayer quiz-based mobile game intended to break the ice in a group of strangers. The design utilizes the information asymmetry between people, aiming to encourage joint activity between them. We conducted six evaluation sessions where four to six participants in each played the game together and were interviewed. Who’s Next was found to be a promising support for icebreaking. It was considered to offer a comfortable way of sharing information about oneself and getting to know newly-met strangers. We conclude that interactive technology could successfully support the facilitator role in encouraging interaction and creating a relaxed atmosphere between strangers.
SESSION: How Does It Look? Evaluating Visual Design
An EEG-based Approach for Evaluating Graphic Icons from the Perspective of Semantic Distance
Graphic icons play an increasingly important role in interface design due to the proliferation of digital devices in recent years. Their ability to express information in a universal fashion allows us to immediately interact with new applications, systems, and devices. Icons can, however, cause user confusion and frustration if designed poorly. Several studies have evaluated icons using behavioral-performance metrics such as reaction time as well as self-report methods. However, determining the usability of icons based on behavioral measures alone is not straightforward, because users’ interpretations of the meaning of icons involve various cognitive processes and perceptual mechanisms. Moreover, these perceptual mechanisms are affected not only by the icons themselves, but by usage scenarios. Thus, we need a means of sensitively and continuously measuring users’ different cognitive processes when they are interacting with icons. In this study, we propose an EEG-based approach to icon evaluation, in which users’ EEG signals are measured in multiple usage scenarios. Based on a combination of EEG and behavioral results, we provide a novel interpretation of the participants’ perception during these tasks, and identify some important implications for icon design.
Aesthetic Appeal and Visual Usability in Four Icon Design Eras
Technological artefacts express time periods in their visual design. Over time, visual culture changes and thus affects the design of pictorial representations in technological products, such as icons in user interfaces. Previous research on temporal aspects of human-computer interaction has focused on particular interaction situations, but not on the effects of design eras on user experience. The influence of icon design styles of different eras on aesthetic and usability experiences was studied with the method of primed product comparisons. Affective preferences and their processing times were analysed in order to examine visual usability in terms of semantic distance and aesthetic appeal of icons from different design eras. Aesthetic and usability preferences of icons from different eras varied, which allowed the investigation of the process in which users experience icons. This examination elaborates that process, for example the relationship between cognitive processing fluency, familiarity, and beauty.
The Effect of Thermal Stimuli on the Emotional Perception of Images
Thermal stimulation is a feedback channel that has the potential to influence the emotional response of people to media such as images. While previous work has demonstrated that thermal stimuli might have an effect on the emotional perception of images, little is understood about the exact emotional responses different thermal properties and presentation techniques can elicit towards images. This paper presents two user studies that investigate the effect thermal stimuli parameters (e.g. intensity) and timing of thermal stimuli presentation have on the emotional perception of images. We found that thermal stimulation increased valence and arousal in images with low valence and neutral to low arousal. Thermal augmentation of images also reduced valence and arousal in high valence and arousal images. We discovered that depending on when thermal augmentation is presented, it can either be used to create anticipation or enhance the inherent emotion an image is capable of evoking.
Using Crowd Sourcing to Measure the Effects of System Response Delays on User Engagement
It is well established that delays in system response time negatively impact productivity, error rates and user satisfaction. What is less clear is the degree to which these effects deter users from engaging with a system. Usability guidelines provide rough response time targets for minimizing these effects across various types of interactions. However, developers faced with technical limitations or cost constraints that prevent them from meeting such targets are given no data with which to estimate the impact that system response delays will have on user engagement. In this work, we demonstrate a methodology for using crowd sourcing platforms to examine (1) the relative impacts of different delay types and (2) the effects of marginal changes in system response times. We compare two common network delay types, those caused by limited bandwidth (increased download times) and those caused by network latency (lag in responsiveness), and present how these delays reduce engagement in the context of a crowd sourced image classification task. Furthermore, we model how financial incentives interact with system response delays to impact user engagement. Finally, we show how such models can be used to optimize the cost of system design choices.
SESSION: Participatory Design (PD) and Applications
Multi-lifespan Design Thinking: Two Methods and a Case Study with the Rwandan Diaspora
In recent years, the HCI community has recognized the need to address long(er) term information system design around on-going societal problems. Yet how to engage stakeholders effectively in multi-lifespan design thinking remains an open challenge. Toward that end, the work reported here extends an established envisioning method by introducing two new design methods, the multi-lifespan timeline and multi-lifespan co-design, with an emphasis on the element of (long) time. The new methods aim to stimulate participants’ visions of future information systems by: (a) enhancing participants’ understanding of longer timeframes (e.g., 100 years), and (b) guiding participants to effectively project themselves long into the future in their design thinking. We explored these multi-lifespan design methods in work with 51 Africans from Rwanda and the Great Lakes region living in the USA to understand the challenges and opportunities they envision for designing future information systems for transitional justice in Rwanda. Contributions are two-fold: (1) methodological innovation, and (2) a case study of multi-lifespan design thinking generated by diaspora members of post-conflict societies.
Participation Gestalt: Analysing Participatory Qualities of Interaction in Public Space
We introduce the participation gestalt framework for analysing participation in public interactive installations. Building on the concept of interaction gestalt, we define the participation gestalt as the unified perception and experience of participatory qualities as they unfold through interaction with the installation in a socio-cultural setting. The framework consists of five continua, mapping out the qualities of participation in relation to the degree of expressivity, exposure, investment, sociality and persistence that people experience when engaging in the interaction. Individually, the five qualities provide a vocabulary for analysing an interactive installation. Combined, the five qualities constitute a participation gestalt framework by which HCI researchers can qualify how certain forms of participation emerge around public installations. We exemplify the framework by analysing four public installations in different socio-cultural contexts and examining their participation gestalt.
Designing Movement-based Play With Young People Using Powered Wheelchairs
Young people using powered wheelchairs have limited access to engaging leisure activities. We address this issue through a two-stage project: 1) the participatory development of a set of wheelchair-controlled, movement-based games (with 9 participants at a school that provides education for young people who have special needs) and 2) three case studies (4 participants) exploring player perspectives on a set of three wheelchair-controlled casual games. Our results show that movement-based playful experiences are engaging for young people using powered wheelchairs. However, the participatory design process and case studies also reveal challenges for game accessibility regarding the integration of movement in games, diversity of abilities among young people using powered wheelchairs, and the representation of disability in games. In our paper, we explore how to address those challenges in the development of accessible, empowering movement-based games, which is crucial to the wider participation of young people using powered wheelchairs in play.
Participatory Design through a Learning Science Lens
Participatory design is a growing practice in the field of Human Computer Interaction (HCI). This note is a review of how participatory design activities are a form of learning. The premise of this exploration is that participatory design is more than asking participants for their help in design. Instead, participatory design is a set of methods and practices used to scaffold the design experience, increasing participants’ reflection on their own knowledge and accounting for their previous knowledge so they can more fully engage in the design process. This active reflection and consideration of previous experiences are closely tied to metacognition and a number of learning theories. Exploring previous studies provides examples of how learning theories are enacted through participatory design and how a greater awareness of these theories can inform the practice of participatory design.
SESSION: Health Support & Management
Speeching: Mobile Crowdsourced Speech Assessment to Support Self-Monitoring and Management for People with Parkinson’s
We present Speeching, a mobile application that uses crowdsourcing to support the self-monitoring and management of speech and voice issues for people with Parkinson’s (PwP). The application allows participants to audio record short voice tasks, which are then rated and assessed by crowd workers. Speeching then feeds these results back to provide users with examples of how they were perceived by listeners unconnected to them (thus not used to their speech patterns). We conducted our study in two phases. First we assessed the feasibility of utilising the crowd to provide ratings of speech and voice that are comparable to those of experts. We then conducted a trial to evaluate how the provision of feedback, using Speeching, was valued by PwP. Our study highlights how applications like Speeching open up new opportunities for self-monitoring in digital health and wellbeing, and provide a means for those without regular access to clinical assessment services to practice and get meaningful feedback on their speech.
Investigating the Heart Pump Implant Decision Process: Opportunities for Decision Support Tools to Help
Clinical decision support tools (DSTs) are computational systems that aid healthcare decision-making. While effective in labs, almost all of these systems have failed when moved into clinical practice. Healthcare researchers have speculated that this is most likely due to a lack of user-centered HCI considerations in the design of these systems. This paper describes a field study investigating how clinicians make a heart pump implant decision, with a focus on how to best integrate an intelligent DST into their work process. Our findings reveal a lack of perceived need for and trust of machine intelligence, as well as many barriers to computer use at the point of clinical decision-making. These findings suggest an alternative perspective to the traditional use models, in which clinicians engage with DSTs at the point of making a decision. We identify situations across patients’ healthcare trajectories when decision support would help, and we discuss new forms it might take in these situations.
Finding Significant Stress Episodes in a Discontinuous Time Series of Rapidly Varying Mobile Sensor Data
Management of daily stress can be greatly improved by delivering sensor-triggered just-in-time interventions (JITIs) on mobile devices. The success of such JITIs critically depends on being able to mine the time series of noisy sensor data to find the most opportune moments. In this paper, we propose a time series pattern mining method to detect significant stress episodes in a time series of discontinuous and rapidly varying stress data. We apply our model to 4 weeks of physiological, GPS, and activity data collected from 38 users in their natural environment to discover patterns of stress in real life. We find that the duration of a prior stress episode predicts the duration of the next stress episode and stress in mornings and evenings is lower than during the day. We then analyze the relationship between stress and objectively rated disorder in the surrounding neighborhood and develop a model to predict stressful episodes.
Designing Guidelines for Mobile Health Technology: Managing Notification Interruptions in the ICU
Previous research on reducing unwanted interruptions in hospital intensive care units (ICUs) has focused on providing context-aware solutions that consider factors such as the location and activity of the person receiving the interruption. We seek to broaden the understanding of how to manage interruptions by using the Locales Framework to analyze data collected from a field study on mobile notification interruptions in the ICU. Based on our data, along with previous literature on cognitive theories, mental models, strategies for managing interruptions, and principles of human factors, we propose five guidelines to aid in designing mobile technology interventions for the ICU.
SESSION: UX and Usability Methods
Momentary Pleasure or Lasting Meaning?: Distinguishing Eudaimonic and Hedonic User Experiences
User experience (UX) research has expanded our notion of what makes interactive technology good, often putting hedonic aspects of use such as fun, affect, and stimulation at the center. Outside of UX, the hedonic is often contrasted to the eudaimonic, the notion of striving towards one’s personal best. It remains unclear, however, what this distinction offers to UX research conceptually and empirically. We investigate a possible role for eudaimonia in UX research by empirically examining 266 reports of positive experiences with technology and analyzing its relation to established UX concepts. Compared to hedonic experiences, eudaimonic experiences were about striving towards and accomplishing personal goals through technology use. They were also characterized by increased need fulfillment, positive affect, meaning, and long-term importance. Taken together, our findings suggest that while hedonic UX is about momentary pleasures directly derived from technology use, eudaimonic UX is about meaning from need fulfillment.
Researcher-Centered Design of Statistics: Why Bayesian Statistics Better Fit the Culture and Incentives of HCI
A core tradition of HCI lies in the experimental evaluation of the effects of techniques and interfaces to determine if they are useful for achieving their purpose. However, our individual analyses tend to stand alone, and study results rarely accrue in more precise estimates via meta-analysis: in a literature search, we found only 56 meta-analyses in HCI in the ACM Digital Library, 3 of which were published at CHI (often called the top HCI venue). Yet meta-analysis is the gold standard for demonstrating robust quantitative knowledge. We treat this as a user-centered design problem: the failure to accrue quantitative knowledge is not the users’ (i.e. researchers’) failure, but a failure to consider those users’ needs when designing statistical practice. Using simulation, we compare hypothetical publication worlds following existing frequentist practice against worlds following Bayesian practice. We show that Bayesian analysis yields more precise effects with each new study, facilitating knowledge accrual without traditional meta-analyses. Bayesian practices also allow more principled conclusions from small-n studies of novel techniques. These advantages make Bayesian practices a likely better fit for the culture and incentives of the field. Instead of admonishing ourselves to spend resources on larger studies, we propose using tools that more appropriately analyze small studies and encourage knowledge accrual from one study to the next. We also believe Bayesian methods can be adopted from the bottom up without the need for new incentives for replication or meta-analysis. These techniques offer the potential for a more user- (i.e. researcher-) centered approach to statistical analysis in HCI.
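The knowledge-accrual claim lends itself to a small illustration. The sketch below uses hypothetical effect-size numbers and a simple normal-normal conjugate model (not the authors' actual simulation): each study's posterior becomes the next study's prior, so the estimate tightens across small studies without a separate meta-analysis.

```python
import math

def update_normal(prior_mean, prior_sd, obs_mean, obs_sd):
    """Conjugate normal-normal update: combine a prior on an effect size
    with one study's estimate, weighting each by its precision (1/variance)."""
    prior_prec = 1.0 / prior_sd ** 2
    obs_prec = 1.0 / obs_sd ** 2
    post_prec = prior_prec + obs_prec
    post_mean = (prior_mean * prior_prec + obs_mean * obs_prec) / post_prec
    return post_mean, math.sqrt(1.0 / post_prec)

# Hypothetical (mean, standard error) effect-size estimates from three
# small-n studies of the same technique.
studies = [(0.45, 0.30), (0.30, 0.25), (0.40, 0.20)]

mean, sd = 0.0, 1.0  # weakly informative prior on the effect size
for obs_mean, obs_sd in studies:
    mean, sd = update_normal(mean, sd, obs_mean, obs_sd)
    print(f"posterior: {mean:.2f} +/- {sd:.2f}")
```

Running the loop yields a posterior that narrows with each study (roughly 0.41 ± 0.29, then 0.35 ± 0.19, then 0.37 ± 0.14), the kind of cumulative precision gain the paper argues standalone frequentist analyses leave on the table.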
Utilizing Employees as Usability Participants: Exploring When and When Not to Leverage Your Coworkers
Usability testing is an everyday practice for usability professionals in corporations. But, as in all experimental situations, who you study can be as important as what you study. In this Note we explore a common practice in the corporation: experimenting on the company’s employees. While fellow employees can be convenient and avoid issues such as confidentiality, we use two usability studies of mobile and web applications to show that employees spend less time-on-task on competitor websites than non-employees. Non-employees reliably rate competitor websites and apps higher than employees on both usability (on the 10-question SUS scale) and ease of use (on the 1-question SEQ scale). We conclude with recommendations for best practices for usability testing in the corporation.
SESSION: Backstage of Crowdsourcing Legitimacy, Performance and Crowd Support
The Power of Collective Endorsements: Credibility Factors in Medical Crowdfunding Campaigns
Traditional medical fundraising charities have been relying on third-party watchdogs and carefully crafting their reputation over time to signal their credibility to potential donors. As medical fundraising campaigns migrate to online platforms in the form of crowdfunding, potential donors can no longer rely on the organization’s traditional methods for achieving credibility. Individual fundraisers must establish credibility on their own. Potential donors, therefore, seek new factors to assess the credibility of crowdfunding campaigns. In this paper, we investigate current practices in assessing the credibility of online medical crowdfunding campaigns. We report results from a mixed-methods study that analyzed data from social media and semi-structured interviews. We discovered eleven factors associated with the perceived credibility of medical crowdfunding. Of these, three communicative/emotional factors were unique to medical crowdfunding. We also found a distinctive validation practice, the collective endorsement. Close-connections’ online presence and external online communities come together to form this collective endorsement in online medical fundraising campaigns. We conclude by describing how fundraisers can leverage collective endorsements to improve their campaigns’ perceived credibility.
Legitimacy Work: Invisible Work in Philanthropic Crowdfunding
Crowdfunding, the practice of funding a project by soliciting donations via the internet, allows organizations and individuals alike to raise funds for a variety of causes. In this paper, we present the results of a study of philanthropic crowdfunding, aimed at understanding some of the practices and needs associated with raising money for charitable causes. Our analysis highlights the diversity of stakeholders and roles in philanthropic crowdfunding and the immense amount of work associated with legitimizing many of these roles, including the fundraiser, organization, platform, and project. We introduce the construct of legitimacy work and discuss ways in which current crowdfunding systems both support and thwart this work.
Extracting Heart Rate from Videos of Online Participants
Crowdsourcing experiments online allows for low-cost data gathering with large participant pools; however, collecting data online does not give researchers access to certain metrics. For example, physiological measures such as heart rate (HR) can provide high-resolution data about the physical, emotional, and mental state of the participant. We investigate and characterize the feasibility of gathering HR from videos of online participants engaged in single user and social tasks. We show that room lighting, head motion, and network bandwidth influence measurement quality, but that instructing participants in good practices substantially improves measurement quality. Our work takes a step towards online physiological data collection.
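The abstract does not spell out the measurement pipeline, but remote heart-rate extraction generally works by tracking the periodic brightness change of skin pixels across video frames and finding its dominant frequency. A minimal sketch under that assumption (naive DFT peak-picking on a synthetic brightness trace; real pipelines add face tracking, detrending, and band-pass filtering, and are sensitive to the lighting and motion factors the paper reports):

```python
import math

def estimate_heart_rate(samples, fps, lo_bpm=40, hi_bpm=240):
    """Estimate pulse rate from a mean skin-pixel brightness trace by
    picking the strongest DFT bin inside the plausible heart-rate band."""
    n = len(samples)
    mean = sum(samples) / n
    centered = [s - mean for s in samples]
    best_bpm, best_power = 0.0, -1.0
    for k in range(1, n // 2):
        bpm = (k * fps / n) * 60.0  # frequency of bin k, in beats/minute
        if not (lo_bpm <= bpm <= hi_bpm):
            continue
        re = sum(c * math.cos(2 * math.pi * k * i / n) for i, c in enumerate(centered))
        im = sum(c * math.sin(2 * math.pi * k * i / n) for i, c in enumerate(centered))
        power = re * re + im * im
        if power > best_power:
            best_bpm, best_power = bpm, power
    return best_bpm

# Synthetic trace: a 1.2 Hz (72 bpm) pulse sampled at 30 fps for 10 s.
fps, seconds = 30, 10
trace = [0.5 * math.sin(2 * math.pi * 1.2 * t / fps) for t in range(fps * seconds)]
print(round(estimate_heart_rate(trace, fps)))  # → 72
```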
Highly Successful Projects Inhibit Coordination on Crowdfunding Sites
Donors on crowdfunding sites must coordinate their actions to identify and collectively fund projects prior to their deadline. Some projects receive vast support immediately upon launch. Other seemingly worthwhile projects have more modest success or no success at raising funds. We examine how the presence of high-performing “superstar” projects on a crowdfunding site affects donors’ ability to coordinate their actions and fund other less popular but still worthwhile projects on the site. In a lab experiment where users simulate the dynamics of a crowdfunding site, we found that superstar projects reduce the likelihood that other projects are funded by the crowd, even when the superstar project has no opportunity to steal away donations from other projects. We argue that this is due to superstar projects setting too high a standard of what a “fundable” project looks like, leading donors to underestimate the amount of support within a crowd for less exceptional projects.
Stories We Tell About Labor: Turkopticon and the Trouble with “Design”
This paper argues that designers committed to advancing justice and other non-market values must attend not only to the design of objects, processes, and situations, but also to the wider economic and cultural imaginaries of design as a social role. The paper illustrates the argument through the case of Turkopticon, originally an activist tool for workers in Amazon Mechanical Turk (AMT), built by the authors and maintained since 2009. The paper analyzes public depictions of Turkopticon which cast designers as creative innovators and AMT workers as without agency or capacity to change their situation. We argue that designers’ elevated status as workers in knowledge economies can have practical consequences for the politics of their design work. We explain the consequences of this status for Turkopticon and how we adapted our approach in response over the long term. We argue for analyses of power in design work that account for and develop counters to hegemonic beliefs and practices about design as high-status labor.
SESSION: Expressive HCI
Storeoboard: Sketching Stereoscopic Storyboards
We present Storeoboard, a system for stereo-cinematic conceptualization, via storyboard sketching directly in stereo. The resurgence of stereoscopic media has motivated filmmakers to evolve a new stereo-cinematic vocabulary, as many principles for stereo 3D film are unique. Concepts like plane separation, parallax position, and depth budgets are missing from early planning due to the 2D nature of existing storyboards. Storeoboard is the first of its kind, allowing filmmakers to explore, experiment and conceptualize ideas in stereo early in the film pipeline, develop new stereo-cinematic constructs and foresee potential difficulties. Storeoboard is the design outcome of interviews and field work with directors, stereographers, and storyboard artists. We present our design guidelines and implementation of a tool combining stereo-sketching, depth manipulations and storyboard features into a coherent and novel workflow. We report on feedback from storyboard artists, industry professionals and the director of a live action, feature film on which Storeoboard was deployed.
Motion Amplifiers: Sketching Dynamic Illustrations Using the Principles of 2D Animation
We present a sketching tool for crafting animated illustrations that contain the exaggerated dynamics of stylized 2D animations. The system provides a set of motion amplifiers which implement a set of established principles of 2D animation. These amplifiers break down a complex animation effect into independent, understandable chunks. Each amplifier imposes deformations to an underlying grid, which in turn updates the corresponding strokes. Users can combine these amplifiers at will when applying them to an existing animation, promoting rapid experimentation. By leveraging the freeform nature of sketching, our system allows users to rapidly sketch, record motion, explore exaggerated dynamics using the amplifiers, and fine-tune their animations. Practical results confirm that users with no prior experience in animation can produce expressive animated illustrations quickly and easily.
Object-Oriented Drawing
We present Object-Oriented Drawing, which replaces most WIMP UI with Attribute Objects. Attribute Objects embody the attributes of digital content as UI objects that can be manipulated through direct touch gestures. In the paper, the fundamental UI concepts are presented, including Attribute Objects, which may be moved, cloned, linked, and freely associated with drawing objects. Other functionalities, such as attribute-level blending and undo, are also demonstrated. We developed a drawing application based on the presented concepts with simultaneous touch and pen input. An expert assessment of our application shows that direct physical manipulation of Attribute Objects enables a user to quickly perform interactions which were previously tedious, or even impossible, with a coherent and consistent interaction experience throughout the entire interface.
SESSION: Search and Discovery
Pick me!: Getting Noticed on Google Play
Almost any search on Google Play returns numerous app suggestions. The user quickly skims through the list and picks a few apps for a closer look. The vast majority of apps, regardless of how well made they are, go unnoticed. App icons uniquely represent each app in Google Play and help apps get noticed, as we demonstrate in the paper. We reviewed the visual qualities of icons that could make them noticeable and likable. We then computationally measured two of these qualities, visual saliency and complexity, for 930 icons and linked the computed scores to app popularity (the number of app ratings and installs). The measures explained 38% of the variance in the number of ratings when app genre was accounted for. Not only does this result assert the link between icon properties and app popularity, it also highlights the automatic prediction of app popularity as a promising research direction. HCI researchers, app creators and Google Play (or another mobile marketplace) will benefit from the paper’s insights on what antecedes app success and how to measure these antecedents.
Diving in at the Deep End: The Value of Alternative In-Situ Approaches for Systematic Library Search
OPAC interfaces, still the dominant access point to library catalogs, support systematic search but are problematic for open-ended exploration and generally unpopular with visitors. As a result, libraries are starting to subscribe to simplified search paradigms as exemplified by web-search systems. This is a problem considering that systematic search is a crucial skill in the light of today’s abundance of digital information. Inspired by novel approaches to facilitating search, we designed CollectionDiver, an installation for supporting systematic search in public libraries. The CollectionDiver combines tangible and large display direct-touch interaction with a visual representation of search criteria and filters. We conducted an in-situ qualitative study to compare participants’ search approaches on the CollectionDiver with those on the OPAC interface. Our findings show that while both systems support a similar search process, the CollectionDiver (1) makes systematic search more accessible, (2) motivates proactive search approaches by (3) adding transparency to the search process, and (4) facilitates shared search experiences. We discuss the CollectionDiver’s design concepts to stimulate new ideas toward supporting engaging approaches to systematic search in the library context and beyond.
Empath: Understanding Topic Signals in Large-Scale Text
Human language is colored by a broad range of topics, but existing text analysis tools only focus on a small number of them. We present Empath, a tool that can generate and validate new lexical categories on demand from a small set of seed terms (like “bleed” and “punch” to generate the category violence). Empath draws connotations between words and phrases by deep learning a neural embedding across more than 1.8 billion words of modern fiction. Given a small set of seed words that characterize a category, Empath uses its neural embedding to discover new related terms, then validates the category with a crowd-powered filter. Empath also analyzes text across 200 built-in, pre-validated categories we have generated from common topics in our web dataset, like neglect, government, and social media. We show that Empath’s data-driven, human validated categories are highly correlated (r=0.906) with similar categories in LIWC.
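Empath's seed-expansion step can be sketched as a nearest-neighbour search around the seed words' centroid in an embedding space. The toy vectors below are illustrative stand-ins (not Empath's real fiction-trained embedding), and the crowd-validation step is only noted in a comment:

```python
import math

# Toy word vectors standing in for Empath's neural embedding trained on
# ~1.8 billion words of fiction (illustrative values, not real embeddings).
EMBEDDING = {
    "bleed":  [0.90, 0.10, 0.00],
    "punch":  [0.80, 0.20, 0.10],
    "stab":   [0.85, 0.15, 0.05],
    "wound":  [0.88, 0.12, 0.02],
    "vote":   [0.10, 0.90, 0.10],
    "senate": [0.05, 0.95, 0.20],
    "tweet":  [0.10, 0.20, 0.90],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def expand_category(seeds, k=2, threshold=0.95):
    """Average the seed vectors, then return the k nearest non-seed terms.
    In Empath, a crowd-powered filter then validates these candidates."""
    centroid = [sum(vals) / len(seeds)
                for vals in zip(*(EMBEDDING[s] for s in seeds))]
    candidates = [(cosine(EMBEDDING[w], centroid), w)
                  for w in EMBEDDING if w not in seeds]
    return [w for score, w in sorted(candidates, reverse=True)
            if score >= threshold][:k]

violence = expand_category(["bleed", "punch"])
print(violence)  # → ['stab', 'wound']
```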
Peek-a-View: Smartphone Cover Interaction for Multi-Tasking
Most smartphones support multi-tasking with several means to switch between apps (e.g., a “recent apps” button or a “back” button). However, switching between apps is cumbersome when one has to do it frequently, for example when notifications keep interrupting one’s current task. We introduce Peek-a-View, a fully transparent flipping screen cover that can reduce task switching overhead by providing an additional virtual screen space for subtasks. We assessed its feasibility in handling notifications. Upon receiving a notification, users can peek into the content of the notification without actually switching apps by slightly lifting the cover. If necessary, users can completely flip the cover to switch to the app that fired the notification. Two user studies showed that flipping and peeking interaction provided improved performance and proved to be useful for tasks that involve subtasks.
SESSION: Interaction with Small Displays
Faster Command Selection on Touchscreen Watches
Small touchscreens worn on the wrist are becoming increasingly common, but standard interaction techniques for these devices can be slow, requiring a series of coarse swipes and taps to perform an action. To support faster command selection on watches, we investigate two related interaction techniques that exploit spatial memory. WristTap uses multitouch to allow selection in a single action, and TwoTap uses a rapid combination of two sequential taps. In three quantitative studies, we investigate the design and performance of these techniques in comparison to standard methods. Results indicate that both techniques are feasible, able to accommodate large numbers of commands, and fast: users are able to quickly learn the techniques and reach performance of ~1.0 seconds per selection, approximately one-third of the time of standard commercial techniques. We also provide insights into the types of applications for which these techniques are well-suited, and discuss how the techniques could be extended.
Doppio: A Reconfigurable Dual-Face Smartwatch for Tangible Interaction
Doppio is a reconfigurable smartwatch with two touch-sensitive display faces. The orientation of the top relative to the base, and how the top is attached to the base, create a very large interaction space. We define and enumerate possible configurations, transitions, and manipulations in this space. Using a passive prototype, we conduct an exploratory study to probe how people might use this style of smartwatch interaction. With an instrumented prototype, we conduct a controlled experiment to evaluate the transition times between configurations and subjective preferences. We use the combined results of these two studies to generate a set of characteristics and design considerations for applying this interaction space to smartwatch applications. These considerations are illustrated with a proof-of-concept hardware prototype demonstrating how Doppio interactions can be used for notifications, private viewing, task switching, temporary information access, application launching, application modes, input, and sharing the top.
Supporting Transitions to Expertise in Hidden Toolbars
Hidden toolbars are becoming common on mobile devices. These techniques maximize the space available for application content by keeping tools off-screen until needed. However, current designs require several actions to make a selection, and they do not provide shortcuts for users who have become familiar with the toolbar. To better understand the performance capabilities and tradeoffs involved in hidden toolbars, we outline a design space that captures the key elements of these controls, and report on an empirical evaluation of four designs. Two of our designs provide shortcuts that are based on the user’s spatial memory of item locations. The study found that toolbars with spatial-memory shortcuts had significantly better performance (700ms faster) than standard designs currently in use. Participants quickly learned the shortcut selection method (although switching to a memory-based method led to higher error rates than the visually-guided techniques). Participants strongly preferred one of the shortcut methods that allowed selections by swiping across the screen bezel at the location of the desired item. This work shows that shortcut techniques are feasible and desirable on touch devices, and shows that spatial memory can provide a foundation for designing shortcuts.
Investigating Effects of Post-Selection Feedback for Acquiring Ultra-Small Targets on Touchscreen
In this paper, we investigate the effects of post-selection feedback for acquiring ultra-small (2-4mm) targets on touchscreens. Post-selection feedback shows the contact point on the touchscreen after the user lifts his/her finger, to increase users’ awareness of where the touch landed. Three experiments were conducted progressively, using a single crosshair target, two reciprocally acquired targets, and 2D random targets. Results show that, on average, post-selection feedback can reduce touch error rates by 78.4%, at a cost of no more than 10% in target acquisition time. In addition, we investigate participants’ adjustment behavior based on the correlation between successive trials. We conclude that the benefit of post-selection feedback is the outcome of both an improved understanding of the finger/point mapping and the dynamic adjustment of finger movement enabled by the visualization of the touch point.
SESSION: How can Smartphones Fit Our Lives?
A Systematic Assessment of Smartphone Usage Gaps
Researchers who analyse smartphone usage logs often make the assumption that users who lock and unlock their phone for brief periods of time (e.g., less than a minute) are continuing the same “session” of interaction. However, this assumption is not empirically validated, and in fact different studies apply different arbitrary thresholds in their analysis. To validate this assumption, we conducted a field study where we collected user-labelled activity data through ESM and sensor logging. Our results indicate that for the majority of instances where users return to their smartphone, i.e., unlock their device, they in fact begin a new session as opposed to continuing a previous one. Our findings suggest that the commonly used approach of ignoring brief standby periods is not reliable, but optimisation is possible. We therefore propose various metrics related to usage sessions and evaluate various machine learning approaches to classify gaps in usage.
Journeys & Notes: Designing Social Computing for Non-Places
In this work we present a mobile application we designed and engineered to enable people to log their travels near and far, leave notes behind, and build a community around spaces in between destinations. Our design explores new ground for location-based social computing systems, identifying opportunities where these systems can foster the growth of on-line communities rooted at non-places. In our work we develop, explore, and evaluate several innovative features designed around four usage scenarios: daily commuting, long-distance traveling, quantified traveling, and journaling. We present the results of two small-scale user studies, and one large-scale, world-wide deployment, synthesizing the results as potential opportunities and lessons learned in designing social computing for non-places.
PowerShake: Power Transfer Interactions for Mobile Devices
Current devices have limited battery life, typically lasting less than one day. This can lead to situations where critical tasks, such as making an emergency phone call, are not possible. Other devices, supporting different functionality, may have sufficient battery life to enable this task. We present PowerShake, an exploration of power as a shareable commodity between mobile (and wearable) devices. PowerShake enables users to control the balance of power levels in their own devices (intra-personal transactions) and to trade power with others (inter-personal transactions) according to their ongoing usage requirements. This paper demonstrates Wireless Power Transfer (WPT) between mobile devices. PowerShake is simple to perform on the go; supports ongoing/continuous tasks (transferring at ~3.1W); fits in a small form factor; and is compliant with electromagnetic safety guidelines while providing charging efficiency similar to other standards (48.2% vs. 51.2% in Qi). Based on our proposed technical implementation, we ran a series of workshops to derive candidate designs for PowerShake-enabled devices and interactions, and to bring to light the social implications of power as a tradable asset.
MyTime: Designing and Evaluating an Intervention for Smartphone Non-Use
Though many people report an interest in self-limiting certain aspects of their phone use, challenges adhering to self-defined limits are common. We conducted a design exercise and online survey to map the design space of interventions for smartphone non-use and distilled these into a small taxonomy of intervention categories. Using these findings, we implemented “MyTime,” an intervention to support people in achieving goals related to smartphone non-use. We conducted a deployment study with 23 participants over two weeks and found that participants reduced their time with the apps they feel are a poor use of time by 21% while their use of the apps they feel are a good use of time remained unchanged. We found that a small taxonomy describes users’ diverse set of desired behavior changes relating to smartphone non-use, and that these desired changes predict: 1) the hypothetical features they are interested in trying, 2) the extent to which they engage with these features in practice, and 3) their changes in behavior in response to the intervention. We link users’ desired behaviors to the categories of our design taxonomy, providing a foundation for a theoretical model of designing for smartphone non-use.
SESSION: Video Sharing
Motives and Concerns of Dashcam Video Sharing
Dashcams support continuous recording of external views that provide evidence in case of unexpected traffic-related accidents and incidents. Recently, sharing of dashcam videos has gained significant traction for accident investigation and entertainment purposes. Furthermore, there is a growing awareness that dashcam video sharing will greatly extend urban surveillance. Our work aims to identify the major motives and concerns behind the sharing of dashcam videos for urban surveillance. We conducted two survey studies (n=108, n=373) in Korea. Our results show that reciprocal altruism/social justice and monetary reward were the major motives and that participants were strongly motivated by altruism and social justice. Our studies also identified major privacy concerns and found that groups with greater privacy concerns had weaker altruism and social-justice motives but a stronger monetary motive. Our main findings have significant implications for the design of a dashcam video-sharing service.
Meerkat and Periscope: I Stream, You Stream, Apps Stream for Live Streams
We conducted a mixed methods study of the use of the Meerkat and Periscope apps for live streaming video and audio broadcasts from a mobile device. We crowdsourced a task to describe the content, setting, and other characteristics of 767 live streams. We also interviewed 20 frequent streamers to explore their motivations and experiences. Together, the data provide a snapshot of early live streaming use practices. We found a diverse range of activities broadcast, which interviewees said were used to build their personal brand. They described live streaming as providing an authentic, unedited view into their lives. They liked how the interaction with viewers shaped the content of their stream. We found some evidence for multiple live streams from the same event, which represent an opportunity for multiple perspectives on events of shared public interest.
The Tyranny of the Everyday in Mobile Video Messaging
This paper reports on how asynchronous mobile video messaging presents users with a challenge to doing ‘being ordinary’. 53 participants from three countries were recruited to try Skype Qik at launch for two weeks. Some participants embraced Skype Qik as a gift economy, emphasizing a special relationship enacted through crafted self-presentation. However, gift exchange makes up only a small proportion of conversation. Many participants struggled with the self-presentation obligations of video when attempting more everyday conversation. Faced with the ‘tyranny of the everyday’, many participants reverted to other systems where content forms reflected more lightweight exchange. We argue that designing for fluid control of the obligations of turn exchange is key to mobile applications intended to support everyday messaging.
Impact of Video Summary Viewing on Episodic Memory Recall: Design Guidelines for Video Summarizations
Reviewing lifelogging data has been proposed as a useful tool to support human memory. However, the sheer volume of data (particularly images) that can be captured by modern lifelogging systems makes the selection and presentation of material for review a challenging task. We present the results of a five-week user study involving 16 participants and over 69,000 images that explores both individual requirements for video summaries and the differences in cognitive load, user experience, memory experience, and recall experience between review using video summarisations and non-summary review techniques. Our results can be used to inform the design of future lifelogging data summarisation systems for memory augmentation.
SESSION: Privacy and Security Interfaces
The Anatomy of Smartphone Unlocking: A Field Study of Android Lock Screens
To prevent unauthorized parties from accessing data stored on their smartphones, users have the option of enabling a “lock screen” that requires a secret code (e.g., PIN, drawing a pattern, or biometric) to gain access to their devices. We present a detailed analysis of the smartphone locking mechanisms currently available to billions of smartphone users worldwide. Through a month-long field study, we logged events from a panel of users with instrumented smartphones (N=134). We are able to show how existing lock screen mechanisms provide users with distinct tradeoffs between usability (unlocking speed vs. unlocking frequency) and security. We find that PIN users take longer to enter their codes, but commit fewer errors than pattern users, who unlock more frequently and are very prone to errors. Overall, PIN and pattern users spent the same amount of time unlocking their devices on average. Additionally, unlock performance seemed unaffected for users enabling the stealth mode for patterns. Based on our results, we identify areas where device locking mechanisms can be improved to result in fewer human errors — increasing usability — while also maintaining security.
On Multiple Password Interference of Touch Screen Patterns and Text Passwords
The memorability of multiple passwords is an important topic for user authentication systems. With the advent of Android unlock pattern mechanism, research studies started investigating its usability and security features. This paper presents a study of recalling multiple passwords between text passwords and touch screen unlock patterns, as well as exploring whether users have difficulty in remembering those patterns after a period of time. In our study, participants create unlock patterns for various account scenarios. Our results reveal that participants in the unlock pattern condition with three accounts can outperform those in the text password condition (i.e., achieve higher success rates), not only in a one-hour session (short-term), but also after two weeks (long-term). However, there was no statistically significant difference between participants in the text password and unlock pattern condition in the long-term, when dealing with six accounts.
Keep on Lockin’ in the Free World: A Multi-National Comparison of Smartphone Locking
We present the results of an online survey of smartphone unlocking (N=8,286) that we conducted in eight different countries. The goal was to investigate differences in attitudes towards smartphone unlocking between different national cultures. Our results show that there are indeed significant differences across a range of categories. For instance, participants in Japan considered the data on their smartphones to be much more sensitive than those in other countries, and respondents in Germany were 4.5 times more likely than others to say that protecting data on their smartphones was important. The results of this study shed light on how motivations to use various security mechanisms are likely to differ from country to country.
SESSION: Detecting User Emotion
AniSAM & AniAvatar: Animated Visualizations of Affective States
Tools that provide visual feedback about emotions to the user in the form of an avatar or an emoticon have become increasingly important. While a great deal of effort has already been put into the reliable and accurate automatic detection of emotions, very little is known about how this information about affective states should be displayed in a comprehensible way to the user. In the present study, three newly developed feedback tools were evaluated. The tools were developed on the basis of an existing non-verbal questionnaire to represent two dimensions of emotion (i.e., valence and arousal) based on the circumplex model of affect. A total of 826 participants were tested, using different vignettes that describe situations with specific affective content. Employing three newly developed affective feedback tools (AniSAM, AniAvatar and MergedSAM), the ratings obtained were compared to ratings using the original SAM instrument, a well-established questionnaire to measure affect. Results indicated that the animated feedback increased the accuracy of the arousal representation. Furthermore, valence feedback was more accurate when provided with an animated manikin-based tool rather than an avatar-based tool. This provides initial evidence of the usefulness of animated tools offering visual feedback on user emotion. All instruments need to undergo further development. AniSAM and AniAvatar can be downloaded for purposes of practical applications and further research.
Hot Under the Collar: Mapping Thermal Feedback to Dimensional Models of Emotion
There are inherent associations between temperature and emotion in language, cognition and subjective experience [22,42]. However, there exists no systematic mapping of thermal feedback to models of emotion that could be used by designers and users to convey a range of emotions in HCI. A common way of classifying emotions and quantifying emotional experience is through ratings along valence and arousal dimensions, originating from Russell’s circumplex model [32]. Therefore, the research in this paper mapped subjective ratings of a range of thermal stimuli to the circumplex model to understand the range of emotions that might be conveyed through thermal feedback. However, as the suitability of the model varies depending on the type of emotional stimuli [31], we also compared the goodness of fit of ratings between the circumplex and vector [8,31] models of emotion. The results showed that thermal feedback was interpreted as representing a limited range of emotions concentrated in just two quadrants or categories of the circumplex: high valence, low arousal and low valence, high arousal. Warm stimuli were perceived as more pleasant/positive than cool stimuli and altering either the rate or extent of temperature change affected both valence and arousal axes simultaneously. The results showed a significantly better fit to a vector model than to the circumplex.
UX Heatmaps: Mapping User Experience on Visual Interfaces
In this paper, we present an off-the-shelf UX evaluation tool which contextualizes users’ physiological and behavioral signals while interacting with a system. The proposed tool triangulates users’ gaze data with inferred users’ cognitive and emotional states to produce user experience (UX) heatmaps, which show where users were looking when they experienced specific cognitive and emotional states. Results show that for a given cognitive state (i.e., cognitive load), the proposed UX heatmap was able to effectively highlight the areas where users experienced different levels of cognitive load on an interface. The proposed tool enables the visual analysis of users’ various emotional and cognitive states for specific areas on a given interface, and also to compare users’ states across multiple interfaces, which should be useful for both UX researchers and practitioners.
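As a rough illustration of the triangulation the UX Heatmaps abstract describes, one could accumulate gaze fixations into a coarse screen grid while weighting each fixation by an inferred cognitive or emotional state score. The grid size, fixation data, and additive weighting below are hypothetical simplifications for illustration, not the tool’s actual method.

```python
import numpy as np

# Hypothetical sketch: a 4x3 screen grid, with gaze fixations recorded as
# (column, row, inferred cognitive-load score in [0, 1]). All values invented.
W, H = 4, 3
fixations = [
    (0, 0, 0.2),  # brief glance, low load
    (1, 0, 0.9),  # dwelling on a confusing element, high load
    (1, 0, 0.8),
    (3, 2, 0.1),
]

# Accumulate load-weighted fixations: cells are "hotter" where users looked
# while experiencing high cognitive load, not merely where they looked most.
heat = np.zeros((H, W))
for col, row, load in fixations:
    heat[row, col] += load

print(heat)  # cell (row 0, col 1) accumulates the most weighted attention
```

A plain gaze heatmap would weight every fixation equally; weighting by an inferred state is what lets the visualization separate “looked at a lot” from “looked at while struggling.”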
SESSION: Diverse Disabilities and Technological Support
Universal Design Ballot Interfaces on Voting Performance and Satisfaction of Voters with and without Vision Loss
Voting is a localized event across countries, states, and municipalities in which individuals of all abilities want to participate. To enable people with disabilities to participate, accessible voting is typically implemented by adding assistive technologies to electronic voting machines. To overcome the complexities and inequities in this practice, two interfaces, EZ Ballot, which uses a linear yes/no input system for all selections, and QUICK Ballot, which provides random-access voting through direct selection, were designed to provide one system for all voters. This paper reports efficacy testing of both interfaces. The study demonstrated that voters with a range of visual abilities were able to use both ballots independently. While non-sighted voters made fewer errors on the linear ballot (EZ Ballot), partially-sighted and sighted voters completed the random-access ballot (QUICK Ballot) in less time. In addition, a higher percentage of non-sighted participants preferred the linear ballot, and a higher percentage of sighted participants preferred the random-access ballot.
SayWAT: Augmenting Face-to-Face Conversations for Adults with Autism
During face-to-face conversations, adults with autism frequently use atypical rhythms and sounds in their speech (prosody), which can result in misunderstandings and miscommunication. SayWAT is a Wearable Assistive Technology that provides feedback to wearers about their prosody during face-to-face conversations. In this paper, we describe the design process that led to five design guidelines that governed the development of SayWAT and present results from two studies involving our prototype solution. Our results indicate that wearable assistive technologies can automatically detect atypical prosody and deliver feedback in real time without disrupting the wearer or the conversation partner. Additionally, we provide suggestions for wearable assistive technologies for social support.
The AT Effect: How Disability Affects the Perceived Social Acceptability of Head-Mounted Display Use
Wearable computing devices offer new possibilities to increase accessibility and independence for individuals with disabilities. However, the adoption of such devices may be influenced by social factors, and useful devices may not be adopted if they are considered inappropriate to use. While public policy may adapt to support accommodations for assistive technology, emerging technologies may be unfamiliar or unaccepted by bystanders. We surveyed 1200 individuals about the use of a head-mounted display in a public setting, examining how information about the user’s disability affected judgments of the social acceptability of the scenario. Our findings reveal that observers considered head-mounted display use more socially acceptable if the device was being used to support a person with a disability.
Tickers and Talker: An Accessible Labeling Toolkit for 3D Printed Models
Three-dimensional models are important learning resources for blind people. With advances in 3D printing, 3D models are becoming more available. However, unlike visual or tactile graphics, there is no standard accessible way to label components in 3D models. We present a labeling toolkit that enables users to add audio labels to 3D printed models and access them. The toolkit includes Tickers, small 3D printed percussion instruments added to 3D models, and Talker, a signal processing application that detects and classifies Ticker sounds. To use the toolkit, a model designer adds Tickers to a model using 3D modeling software. A user then prints the model with Tickers and records audio labels for each Ticker. Finally, users can strum the Tickers and Talker will play the corresponding labels. We evaluated Tickers and Talker with three models in a study with nine blind participants. Our toolkit achieved an accuracy of 93% across all participants and models. We discuss design implications and future work for accessible 3D printed models.
SESSION: Robot Personalities
The Effect of Displaying System Confidence Information on the Usage of Autonomous Systems for Non-specialist Applications: A Lab Study
Autonomous systems are designed to take actions on behalf of users, acting autonomously upon data from sensors or online sources. As such, the design of interaction mechanisms that enable users to understand the operation of autonomous systems and flexibly delegate or regain control is an open challenge for HCI. Against this background, in this paper we report on a lab study designed to investigate whether displaying the confidence of an autonomous system about the quality of its work, which we call its confidence information, can improve user acceptance and interaction with autonomous systems. The results demonstrate that confidence information encourages the usage of the autonomous system we tested, compared to a situation where such information is not available. Furthermore, an additional contribution of our work is the method we employ to study users’ incentives to do work in collaboration with the autonomous system. In experiments comparing different incentive strategies, our results indicate that our translation of behavioural economics research methods to HCI can support the study of interactions with autonomous systems in the lab.
Why That Nao?: How Humans Adapt to a Conventional Humanoid Robot in Taking Turns-at-Talk
This paper explores how humans adapt to a conventional humanoid robot. Video data of participants playing a charade game with a Nao robot were analyzed from a multimodal conversation analysis perspective. Participants soon adjust aspects of turn-design such as word selection, turn length and prosody, thereby adapting to the robot’s limited perceptive abilities as they become apparent in the interaction. However, coordination of turns-at-talk remains troublesome throughout the encounter, as evidenced by overlapping turns and lengthy silences around possible turn endings. The study discusses how the robot design can be improved to support the problematic taking of turns-at-talk with humans. Two programming strategies to address the identified problems are presented: 1. to program the robot so that it will be systematically receptive at the equivalence to transition relevance places in human-human interaction, and 2. to make the robot preferably produce verbal actions that require a response in a conditional way, rather than making a response only possible.
ID-Match: A Hybrid Computer Vision and RFID System for Recognizing Individuals in Groups
Technologies that allow autonomous robots and computer systems to quickly recognize and interact with individuals in a group setting have the potential to enable a wide range of personalized experiences. However, existing solutions fail to both identify and locate individuals with enough speed to enable seamless interactions in very dynamic environments that require fast, implicit, non-intrusive, and ubiquitous recognition of users. In this work, we present a hybrid computer vision and RFID system that uses a novel reverse synthetic aperture technique to recover the relative motion paths of RFID tags worn by people and correlate them with the physical motion paths of individuals as measured with a 3D depth camera. Results show that our real-time system is capable of simultaneously recognizing and correctly assigning IDs to individuals within 4 seconds with 96.6% accuracy, and to groups of five people in 7 seconds with 95% accuracy. To test the effectiveness of this approach in realistic scenarios, groups of five participants played an interactive quiz game with an autonomous robot, resulting in an ID assignment accuracy of 93.3%.
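The core matching step the ID-Match abstract describes, correlating a tag’s recovered motion signal with candidate motion paths from a depth camera, can be sketched roughly as follows. The toy signals and the `assign_tag` helper are illustrative assumptions, not the authors’ implementation: the actual system recovers relative motion via a reverse synthetic aperture technique rather than comparing raw position traces.

```python
import numpy as np

# Hypothetical toy data: per-person x-position traces from a depth camera,
# and a relative-motion signal recovered from one person's RFID tag.
camera_paths = {
    "person_A": np.array([0.0, 0.2, 0.5, 0.9, 1.4]),  # walking forward
    "person_B": np.array([1.0, 0.9, 0.7, 0.4, 0.0]),  # walking the other way
}
tag_signal = np.array([0.1, 0.3, 0.6, 1.0, 1.5])  # moves like person_A

def assign_tag(tag, paths):
    """Assign the tag's ID to the person whose motion best correlates with it."""
    best_name, best_r = None, -2.0
    for name, path in paths.items():
        r = np.corrcoef(tag, path)[0, 1]  # Pearson correlation of the traces
        if r > best_r:
            best_name, best_r = name, r
    return best_name

print(assign_tag(tag_signal, camera_paths))
```

Here the tag’s signal correlates strongly with person_A’s path and negatively with person_B’s, so the ID attaches to person_A; the real system must do this robustly, in real time, for several people at once.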
Help Me Please: Robot Politeness Strategies for Soliciting Help From Humans
Robots that can leverage help from people could accomplish much more than robots that cannot. We present the results of two experiments that examine how robots can more effectively request help from people. Study 1 is a video prototype experiment (N=354), investigating the effectiveness of four linguistic politeness strategies as well as the effects of social status (equal, low), size of request (large, small), and robot familiarity (high, low) on people’s willingness to help a robot. The results of this study largely support Politeness Theory and the Computers as Social Actors paradigm. Study 2 is a physical human-robot interaction experiment (N=48), examining the impact of source orientation (autonomous, single operator, multiple operators) on people’s behavioral willingness to help the robot. People were nearly 50% faster to help the robot if they perceived it to be autonomous rather than being teleoperated. Implications for research design, theory, and methods are discussed.
SESSION: Problem-solving or not? The Boundaries of HCI Research
HCI Research as Problem-Solving
This essay contributes a meta-scientific account of human-computer interaction (HCI) research as problem-solving. We build on the philosophy of Larry Laudan, who develops problem and solution as the foundational concepts of science. We argue that most HCI research is about three main types of problem: empirical, conceptual, and constructive. We elaborate upon Laudan’s concept of problem-solving capacity as a universal criterion for determining the progress of solutions (outcomes): Instead of asking whether research is ‘valid’ or follows the ‘right’ approach, it urges us to ask how its solutions advance our capacity to solve important problems in human use of computers. This offers a rich, generative, and ‘discipline-free’ view of HCI and resolves some existing debates about what HCI is or should be. It may also help unify efforts across nominally disparate traditions in empirical research, theory, design, and engineering.
Anti-Solutionist Strategies: Seriously Silly Design Fiction
Much of the academic and commercial work which seeks to innovate around technology has been dismissed as “solutionist” because it solves problems that don’t exist or ignores the complexity of personal, political and environmental issues. This paper traces the “solutionism” critique to its origins in city planning and highlights the original concern with imaging and representation in the design process. It is increasingly cheap and easy to create compelling and seductive images of concept designs, which sell solutions and presume problems. We consider a range of strategies, which explicitly reject the search for “solutions”. These include design fiction and critical design but also less well-known techniques, which aim for unuseless, questionable and silly designs. We present two examples of “magic machine” workshops where participants are encouraged to reject realistic premises for possible technological interventions and create absurd propositions from lo-fi materials. We argue that such practices may help researchers resist the impulse towards solutionism and suggest that attention to representation during the ideation process is a key strategy for this.
Designing Speculative Civics
As human computer interaction design research continues to expand domains, civics is emerging as an important subject through which to explore how computation shapes our public lives. In this paper we present and reflect upon a series of research through design (RtD) projects that investigate speculative civic contexts. From this, we identify and discuss tactics that can be employed in RtD projects: RtD as Representations of Systems Yet-to-Come, RtD as Prototyping Systems and RtD as Use of a System. Then we identify and discuss thematic interpretations of civics that emerged through our designs: Mediated Civics, Computed Civics, and Proxied Civics. This work contributes to discourses of speculative design, research through design, and those of civics in human computer interaction design research.
Experimental Systems in Research through Design
Research through Design (RtD), a research approach that employs methods and approaches from design as a mode of inquiry, has gained momentum within HCI. However, the approach is not yet formalised, and there are ongoing debates about fundamental issues, such as how to articulate and evaluate knowledge that springs from RtD, and how this knowledge is comparable to knowledge from other forms of research. I propose that Rheinberger’s conceptualisation of experimental systems, originally developed in the domain of the natural sciences, offers insights that can add to the understanding of these issues, and in turn to the development of RtD as a research approach. I examine key characteristics of experimental systems as they pertain to RtD, with a focus on the role of designs and forms of knowledge representation. I furthermore propose that the experimental systems perspective can shed light on similarities and differences between RtD and other research approaches.
Social Inequality and HCI: The View from Political Economy
Massive changes in the economy and computing technology in recent years call for a close examination of their relationship. Changes include a broad range of topics and issues, some of which directly and crucially fall within the purview of HCI research and practice. We propose a perspective that engages issues of political economy, with a focus on social inequality. We introduce some of the history of concepts of this perspective, and discuss implications for HCI. We observe that practical and conceptual resources within HCI for considering political economy and inequality are emerging.
SESSION: Visualization Methods and Evaluation
Egocentric Analysis of Dynamic Networks with EgoLines
The egocentric analysis of dynamic networks focuses on discovering the temporal patterns of a subnetwork around a specific central actor (i.e., an ego-network). These types of analyses are useful in many application domains, such as social science and business intelligence, providing insights about how the central actor interacts with the outside world. We present EgoLines, an interactive visualization to support the egocentric analysis of dynamic networks. Using a “subway map” metaphor, a user can trace an individual actor over the evolution of the ego-network. The design of EgoLines is grounded in a set of key analytical questions pertinent to egocentric analysis, derived from our interviews with three domain experts and general network analysis tasks. We demonstrate the effectiveness of EgoLines in egocentric analysis tasks through a controlled experiment with 18 participants and a use-case developed with a domain expert.
ResViz: Politics and Design Issues in Visualizing Academic Metrics
The use of data and metrics on a professional and personal level has led to considerable discourse around the performative power and politics of ‘big data’ and data visualization, with academia being no exception. We have developed a university system, ResViz, which publicly visualizes the externally funded research projects of academics, and their internal collaborations. We present an interview study that engages 20 key stakeholders, academics and administrators who are part of the pilot release for the first version of this system. In doing so, we describe and problematize our design space, considering the implications of making metrics visible and their social use within a large organization. Our findings cut across the way people communicate, review and manage performance with metrics. We raise seven design issues in this space — practical considerations that expose the tensions in making metrics available for public contestation.
Evaluating Information Visualization via the Interplay of Heuristic Evaluation and Question-Based Scoring
In an instructional setting it can be difficult to accurately assess the quality of information visualizations of several variables. Instead of a standard design critique, an alternative is to ask potential readers of the chart to answer questions about it. A controlled study with 47 participants shows a good correlation between aggregated novice heuristic evaluation scores and results of answering questions about the data, suggesting that the two forms of assessment can be complementary. Using both metrics in parallel can yield further benefits; discrepancies between them may reveal incorrect application of heuristics or other issues.
A Comparison of Cooperative and Competitive Visualizations for Co-located Collaboration
We present a study that investigates the influence of different types of visualizations on collaboration. The visualizations present the group’s performance either in a more cooperative or more competitive way. Decades of research suggest that cooperation leads to greater productivity than competition. However, most of the existing group mirror visualizations achieve an increase in productivity and better self-regulation by enabling a direct comparison of performance within the group. We conducted a repeated measures study with 12 groups that were supported by visualizations that displayed the number of ideas of a brainstorming session (1) per person (competitive condition), (2) per group (cooperative condition), (3) per person and per group (mixed condition), and (4) without visualization (baseline). Results indicate that groups that see a combination of individual and group performance (mixed condition) are more productive, more satisfied with their results and participate in a more balanced way.
The Effect of Richer Visualizations on Code Comprehension
Researchers often introduce visual tools to programming environments in order to facilitate program comprehension, reduce navigation times, and help developers answer difficult questions. Syntax highlighting is the main visual lens through which developers perceive their code, and yet its effects and the effects of richer code presentations on code comprehension have not been evaluated systematically. We present a rigorous user study comparing mainstream syntax highlighting to two visually-enhanced presentations of code. Our results show that: (1) richer code visualizations reduce the time necessary to answer questions about code features, and (2) contrary to the subjective perception of developers, richer code visualizations do not lead to visual overload. Based on our results we outline practical recommendations for tool designers.
SESSION: Transportation and HCI
Peer-to-peer in the Workplace: A View from the Road
This paper contributes to the growing literature on peer-to-peer (P2P) applications through an ethnographic study of auto-rickshaw drivers in Bengaluru, India. We describe how the adoption of a P2P application, Ola, which connects passengers to rickshaws, changes drivers’ work practices. Ola is part of the ‘peer services’ phenomenon, which enables new types of ad-hoc trade in labour, skills and goods. Auto-rickshaw drivers present an interesting case because prior to Ola few had used smartphones or the Internet. Furthermore, as financially vulnerable workers in the informal sector, concerns about driver welfare become prominent. Whilst technologies may promise to improve livelihoods, they do not necessarily deliver [57]. We describe how Ola does little to change the uncertainty that characterizes an auto driver’s day. This leads us to consider how a more equitable and inclusive system might be designed.
A Design Space to Support the Development of Windshield Applications for the Car
In this paper we present a design space for interactive windshield displays in vehicles and discuss how this design space can support designers in creating windshield applications for drivers, passengers, and pedestrians. Our work is motivated by numerous examples in other HCI-related areas where seminal design space papers served as a valuable basis to evolve the respective field — most notably mobile devices, automotive user interfaces, and interactive public displays. The presented design space is based on a comprehensive literature review. Furthermore, we present a classification of 211 windshield applications, derived from a survey of research projects and commercial products as well as from focus groups. We showcase the utility of our work for designers of windshield applications through two scenarios. Overall, our design space can help build applications for diverse use cases. This includes apps inside and outside the car as well as applications for specific areas (firefighters, police, ambulance).
When (ish) is My Bus?: User-centered Visualizations of Uncertainty in Everyday, Mobile Predictive Systems
Users often rely on realtime predictions in everyday contexts like riding the bus, but may not grasp that such predictions are subject to uncertainty. Existing uncertainty visualizations may not align with user needs or how they naturally reason about probability. We present a novel mobile interface design and visualization of uncertainty for transit predictions on mobile phones based on discrete outcomes. To develop it, we identified domain specific design requirements for visualizing uncertainty in transit prediction through: 1) a literature review, 2) a large survey of users of a popular realtime transit application, and 3) an iterative design process. We present several candidate visualizations of uncertainty for realtime transit predictions in a mobile context, and we propose a novel discrete representation of continuous outcomes designed for small screens, quantile dotplots. In a controlled experiment we find that quantile dotplots reduce the variance of probabilistic estimates by ~1.15 times compared to density plots and facilitate more confident estimation by end-users in the context of realtime transit prediction scenarios.
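As an aside for implementers, the discrete representation this abstract describes can be sketched in a few lines: a quantile dotplot replaces a continuous predicted distribution with a small number of equally likely dots, so a reader can estimate a probability by counting dots. The arrival-time distribution and dot count below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def quantile_dots(samples, n_dots=20):
    """Reduce a sampled distribution to n_dots equally likely outcomes.

    Each dot sits at the quantile for the midpoint of one of n_dots
    equal-probability bins, so every dot represents a 1/n_dots chance.
    """
    probs = (np.arange(n_dots) + 0.5) / n_dots
    return np.quantile(samples, probs)

# Hypothetical predicted bus-arrival-time distribution (minutes from now).
rng = np.random.default_rng(0)
arrival_min = rng.lognormal(mean=np.log(8.0), sigma=0.3, size=10_000)
dots = quantile_dots(arrival_min)

# Counting dots at or before a deadline approximates the probability of
# arriving by then, which is the estimation task studied in the paper.
p_by_8 = np.mean(dots <= 8.0)
```

Because each dot carries equal probability mass, the count-based estimate is exactly the kind of frequency-framed judgment the paper argues suits small screens and lay reasoning about uncertainty.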
Error Recovery in Multitasking While Driving
Human-technology interactions involving errors undermine acceptance and performance. The effect of errors and the ability to recover from them represent a particularly important consideration for design in safety-critical multitasking situations. However, few studies have considered the recovery process of errors in multitasking situations, such as their contribution to driver distraction. This paper investigates errors that drivers make interacting with an infotainment system. In this study, participants (N = 46) drove a simulated vehicle and performed word entry tasks on a touch screen. Errors undermined driving and task performance. We also identified four different error recovery strategies and found that the accumulated information related to the driving situation and the characteristics of an infotainment system affected the choice of strategy. Implications for in-vehicle interface design, driver models, and general multitasking design are discussed.
SESSION: Interaction Techniques for Mobile Interfaces
Personalized Compass: A Compact Visualization for Direction and Location
Maps on mobile/wearable devices often make it difficult to determine the location of a point of interest (POI). For example, a POI may exist outside the map or on a background with no meaningful cues. To address this issue, we present Personalized Compass, a self-contained compact graphical location indicator. Personalized Compass uses personal a priori POIs to establish a reference frame, within which a POI in question can then be localized. Graphically, a personalized compass combines a multi-needle compass with an abstract overview map. We analyze the characteristics of Personalized Compass and the existing Wedge technique, and report on a user study comparing them. Personalized Compass performs better for four inference tasks, while Wedge is better for a locating task. Based on our analysis and study results, we suggest the two techniques are complementary and offer design recommendations.
SymmetriSense: Enabling Near-Surface Interactivity on Glossy Surfaces using a Single Commodity Smartphone
Driven to create intuitive computing interfaces throughout our everyday space, various state-of-the-art technologies have been proposed for near-surface localization of a user’s finger input such as hover or touch. However, these works require specialized hardware not commonly available, limiting the adoption of such technologies. We present SymmetriSense, a technology enabling near-surface 3-dimensional fingertip localization above arbitrary glossy surfaces using a single commodity camera device such as a smartphone. SymmetriSense addresses the localization challenges in using a single regular camera by a novel technique utilizing the principle of reflection symmetry and the fingertip’s natural reflection cast upon surfaces like mirrors, granite countertops, or televisions. SymmetriSense achieves typical accuracies at sub-centimeter levels in our localization tests with dozens of volunteers and remains accurate under various environmental conditions. We hope SymmetriSense provides a technical foundation on which various everyday near-surface interactivity can be designed.
FlexCase: Enhancing Mobile Interaction with a Flexible Sensing and Display Cover
FlexCase is a novel flip cover for smartphones, which brings flexible input and output capabilities to existing mobile phones. It combines an e-paper display with a pressure- and bend-sensitive input sensor to augment the capabilities of a phone. Due to the form factor, FlexCase can be easily transformed into several different configurations, each with different interaction possibilities. Users can use FlexCase to perform a variety of touch, pressure, grip and bend gestures in a natural manner, much like interacting with a sheet of paper. The secondary e-paper display can act as a mechanism for providing user feedback and persisting content from the main display. In this paper, we explore the rich design space of FlexCase and present a number of different interaction techniques. In addition, we highlight how touch and flex sensing can be combined to support a novel type of gestures, which we call Grip & Bend gestures. We also describe the underlying technology and gesture sensing algorithms. Numerous applications apply the interaction techniques in convincing real-world examples, including enhanced e-paper reading and interaction, a new copy and paste metaphor, high degree of freedom 3D and 2D manipulation, and the ability to transfer content and support input between displays in a natural and flexible manner.
Evaluation of a Smart-Restorable Backspace Technique to Facilitate Text Entry Error Correction
We present a new smart-restorable backspace technique to facilitate correction of “overlooked” errors on touchscreen-based tablets. We conducted an empirical study to compare the new backspace technique with the conventional one. Results of the study revealed that the new technique improves the overall text entry performance, both in terms of speed and operations per character, by significantly reducing error correction efforts. In addition, results showed that most users preferred the new technique to the one they use on their tablets, and found it easy to learn and use. Most of them also felt that it improved their overall text entry performance, and thus wanted to keep using it.
TapBoard 2: Simple and Effective Touchpad-like Interaction on a Multi-Touch Surface Keyboard
We introduce TapBoard 2, a touchpad-based keyboard that solves the problem of typing and pointing disambiguation. The pointing interaction design of TapBoard 2 is nearly identical to natural touchpad interaction, and its shared workspace naturally invites bimanual pointing interaction. To implement TapBoard 2, we developed a novel gesture representation scheme for a systematic design and gesture recognizer. A user evaluation showed that TapBoard 2 successfully supports collocated pointing and typing interaction. It was able to disambiguate typing and pointing actions with an accuracy of greater than 95%. In addition, the typing and pointing performance of TapBoard 2 was comparable to that of a separate keyboard and mouse. In particular, the bimanual pointing operations of TapBoard 2 are highly efficient and strongly favored by participants.
SESSION: Eye Gaze
Building a Personalized, Auto-Calibrating Eye Tracker from User Interactions
We present PACE, a Personalized, Automatically Calibrating Eye-tracking system that identifies and collects data unobtrusively from user interaction events on standard computing systems without the need for specialized equipment. PACE relies on eye/facial analysis of webcam data based on a set of robust geometric gaze features and a two-layer data validation mechanism to identify good training samples from daily interaction data. The design of the system is founded on an in-depth investigation of the relationship between gaze patterns and interaction cues, and takes into consideration user preferences and habits. The result is an adaptive, data-driven approach that continuously recalibrates, adapts and improves with additional use. Quantitative evaluation on 31 subjects across different interaction behaviors shows that training instances identified by the PACE data collection have higher gaze point-interaction cue consistency than those identified by conventional approaches. An in-situ study using real-life tasks on a diverse set of interactive applications demonstrates that the PACE gaze estimation achieves an average error of 2.56°, which is comparable to state-of-the-art, but without the need for explicit training or calibration. This demonstrates the effectiveness of both the gaze estimation method and the corresponding data collection mechanism.
Can Eye Help You?: Effects of Visualizing Eye Fixations on Remote Collaboration Scenarios for Physical Tasks
In this work, we investigate how remote collaboration between a local worker and a remote collaborator will change if eye fixations of the collaborator are presented to the worker. We track the collaborator’s points of gaze on a monitor screen displaying a physical workspace and visualize them onto the space by a projector or through an optical see-through head-mounted display. Through a series of user studies, we have found the following: 1) Eye fixations can serve as a fast and precise pointer to objects of the collaborator’s interest. 2) Eyes and other modalities, such as hand gestures and speech, are used differently for object identification and manipulation. 3) Eyes are used for explicit instructions only when they are combined with speech. 4) The worker can predict some intentions of the collaborator such as his/her current interest and next instruction.
Gaze-Contingent Manipulation of Color Perception
Using real time eye tracking, gaze-contingent displays can modify their content to represent depth (e.g., through additional depth cues) or to increase rendering performance (e.g., by omitting peripheral detail). However, there has been no research to date exploring how gaze-contingent displays can be leveraged for manipulating perceived color. To address this, we conducted two experiments (color matching and sorting) that manipulated peripheral background and object colors to influence the user’s color perception. Findings from our color matching experiment suggest that we can use gaze-contingent simultaneous contrast to affect color appearance and that existing color appearance models might not fully predict perceived colors with gaze-contingent presentation. Through our color sorting experiment we demonstrate how gaze-contingent adjustments can be used to enhance color discrimination. Gaze-contingent color holds the promise of expanding the perceived color gamut of existing display technology and enabling people to discriminate color with greater precision.
Spotlights: Attention-Optimized Highlights for Skim Reading
The paper contributes a novel technique that can improve user performance in skim reading. Users typically use a continuous-rate-based scrolling technique to skim works such as longer Web pages, e-books, and PDF files. However, visual attention is compromised at higher scrolling rates because of motion blur and extraneous objects with overly brief exposure times. In response, we present Spotlights. It complements the regular continuous technique at high speeds (2–20 pages/s). We present a novel design rule informed by theories of the human visual system for dynamically selecting objects and placing them on transparent overlays on top of the viewer. This improves the quality of visual processing at high scrolling rates by 1) limiting the number of objects, 2) ensuring minimal processing time per object, and 3) keeping objects static to avoid motion blur and facilitate gaze deployment. Spotlights was compared to continuous scrolling in two studies using long documents (200+ pages). Comprehension levels for long documents were comparable with those in continuous-rate-based scrolling, but Spotlights showed significantly better scrolling speed, gaze deployment, recall, lookup performance, and user-rated comprehension.
SESSION: Mental Models of Privacy
“If You Put All The Pieces Together…”: Attitudes Towards Data Combination and Sharing Across Services and Companies
Online services often rely on processing users’ data, which can be either provided directly by the users or combined from other services. Although users are aware of the latter, it is unclear whether they are comfortable with such data combination, whether they view it as beneficial for them, or the extent to which they believe that their privacy is exposed. Through an online survey (N=918) and follow-up interviews (N=14), we show that (1) comfort is highly dependent on the type of data, type of service and on the existence of a direct relationship with a company, (2) users have a highly different opinion about the presence of benefits for them, irrespective of the context, and (3) users perceive the combination of online data as more identifying than data related to offline and physical behavior (such as location). Finally, we discuss several strategies for companies to improve upon these issues.
Privacy Personas: Clustering Users via Attitudes and Behaviors toward Security Practices
A primary goal of research in usable security and privacy is to understand the differences and similarities between users. While past researchers have clustered users into different groups, past categories of users have proven to be poor predictors of end-user behaviors. In this paper, we perform an alternative clustering of users based on their behaviors. Through the analysis of data from surveys and interviews of participants, we identify five user clusters that emerge from end-user behaviors: Fundamentalists, Lazy Experts, Technicians, Amateurs, and the Marginally Concerned. We examine the stability of our clusters through a survey-based study of an alternative sample, showing that the clustering remains consistent. We conduct a small-scale design study to demonstrate the utility of our clusters in design. Finally, we argue that our clusters complement past work in understanding privacy choices, and that our categorization technique can aid in the design of new computer security technologies.
It’s Creepy, But it Doesn’t Bother Me
Undergraduates interviewed about privacy concerns related to online data collection made apparently contradictory statements. The same issue could evoke concern or not in the span of an interview, sometimes even a single sentence. Drawing on dual-process theories from psychology, we argue that some of the apparent contradictions can be resolved if privacy concern is divided into two components we call intuitive concern, a “gut feeling,” and considered concern, produced by a weighing of risks and benefits. Consistent with previous explanations of the so-called privacy paradox, we argue that people may express high considered concern when prompted, but in practice act on low intuitive concern without a considered assessment. We also suggest a new explanation: a considered assessment can override an intuitive assessment of high concern without eliminating it. Here, people may choose rationally to accept a privacy risk but still express intuitive concern when prompted.
Make it Simple, or Force Users to Read?: Paraphrased Design Improves Comprehension of End User License Agreements
Users often react negatively towards applications that track their personal information, even though they have consented to such tracking by hitting the “I Agree” button on the application’s end user license agreement (EULA). This is because most users do not read the EULA carefully. The language and presentation of EULAs are often dull, dense and inaccessible. Researchers have proposed design options for heightening comprehension of EULA content, but the effectiveness of these suggestions is unclear. To address this gap, we conducted an experiment that examined how users’ attitudes towards EULAs are affected by paraphrased and forced EULA formats. Paraphrased EULA presentations increased the time spent on reading the EULA. Moreover, they elicited more positive attitudes toward the EULA, which in turn predicted better comprehension. These findings hold implications for design of EULAs by showing that complex content displayed in simple terms across multiple windows can increase reader comprehension.
Behavior Ever Follows Intention?: A Validation of the Security Behavior Intentions Scale (SeBIS)
The Security Behavior Intentions Scale (SeBIS) measures the computer security attitudes of end-users. Because intentions are a prerequisite for planned behavior, the scale could therefore be useful for predicting users’ computer security behaviors. We performed three experiments to identify correlations between each of SeBIS’s four sub-scales and relevant computer security behaviors. We found that testing high on the awareness sub-scale correlated with correctly identifying a phishing website; testing high on the passwords sub-scale correlated with creating passwords that could not be quickly cracked; testing high on the updating sub-scale correlated with applying software updates; and testing high on the securement sub-scale correlated with smartphone lock screen usage (e.g., PINs). Our results indicate that SeBIS predicts certain computer security behaviors and that it is a reliable and valid tool that should be used in future research.
SESSION: Living in Smart Environments
It is too Hot: An In-Situ Study of Three Designs for Heating
Smart energy systems that leverage machine learning techniques are increasingly integrated in all aspects of our lives. To better understand how to design user interaction with such systems, we implemented three different smart thermostats that automate heating based on users’ heating preferences and real-time price variations. We evaluated our designs through a field study, where 30 UK households used our thermostats to heat their homes over a month. Our findings through thematic analysis show that the participants formed different understandings and expectations of our smart thermostat, and used it in various ways to effectively respond to real-time prices while maintaining their thermal comfort. Based on the findings, we present a number of design and research implications, specifically for designing future smart thermostats that will assist us in controlling home heating with real-time pricing, and for future intelligent autonomous systems.
Living In A Prototype: A Reconfigured Space
In this paper, we present a twenty-three-month autobiographical design project of converting a Mercedes Sprinter van into a camper van. This project allows us to investigate the complexities and nuances of a case where people engage in a process of making, transforming and adapting a space they live in. This example opens a radically different and productive context for revisiting concepts that are currently at the center of human-computer interaction (HCI) research: ubiquitous computing, home automation, smart homes, and the Internet of Things. We offer six qualities characterizing the evolving relationship between the makers and the lived-in environment: the van. We conclude with a discussion on the two themes of living in a reconfigured home and prototype qualities in a reconfigured space, and a critical reflection around the theme of the invariably unfinished home.
“Like Having a Really Bad PA”: The Gulf between User Expectation and Experience of Conversational Agents
The past four years have seen the rise of conversational agents (CAs) in everyday life. Apple, Microsoft, Amazon, Google and Facebook have all embedded proprietary CAs within their software and, increasingly, conversation is becoming a key mode of human-computer interaction. Whilst we have long been familiar with the notion of computers that speak, the investigative concern within HCI has been upon multimodality rather than dialogue alone, and there is no sense of how such interfaces are used in everyday life. This paper reports the findings of interviews with 14 users of CAs in an effort to understand the current interactional factors affecting everyday use. We find user expectations dramatically out of step with the operation of the systems, particularly in terms of known machine intelligence, system capability and goals. Using Norman’s ‘gulfs of execution and evaluation’ [30] we consider the implications of these findings for the design of future systems.
LivingDesktop: Augmenting Desktop Workstation with Actuated Devices
We investigate the potential benefits of actuated devices for the desktop workstation, which remains the most used environment for daily office work. A formative study reveals that the desktop workstation is not a fixed environment because users manually change the position and the orientation of their devices. Based on these findings, we present the LivingDesktop, an augmented desktop workstation with devices (mouse, keyboard, monitor) capable of moving autonomously. We describe interaction techniques and applications illustrating how actuated desktop workstations can improve ergonomics, foster collaboration, leverage context and reinforce physicality. Finally, the findings of a scenario evaluation are (1) the perceived usefulness of ergonomics and collaboration applications; (2) how the LivingDesktop inspired our participants to elaborate novel accessibility and social applications; (3) that location and user practices should be considered when designing actuated desktop devices.
SESSION: Design for Health Care
Technological Caregiving: Supporting Online Activity for Adults with Cognitive Impairments
With much of the population now online, the field of HCI faces new and pressing issues of how to help people sustain online activity throughout their lives, including through periods of disability. The onset of cognitive impairment later in life affects whether and how individuals are able to stay connected online and manage their digital information. While caregivers play a critical role in the offline lives of adults with cognitive impairments, less is known about how they support and enable online interaction. Using a constructivist grounded theory approach, data from focus groups with caregivers of adults with cognitive impairments reveal four forms of cooperative work caregivers perform in the context of supporting online activity. We find that staying active online is a way of empowering and engaging adults with cognitive impairments, yet this introduces new forms of risk, surrogacy, and cooperative technology use to the already demanding work of caregiving.
Closing the Gap: Supporting Patients’ Transition to Self-Management after Hospitalization
Patients going home after a hospitalization face many challenges. This transition period exposes patients to unnecessary risks related to inadequate preparation prior to leaving the hospital, potentially leading to errors and patient harm. Although patients engaging in self-management have better health outcomes and increased self-efficacy, little is known about the processes in place to support and develop these skills for patients leaving the hospital. Through qualitative interviews and observations of 28 patients during and after their hospitalizations, we explore the challenges they face transitioning from hospital care to self-management. We identify three key elements in this process: knowledge, resources, and self-efficacy. We describe how both system and individual factors contribute to breakdowns leading to ineffective patient management. This work expands our understanding of the unique challenges faced by patients during this difficult transition and uncovers important design opportunities for supporting crucial yet unmet patient needs.
Care Partnerships: Toward Technology to Support Teens’ Participation in Their Health Care
Adolescents with complex chronic illnesses, such as cancer and blood disorders, must partner with family and clinical caregivers to navigate risky procedures with life-altering implications, burdensome symptoms, and lifelong treatments. Yet there has been little investigation into how technology can support these partnerships. We conducted 38 in-depth interviews (15 with adolescents with chronic forms of cancer and blood disorders, 15 with their parents, and eight with clinical caregivers), along with nine non-participant observations of clinical consultations, to better understand common challenges and needs that could be supported through design. Participants faced challenges primarily concerning: 1) teens’ limited participation in their care, 2) communicating emotionally-sensitive information, and 3) managing physical and emotional responses. We draw on these findings to propose design goals for sociotechnical systems that support teens in partnering in their care, highlighting the need for design to support gradually-evolving partnerships.
SESSION: Representing User Experience
Data-driven Personas: Constructing Archetypal Users with Clickstreams and User Telemetry
User Experience (UX) research teams following a user-centered design approach harness personas to better understand a user’s workflow by examining that user’s behavior, goals, needs, wants, and frustrations. To create target personas, these researchers rely on workflow data from surveys, self-reports, interviews, and user observation. However, this data is only indirectly related to user behavior: it weakly reflects a user’s actual workflow in the product, is costly to collect, is limited to a few hundred responses, and becomes outdated as soon as a persona’s workflows evolve. To address these limitations we present a quantitative, bottom-up, data-driven approach to creating personas. First, we directly incorporate user behavior via clicks gathered automatically from telemetry data related to actual product use in the field; since the data collection is automatic, it is also cost effective. Next, we aggregate 3.5 million clicks from 2,400 users into 39,000 clickstreams and then structure them into 10 workflows via hierarchical clustering; we thus base our personas on a large data sample. Finally, we use mixed models, a statistical approach that incorporates these clustered workflows, to create five representative personas; updating our mixed model ensures that these personas remain current. We also validated these personas with our product’s user behavior experts to ensure that the workflows and persona goals represent actual product use.
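The clustering step described in the abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors’ code: the toy “clickstreams” are per-command frequency vectors, and a minimal average-linkage agglomerative (hierarchical) clustering groups them into workflows.

```python
import numpy as np

# Toy clickstreams: per-command frequency vectors over 3 UI commands,
# forming three obvious latent workflows (search-, edit-, export-heavy).
streams = np.array([
    [0.9, 0.1, 0.0], [1.0, 0.0, 0.1], [0.8, 0.2, 0.1],   # workflow A
    [0.1, 0.9, 0.0], [0.0, 1.0, 0.1], [0.2, 0.8, 0.0],   # workflow B
    [0.0, 0.1, 0.9], [0.1, 0.0, 1.0], [0.0, 0.2, 0.8],   # workflow C
])

# Minimal average-linkage agglomerative (hierarchical) clustering:
# start with singleton clusters, repeatedly merge the closest pair.
clusters = [[i] for i in range(len(streams))]
while len(clusters) > 3:
    best = None
    for a in range(len(clusters)):
        for b in range(a + 1, len(clusters)):
            # average pairwise distance between the two clusters
            d = np.mean([np.linalg.norm(streams[i] - streams[j])
                         for i in clusters[a] for j in clusters[b]])
            if best is None or d < best[0]:
                best = (d, a, b)
    _, a, b = best
    clusters[a] += clusters.pop(b)

sizes = sorted(len(c) for c in clusters)
print(sizes)  # [3, 3, 3]
```

At the paper’s scale (39,000 clickstreams) one would use an optimized library implementation rather than this O(n³) loop, but the merge logic is the same.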
Evaluating the Paper-to-Screen Translation of Participant-Aided Sociograms with High-Risk Participants
While much social network data exists online, key network metrics for high-risk populations must still be captured through self-report. This practice has suffered from numerous limitations in workflow and response burden. However, advances in technology, network drawing libraries and databases are making interactive network drawing increasingly feasible. We describe the translation of an analog-based technique for capturing personal networks into a digital framework termed netCanvas that addresses many existing shortcomings such as: 1) complex data entry; 2) extensive interviewer intervention and field setup; 3) difficulties in data reuse; and 4) a lack of dynamic visualizations. We test this implementation within a health behavior study of a high-risk and difficult-to-reach population. We provide a within-subjects comparison between paper and touchscreens. We assert that touchscreen-based social network capture is now a viable alternative for highly sensitive data and social network data entry tasks.
SESSION: Making Music on the Brain
Learn Piano with BACh: An Adaptive Learning Interface that Adjusts Task Difficulty Based on Brain State
We present Brain Automated Chorales (BACh), an adaptive brain-computer system that dynamically increases the levels of difficulty in a musical learning task based on pianists’ cognitive workload measured by functional near-infrared spectroscopy. As users’ cognitive workload fell below a certain threshold, suggesting that they had mastered the material and could handle more cognitive information, BACh automatically increased the difficulty of the learning task. We found that learners played with significantly increased accuracy and speed in the brain-based adaptive task compared to our control condition. Participant feedback indicated that they felt they learned better with BACh and they liked the timings of the level changes. The underlying premise of BACh can be applied to learning situations where a task can be broken down into increasing levels of difficulty.
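The adaptation loop BACh describes can be sketched as follows. All names, the threshold value, and the smoothing window are illustrative assumptions, not values from the paper: the idea is simply to advance difficulty once smoothed workload falls below a threshold, indicating spare cognitive capacity.

```python
# Sketch of a BACh-style brain-adaptive difficulty loop (hypothetical
# parameters): level up when smoothed cognitive workload drops below
# a threshold, suggesting the learner has mastered the current level.
from collections import deque

class AdaptiveTask:
    def __init__(self, levels, threshold=0.4, window=5):
        self.levels = levels          # e.g. chorale segments, easiest first
        self.level = 0
        self.threshold = threshold    # workload treated as "mastered"
        self.recent = deque(maxlen=window)

    def update(self, workload):
        """Feed one workload estimate (e.g. from fNIRS); maybe level up."""
        self.recent.append(workload)
        smoothed = sum(self.recent) / len(self.recent)
        if (len(self.recent) == self.recent.maxlen
                and smoothed < self.threshold
                and self.level < len(self.levels) - 1):
            self.level += 1
            self.recent.clear()       # re-baseline at the new difficulty
        return self.levels[self.level]

task = AdaptiveTask(["right hand only", "left hand only", "both hands"])
for w in [0.8, 0.7, 0.5, 0.4, 0.35, 0.3, 0.3, 0.3]:
    current = task.update(w)
print(current)  # "left hand only": workload fell, so one level-up occurred
```

The smoothing window guards against advancing on a single noisy low reading, and clearing it after a level change avoids an immediate second jump.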
#Scanners: Exploring the Control of Adaptive Films using Brain-Computer Interaction
This paper explores the design space of bio-responsive entertainment, in this case using a film that responds to the brain and blink data of users. A film was created with four parallel channels of footage, where blinking and levels of attention and meditation, as recorded by a commercially available EEG device, affected which footage participants saw. As a performance-led piece of research in the wild, this experience, named #Scanners, was presented at a week-long national exhibition in the UK. We examined the experiences of 35 viewers, and found that these forms of partially-involuntary control created engaging and enjoyable, but sometimes distracting, experiences. We translate our findings into a two-dimensional design space relating the extent of voluntary control that a physiological measure can provide to the level of conscious awareness that the user has of that control. This highlights that novel design opportunities exist when deviating from these two dimensions: when giving up conscious control and when abstracting the effect of control. Reflecting on how viewers negotiated this space during the experience reveals novel design tactics.
Inspect, Embody, Invent: A Design Framework for Music Learning and Beyond
This paper introduces a new framework to guide the design of interactive music learning systems, focusing on the piano. Taking a Reflective approach, we identify the implicit assumption behind most existing systems, namely that learning music is learning to play correctly according to the score, and offer an alternative approach. We argue that systems should help cultivate higher levels of musicianship beyond correctness alone for students of all levels. Drawing from both the pedagogical literature and the personal experience of learning to play the piano, we identify three skills central to musicianship: listening, embodied understanding, and creative imagination. We generalize these to the Inspect, Embody, Invent framework. To demonstrate how this framework translates to design, we discuss two existing interfaces from our own research, MirrorFugue and Andante, both built on a digitally controlled player piano augmented by in-situ projection. Finally, we discuss the framework’s relevance to bigger themes of embodied interaction and learning beyond the domain of music.
SESSION: Natural User Interfaces for InfoVis
TimeFork: Interactive Prediction of Time Series
We present TimeFork, an interactive prediction technique to support users in predicting the future of time series data, such as in financial, scientific, or medical domains. TimeFork combines visual representations of multiple time series with prediction information generated by computational models. Using this method, analysts engage in a back-and-forth dialogue with the computational model, alternating between manually predicting future changes through interaction and letting the model automatically determine the most likely outcomes, eventually arriving at a common prediction. This computer-supported prediction approach harnesses both the user’s knowledge of factors influencing future behavior and sophisticated computational models drawing on past performance. To validate the TimeFork technique, we conducted a user study in a stock market prediction game. We present evidence of improved performance for participants using TimeFork compared to fully manual or fully automatic predictions, and characterize qualitative usage patterns observed during the user study.
The Effect of Visual Appearance on the Performance of Continuous Sliders and Visual Analogue Scales
Sliders and Visual Analogue Scales (VASs) are input mechanisms which allow users to specify a value within a predefined range. At a minimum, sliders and VASs typically consist of a line with the extreme values labeled. Additional decorations such as labels and tick marks can be added to give information about the gradations along the scale and allow for more precise and repeatable selections. There is a rich history of research on the effect of labelling in discrete scales (i.e., Likert scales); however, the effect of decorations on continuous scales has not been rigorously explored. In this paper we report a 2,000-user, 250,000-trial online experiment studying the effects of slider appearance, and find that decorations along the slider considerably bias the distribution of responses received. Using two separate experimental tasks, we explore the trade-offs between bias, accuracy, and speed of use, and propose design recommendations for optimal slider implementations.
Making Sense of Temporal Queries with Interactive Visualization
As real-time monitoring and analysis become increasingly important, researchers and developers turn to data stream management systems (DSMSs) for fast, efficient ways to pose temporal queries over their datasets. However, these systems are inherently complex, and even database experts find it difficult to understand the behavior of DSMS queries. To help analysts better understand these temporal queries, we developed StreamTrace, an interactive visualization tool that breaks down how a temporal query processes a given dataset, step by step. The design of StreamTrace is based on input from expert DSMS users; we evaluated the system with a lab study of programmers who were new to streaming queries. Results from the study demonstrate that StreamTrace can help users verify that queries behave as expected and isolate the regions of a query that may be causing unexpected results.
Investigating Time Series Visualisations to Improve the User Experience
Research on graphical perception of time series visualisations has focused on visual representation, and not on interaction. Even for visual representation, there has been limited study of the impact on users of visual encodings and the strengths and weaknesses of Cartesian and Polar coordinate systems. In order to address this research gap, we performed a comprehensive graphical perception study that measured the effectiveness of time series visualisations with different interactions, visual encodings and coordinate systems for several tasks. Our results show that, while positional and colour visual encodings were better for most tasks, area visual encoding performed better for data comparison. Most importantly, we identified that introducing interactivity within time series visualisations considerably enhances the user experience, without any loss of efficiency or accuracy. We believe that our findings can greatly improve the development of visual analytics tools using time series visualisations in a variety of domains.
SESSION: Multi-Device Interaction
Smartwatch in vivo
In recent years, the smartwatch has returned as a form factor for mobile computing with some success. Yet it is not clear how smartwatches are used and integrated into everyday life differently from mobile phones. For this paper, we used wearable cameras to record twelve participants’ daily use of smartwatches, collecting and analysing incidents where watches were used from over 34 days of user recording. This allows us to analyse in detail 1009 watch uses. Using the watch as a timepiece was the most common use, making up 50% of interactions, but only 14% of total watch usage time. The videos also let us examine why and how smartwatches are used for activity tracking, notifications, and in combination with smartphones. In discussion, we return to a key question in the study of mobile devices: how are smartwatches integrated into everyday life, in both the actions that we take and the social interactions we are part of?
When Tablets meet Tabletops: The Effect of Tabletop Size on Around-the-Table Collaboration with Personal Tablets
Cross-device collaboration with tablets is an increasingly popular topic in HCI. Previous work has shown that tablet-only collaboration can be improved by an additional shared workspace on an interactive tabletop. However, large tabletops are costly and need space, raising the question to what extent the physical size of shared horizontal surfaces really pays off. In order to analyse the suitability of smaller-than-tabletop devices (e.g. tablets) as a low-cost alternative, we studied the effect of the size of a shared horizontal interactive workspace on users’ attention, awareness, and efficiency during cross-device collaboration. In our study, 15 groups of two users executed a sensemaking task with two personal tablets (9.7″) and a horizontal shared display of varying sizes (10.6″, 27″, and 55″). Our findings show that different sizes lead to differences in participants’ interaction with the tabletop and in the groups’ communication styles. To our own surprise we found that larger tabletops do not necessarily improve collaboration or sensemaking results, because they can divert users’ attention away from their collaborators and towards the shared display.
Enhancing Cross-Device Interaction Scripting with Interactive Illustrations
Cross-device interactions involve input and output on multiple computing devices. Implementing and reasoning about interactions that span multiple devices with a diversity of form factors and capabilities can be complex. To assist developers in programming cross-device interactions, we created DemoScript, a technique that automatically analyzes a cross-device interaction program while it is being written. DemoScript visually illustrates the step-by-step execution of a selected portion or the entire program with a novel, automatically generated cross-device storyboard visualization. In addition to helping developers understand the behavior of the program, DemoScript also allows developers to revise their program by interactively manipulating the cross-device storyboard. We evaluated DemoScript with 8 professional programmers and found that it significantly improved development efficiency by helping developers interpret and manage cross-device interaction; it also encouraged them to test and think through their scripts during development.
XDBrowser: User-Defined Cross-Device Web Page Designs
There is a significant gap in the body of research on cross-device interfaces. Research has largely focused on enabling them technically, but when and how users want to use cross-device interfaces is not well understood. This paper presents an exploratory user study with XDBrowser, a cross-device web browser we are developing to enable non-technical users to adapt existing single-device web interfaces for cross-device use while viewing them in the browser. We demonstrate that an end-user customization tool like XDBrowser is a powerful means to conduct user-driven elicitation studies useful for understanding user preferences and design requirements for cross-device interfaces. Our study with 15 participants elicited 144 desirable multi-device designs for five popular web interfaces when using two mobile devices in parallel. We describe the design space in this context, the usage scenarios targeted by users, the strategies used for designing cross-device interfaces, and seven concrete mobile multi-device design patterns that emerged. We discuss the method, compare the cross-device interfaces from our users and those defined by developers in prior work, and establish new requirements from observed user behavior. In particular, we identify the need to easily switch between different interface distributions depending on the task and to have more fine-grained control over synchronization.
SESSION: Social Media and Health
“With most of it being pictures now, I rarely use it”: Understanding Twitter’s Evolving Accessibility to Blind Users
Social media is an increasingly important part of modern life. We investigate the use and usability of Twitter by blind users, via a combination of surveys of blind Twitter users, large-scale analysis of tweets from and Twitter profiles of blind and sighted users, and analysis of tweets containing embedded imagery. While Twitter has traditionally been thought of as the most accessible social media platform for blind users, Twitter’s increasing integration of image content and users’ diverse uses for images have presented emergent accessibility challenges. Our findings illuminate the importance of the ability to use social media for people who are blind, while also highlighting the many challenges such media currently present to this user base, including difficulty in creating profiles, in awareness of available features and settings, in controlling revelations of one’s disability status, and in dealing with the increasing pervasiveness of image-based content. We propose changes that Twitter and other social platforms should make to promote fuller access for users with visual impairments.
Sleep Debt in Student Life: Online Attention Focus, Facebook, and Mood
The amount of sleep college students receive has become a pressing societal concern. While studies show that information technology (IT) use affects sleep, here we examine the converse: how sleep duration might affect IT use. We conducted an in situ study, logging computer and phone use and collecting sleep diaries and daily surveys from 76 college students for seven days, covering all waking hours. We examined the effects of sleep duration and sleep debt. Our results show that with less sleep, people report higher perceived work pressure and productivity. Computer focus duration is also significantly shorter, suggesting more multitasking. The more sleep debt, the more Facebook use and the more negative the mood. With less sleep, people may seek out activities requiring fewer attentional resources, such as social media use. Our results have theoretical implications for multitasking: physiological and cognitive factors could explain the increase in computer activity switching associated with less sleep.
“Tell It Like It Really Is”: A Case of Online Content Creation and Sharing Among Older Adult Bloggers
While the majority of older adults are now active online, they are often perceived as passive consumers of online information rather than active creators of content. As a counter to this view, we examine the practices of older adult bloggers (N=20) through in-depth interviews. We study this group of older adults as a unique case of content creation and sharing. We find that the practice of creating and sharing through blogging meets several important psychological and social needs for older adults. Specifically, blogging supports the development of identity in older adulthood; fosters self-expression that supports older adults’ values; provides meaningful engagement during retirement; and enables a sense of community and social interaction that is important for wellbeing in late-life. We argue for a focus on designing for late-life development and detail opportunities for online systems to better support the dynamic experience of growing older through online content creation and sharing.
Social Media Image Analysis for Public Health
Several projects have shown the feasibility of using textual social media data to track public health concerns, such as temporal influenza patterns or geographical obesity patterns. In this paper, we look at whether geo-tagged images from Instagram also provide a viable data source. Especially for “lifestyle” diseases, such as obesity, drinking or smoking, images of social gatherings could provide information that is not necessarily shared in, say, tweets. In this study, we explore whether (i) tags provided by the users and (ii) annotations obtained via automatic image tagging are indeed valuable for studying public health. We find that both user-provided and machine-generated tags provide information that can be used to infer a county’s health statistics. Whereas for most statistics user-provided tags are better features, for predicting excessive drinking machine-generated tags such as “liquid” and “glass” yield better models. This hints at the potential of using machine-generated tags to study substance abuse.
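The core modeling idea, predicting a county-level health statistic from tag frequencies, can be sketched with a least-squares fit on synthetic data. The tag list, counts, and weights here are invented for illustration and are not the paper’s features or results:

```python
# Hedged sketch: represent each county by its image-tag frequencies and
# fit a linear model predicting a health statistic (synthetic data).
import numpy as np

tags = ["liquid", "glass", "salad", "running"]   # hypothetical tag vocabulary
rng = np.random.default_rng(1)

# Synthetic counties: tag frequency vectors and a drinking rate that,
# by construction, loads most heavily on the first tag.
X = rng.random((40, len(tags)))
true_w = np.array([0.6, 0.3, -0.2, -0.1])
y = X @ true_w + 0.01 * rng.standard_normal(40)

# Ordinary least squares recovers which tags predict the statistic.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
ranked = [tags[i] for i in np.argsort(-w)]
print(ranked[0])  # most predictive tag: "liquid"
```

In practice one would regularize (e.g. ridge or lasso) given thousands of tags and far fewer counties; plain least squares suffices for this toy example.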
It Matters If My Friends Stop Smoking: Social Support for Behavior Change in Social Media
A growing body of research has examined whether and how an individual can leverage online social networks to receive social support for health behavior change. This prior research largely focuses on attributes of the post content and the experiences and concerns of people posting. Less is known about moderators and mediators that influence whether and how one’s social network will respond to a request for support. Using a factorial survey experiment, we find evidence that attitudes toward specific types of health behaviors greatly increase likelihood of response to a post, and that targeting close-tie relationships may increase effectiveness of social media based behavior change interventions, particularly related to smoking cessation.
SESSION: Engaging Players in Games
Designing Engaging Games Using Bayesian Optimization
We use Bayesian optimization methods to design games that maximize user engagement. Participants are paid to try a game for several minutes, at which point they can quit or continue to play voluntarily with no further compensation. Engagement is measured by player persistence, projections of how long others will play, and a post-game survey. Using Gaussian process surrogate-based optimization, we conduct efficient experiments to identify game design characteristics—specifically those influencing difficulty—that lead to maximal engagement. We study two games requiring trajectory planning, the difficulty of each determined by a three-dimensional continuous design space. Two of the design dimensions manipulate the game in a user-transparent manner (e.g., the spacing of obstacles); the third does so in a subtle and possibly covert manner (incremental trajectory corrections). Converging results indicate that overt difficulty manipulations are effective in modulating engagement only when combined with the covert manipulation, suggesting the critical role of a user’s self-perception of competence.
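Gaussian-process surrogate-based optimization of a design space can be sketched as follows. The one-dimensional "engagement" curve, kernel length-scale, and upper-confidence-bound acquisition are illustrative stand-ins, not the paper's actual setup:

```python
# Toy surrogate-based design optimization: fit a GP to observed engagement,
# then pick the next design point by upper confidence bound (UCB).
import numpy as np

def engagement(d):                       # hidden "true" engagement curve
    return np.exp(-(d - 0.6) ** 2 / 0.05)

def rbf(a, b, ls=0.15):                  # squared-exponential kernel
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ls ** 2))

rng = np.random.default_rng(2)
X = rng.random(3)                        # initial design points (difficulty)
y = engagement(X)
grid = np.linspace(0, 1, 101)            # candidate designs

for _ in range(15):
    K = rbf(X, X) + 1e-6 * np.eye(len(X))
    Ks = rbf(grid, X)
    mu = Ks @ np.linalg.solve(K, y)      # GP posterior mean on the grid
    var = 1.0 - np.einsum('ij,ij->i', Ks @ np.linalg.inv(K), Ks)
    ucb = mu + 2.0 * np.sqrt(np.maximum(var, 0))
    x_next = grid[np.argmax(ucb)]        # most promising design to test next
    X = np.append(X, x_next)
    y = np.append(y, engagement(x_next)) # "run the experiment" at x_next

best = X[np.argmax(y)]
print(round(float(best), 2))
```

The loop alternates exploration (high posterior variance) with exploitation (high posterior mean), so the sampled designs concentrate near the engagement peak after a handful of simulated experiments.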
Operationalising and Evaluating Sub-Optimal and Optimal Play Experiences through Challenge-Skill Manipulation
The study examines the relationship of challenge-skill balance and the player experience through evaluation of competence, autonomy, presence, interest/enjoyment, and positive and negative affect states. To manipulate challenge-skill balance, three video game modes — boredom (low challenge), balance (medium challenge), and overload (high challenge) — were developed and experimentally tested (n = 45). The study showed that self-reported positive affect, autonomy, presence, and interest/enjoyment differed between the levels. The balance condition generally performed well in terms of positive player experiences, confirming the key role challenge-skill balance plays in designing for optimal play experiences. Interestingly, the study found significantly lower negative affect scores when playing the boredom condition. Greater feelings of competence were also reported for the boredom condition than the balance and overload conditions. Finally, some measures point to overload as a more enjoyable experience than boredom, suggesting possible player preference for challenge > skill imbalance over skill > challenge imbalance. Implications for design and future research are presented.
How to Present Game Difficulty Choices?: Exploring the Impact on Player Experience
Matching game difficulty to player ability is a crucial step toward a rewarding player experience, yet making difficulty adjustments that are effective yet unobtrusive can be challenging. This paper examines the impact of automatic and player-initiated difficulty adjustment on player experience through two studies. In the first study, 40 participants played the casual game THYFTHYF either in motion-based or sedentary mode, using menu-based, embedded, or automatic difficulty adjustment. In the second study, we created an adapted version of the commercially available game fl0w to allow us to carry out a more focused study of sedentary casual play. Results from both studies demonstrate that the type of difficulty adjustment has an impact on perceived autonomy, but other player experience measures were not affected as expected. Our findings suggest that most players express a preference for manual difficulty choices, but that overall game experience was not notably impacted by automated difficulty adjustments.
Peak-End Effects on Player Experience in Casual Games
The peak-end rule is a psychological heuristic observing that people’s retrospective assessment of an experience is strongly influenced by the intensity of the peak and final moments of that experience. We examine how aspects of game player experience are influenced by peak-end manipulations to the sequence of events in games that are otherwise objectively identical. A first experiment examines players’ retrospective assessments of two games (a pattern matching game based on Bejeweled and a point-and-click reaction game) when the sequence of difficulty is manipulated to induce positive, negative and neutral peak-end effects. A second experiment examines assessments of a shootout game in which the balance between challenge and skill is similarly manipulated. Results across the games show that recollection of challenge was strongly influenced by peak-end effects; however, results for fun, enjoyment, and preference to repeat were varied — sometimes significantly in favour of the hypothesized effects, sometimes insignificant, but never against the hypothesis.
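The peak-end heuristic itself is simple enough to state as code. This is a deliberate simplification for illustration: retrospective ratings are modeled as the mean of the most intense moment and the final moment, ignoring duration and everything in between.

```python
# Peak-end score of an experience (illustrative simplification of the
# heuristic: average of the peak moment and the final moment).
def peak_end(intensities):
    return (max(intensities) + intensities[-1]) / 2

easy_finish = [3, 7, 5, 2]    # same moments, easy ending
hard_finish = [2, 5, 3, 7]    # same moments, hard ending
print(peak_end(easy_finish), peak_end(hard_finish))  # 4.5 7.0
```

The two sequences contain identical moments, yet the modeled retrospective challenge differs sharply, which is exactly the kind of sequence manipulation the experiments above exploit.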
SESSION: Food as Method and Inquiry
“My Doctor is Keeping an Eye on Me!”: Exploring the Clinical Applicability of a Mobile Food Logger
By enabling people to track their lifestyles, including activity level, sleep, and diet, technology helps clinicians treat patients suffering from “lifestyle diseases.” However, despite its importance relative to other lifestyle factors, food intake is not easy to record consistently. Although researchers have attempted to solve this problem, most have not considered applicability in the clinical context. In this paper, we aim to (1) understand food-journaling practices and (2) explore the applicability of lifestyle data in the clinical context. By observing 20 patients who recorded data including food logs, steps, and sleep time, we found that patients recorded their food logs diligently because they were conscious of their clinicians. Clinicians were surprised by the high adherence rate of journaling and tried to overlay food data with other data, such as steps and sleep time. This paper contributes qualitative insights for designing applicable strategies for utilizing lifestyle data in the clinical context.
Crumbs: Lightweight Daily Food Challenges to Promote Engagement and Mindfulness
Many people struggle with efforts to make healthy behavior changes, such as healthy eating. Several existing approaches promote healthy eating, but present high barriers and yield limited engagement. As a lightweight alternative approach to promoting mindful eating, we introduce and examine crumbs: daily food challenges completed by consuming one food that meets the challenge. We examine crumbs through developing and deploying the iPhone application Food4Thought. In a 3 week field study with 61 participants, crumbs supported engagement and mindfulness while offering opportunities to learn about food. Our 2×2 study compared nutrition versus non-nutrition crumbs coupled with social versus non-social features. Nutrition crumbs often felt more purposeful to participants, but non-nutrition crumbs increased mindfulness more than nutrition crumbs. Social features helped sustain engagement and were important for engagement with non-nutrition crumbs. Social features also enabled learning about the variety of foods other people use to meet a challenge.
Evaluation of a Food Portion Size Estimation Interface for a Varying Literacy Population
Portion size estimation is important for managing dietary intake in many chronic conditions. We conducted a 6-week field study with nine varying literacy dialysis patients to explore the usability and feasibility of a dietary intake mobile application that emphasizes portion size estimation. Seven participants demonstrated sustained use of the application and improved their self-efficacy, knowledge, and ability to estimate portion sizes in pre- and post-study assessments. Participants reported moments when portion size information in the application differed from their prior understanding, challenging them to reconcile dissonant information. Although participants acquired new knowledge about portion sizes, they struggled to accurately estimate portion sizes in situ for most foods. Despite using the application consistently, rating it highly, and exhibiting learning, we found that self-efficacy and knowledge are not sufficient to support improved behaviors in everyday life.
Examining Unlock Journaling with Diaries and Reminders for In Situ Self-Report in Health and Wellness
In situ self-report is widely used in human-computer interaction, ubiquitous computing, and for assessment and intervention in health and wellness. Unfortunately, it remains limited by high burdens. We examine unlock journaling as an alternative. Specifically, we build upon recent work to introduce single-slide unlock journaling gestures appropriate for health and wellness measures. We then present the first field study comparing unlock journaling with traditional diaries and notification-based reminders in self-report of health and wellness measures. We find unlock journaling is less intrusive than reminders, dramatically improves frequency of journaling, and can provide equal or better timeliness. Where appropriate to broader design needs, unlock journaling is thus an overall promising method for in situ self-report.
SESSION: Medical Device Sensing
Delineating the Operational Envelope of Mobile and Conventional EDA Sensing on Key Body Locations
Electrodermal activity (EDA) is an important affective indicator, measured conventionally on the fingers with desktop sensing instruments. Recently, a new generation of wearable, battery-powered EDA devices came into being, encouraging the migration of EDA sensing to other body locations. To investigate the implications of such sensor/location shifts in psychophysiological studies we performed a validation experiment. In this experiment we used startle stimuli to instantaneously arouse the sympathetic system of n=23 subjects while sitting. Startle stimuli are standard but minimal stressors, and thus ideal for determining the sensor and location resolution limit. The experiment revealed that precise measurement of small EDA responses on the fingers and palm is feasible either with conventional or mobile EDA sensors. By contrast, precise measurement of small EDA responses on the sole is challenging, while on the wrist even detection of such responses is problematic for both EDA modalities. Given that affective wristbands have emerged as the dominant form of EDA sensing, researchers should beware of these limitations.
SpiroCall: Measuring Lung Function over a Phone Call
Cost and accessibility have impeded the adoption of spirometers (devices that measure lung function) outside clinical settings, especially in low-resource environments. Prior work, called SpiroSmart, used a smartphone’s built-in microphone as a spirometer. However, individuals in low- or middle-income countries do not typically have access to the latest smartphones. In this paper, we investigate how spirometry can be performed from any phone, using the standard telephony voice channel to transmit the sound of the spirometry effort. We also investigate how using a 3D printed vortex whistle can affect the accuracy of common spirometry measures and mitigate usability challenges. Our system, coined SpiroCall, was evaluated with 50 participants against two gold standard medical spirometers. We conclude that SpiroCall has an acceptable mean error with or without a whistle for performing spirometry, and advantages of each are discussed.
Interacting with Predictions: Visual Inspection of Black-box Machine Learning Models
Understanding predictive models, in terms of interpreting and identifying actionable insights, is a challenging task. Often the importance of a feature in a model is only a rough estimate condensed into one number. However, our research goes beyond these naïve estimates through the design and implementation of an interactive visual analytics system, Prospector. By providing interactive partial dependence diagnostics, data scientists can understand how features affect the prediction overall. In addition, our support for localized inspection allows data scientists to understand how and why specific datapoints are predicted as they are, and supports tweaking feature values to see how the prediction responds. Our system is then evaluated using a case study involving a team of data scientists improving predictive models for detecting the onset of diabetes from electronic medical records.
Musically Informed Sonification for Chronic Pain Rehabilitation: Facilitating Progress & Avoiding Over-Doing
In self-directed chronic pain physical rehabilitation it is important that the individual can progress as physical capabilities and confidence grow. However, people with chronic pain often struggle to pass what they have identified as safe boundaries. At the same time, over-activity due to the desire to progress fast or function more normally may lead to setbacks. We investigate how musically-informed movement sonification can be used as an implicit mechanism to both avoid overdoing and facilitate progress during stretching exercises. We sonify an end target-point in a stretch exercise, using a stable sound (i.e., where the sonification is musically resolved) to encourage ending the movement and an unstable sound (i.e., musically unresolved) to encourage continuation. Results on healthy participants show that instability leads to progression further beyond the target-point while stability leads to a smoother stop beyond this point. We conclude by discussing how these findings should generalize to the chronic pain population.
KeDiary: Using Mobile Phones to Assist Patients in Recovering from Drug Addiction
Ketamine is an addictive drug that has been shown to inflict considerable physical and mental damage on users. Due in part to its low cost, ketamine has become one of the most popular club drugs among young adults and teenagers in Southeast Asia. This paper proposes a phone-based support system (KeDiary) with a Bluetooth-enabled device for the screening of saliva, as a means of assisting ketamine-dependent patients to self-monitor their ketamine use following acute withdrawal treatment. We also conducted a practical experiment to evaluate the feasibility of the proposed system, wherein three ketamine-dependent patients self-administered tests at least once per day over a period of three weeks. Follow-up interviews with the same users helped in the further refinement of the proposed self-monitoring system.
SESSION: Supporting Information Seeking
The 32 Days of Christmas: Understanding Temporal Intent in Image Search Queries
Temporal terms, such as ‘winter’, ‘Christmas’, or ‘January’ are often used in search queries for personal images. But how do people’s memories and perceptions of time match with the actual dates when their images were captured? We collected the temporal terms that 74 Flickr users used to search their own photo collections and compared them to the capture dates of the target images. We also conducted a larger study across several billion images, comparing user-applied tags for holidays and seasons to the dates the images were captured. We demonstrate that various query terms and tags can conflict with the actual dates photos were taken, for specific types of temporal terms up to 40% of the time. We conclude by highlighting implications for search systems where users are querying for personal content by date.
Influence of Content Layout and Motivation on Users’ Herd Behavior in Social Discovery
Social product discovery is an emerging paradigm that enables users to seek information and inspiration from peer-contributed content. Researchers have observed herd behaviors in social discovery, i.e., basing beliefs and decisions on what similarly situated others have done. In this paper, we explore the effects of content layout and motivation on users’ herd behaviors in social discovery. We conduct an eye-tracking study with 120 participants to compare goal- and action-oriented users’ behaviors on a grid versus waterfall style social discovery site. The results show that users have a higher tendency to herd on a grid-style website, more so for goal-oriented users.
Age-related Differences in the Content of Search Queries when Reformulating
This study investigated the change in the content of queries when performing reformulations in relation to age and task difficulty. Results showed that both generalization and specialization strategies were applied significantly more often for difficult tasks compared to simple tasks. Young participants were found to use the specialization strategy significantly more often than old participants. The generalization strategy was also used significantly more often by young participants, especially for difficult tasks. Young participants were found to reformulate for much longer than old participants. The semantic relevance of queries with the target information was found to be significantly higher for difficult tasks compared to simple tasks. It showed a decreasing trend across reformulations for old participants and remained constant for young participants, indicating that as old participants reformulated, they produced queries that were further away from the target information. Implications of these findings for the design of information search systems are discussed.
SESSION: Designing New Materials and Manufacturing Techniques
Steel-Sense: Integrating Machine Elements with Sensors by Additive Manufacturing
Many interactive devices use both machine elements and sensors, simultaneously but redundantly enabling and measuring the same physical function. We present Steel-Sense, an approach to joining these two families of elements to create a new type of HCI design primitive. We leverage recent developments in 3D printing to embed sensing in metal structures that are otherwise difficult to equip with sensors, and present four design principles, implementing (1) an electronic switch integrated within a ball bearing; (2) a voltage divider within a gear; (3) a variable capacitor embedded in a hinge; and (4) a pressure sensor within a screw. Each design demonstrates a different sensing principle, and signals its performance through (1) movement; (2) position; (3) angle; or (4) stress. We mirror our elements’ physical performance in a virtual environment, evaluate our designs electronically and structurally, and discuss future work and implications for HCI research.
xPrint: A Modularized Liquid Printer for Smart Materials Deposition
To meet the increasing requirements of HCI researchers who are looking into using liquid-based materials (e.g., hydrogels) to create novel interfaces, we present a design strategy for HCI researchers to build and customize a liquid-based smart material printing platform with off-the-shelf or easy-to-machine parts. For the hardware, we suggest a magnetic assembly-based modular design. These modularized parts can be easily and precisely reconfigured to meet different processing requirements such as mechanical mixing, chemical reaction, light activation, and solution vaporization. In addition, xPrint supports an open-source, highly customizable software design and simulation platform, which is applicable for simulating and facilitating smart material constructions. Furthermore, compared to inkjet or pneumatic syringe-based printing systems, xPrint has a large range of printable materials, from synthesized polymers to living cells of natural micro-organisms, with a printing resolution from 10μm up to 5mm (droplet size). In this paper, we introduce the system design in detail and three use cases to demonstrate the material variability and the customizability for users with different demands (e.g., designers, scientific researchers, or artists).
Cilllia: 3D Printed Micro-Pillar Structures for Surface Texture, Actuation and Sensing
This work presents a method for 3D printing hair-like structures on both flat and curved surfaces. It allows a user to design and fabricate hair geometries that are smaller than 100 microns. We built a software platform to let users quickly define the hair angle, thickness, density, and height. The ability to fabricate customized hair-like structures not only expands the library of 3D-printable shapes, but also enables us to design passive actuators and swipe sensors. We also present several applications that show how the 3D-printed hair can be used for designing everyday interactive objects.
Foldem: Heterogeneous Object Fabrication via Selective Ablation of Multi-Material Sheets
Foldem, a novel method of rapid fabrication of objects with multi-material properties, is presented. Our specially formulated Foldem sheet allows users to fabricate and easily assemble objects with rigid, bendable, and flexible properties using a standard laser cutter. The user begins by creating their design in a vector graphics software package. A laser cutter is then used to fabricate the design by selectively ablating/vaporizing one or more layers of the Foldem sheet to achieve the desired physical properties for each joint. Herein the composition of the Foldem sheet, as well as various design considerations taken into account while building and designing the method, are described. Sample objects made with Foldem are demonstrated, each showcasing the unique attributes of Foldem. Additionally, a novel method for carefully calibrating a laser cutter for precise ablation is presented.
SESSION: Eye Tracking Applications
A Model Relating Pupil Diameter to Mental Workload and Lighting Conditions
In this paper, we present a proof-of-concept approach to estimating mental workload by measuring the user’s pupil diameter under various controlled lighting conditions. Knowing the user’s mental workload is desirable for many application scenarios, ranging from driving a car, to adaptive workplace setups. Typically, physiological sensors allow inferring mental workload, but these sensors might be rather uncomfortable to wear. Measuring pupil diameter through remote eye-tracking instead is an unobtrusive method. However, a practical eye-tracking-based system must also account for pupil changes due to variable lighting conditions. Based on the results of a study with tasks of varying mental demand and six different lighting conditions, we built a simple model that is able to infer the workload independently of the lighting condition in 75% of the tested conditions.
Pointing while Looking Elsewhere: Designing for Varying Degrees of Visual Guidance during Manual Input
We propose using eye tracking to support interface use with decreased reliance on visual guidance. While the design of most graphical user interfaces takes visual guidance during manual input for granted, eye tracking allows distinguishing between the cases when manual input is conducted with or without guidance. We conceptualize the latter case as input with uncertainty that requires separate handling. We describe the design space of input handling by utilizing input resources available to the system, possible actions the system can realize, and various feedback techniques for informing the user. We demonstrate the particular action mechanisms and feedback techniques through three applications we developed for touch interaction on a large screen. We conducted a two-stage study of positional accuracy during target acquisition with varying visual guidance, to determine the selection range around a touch point due to positional uncertainty. We also conducted a qualitative evaluation of the example applications with participants to identify perceived utility and hand-eye coordination challenges while using interfaces with decreased visual guidance.
EyeGrip: Detecting Targets in a Series of Uni-directional Moving Objects Using Optokinetic Nystagmus Eye Movements
We propose EyeGrip, a novel yet simple technique that analyses eye movements to automatically detect the user’s objects of interest in a sequence of visual stimuli moving horizontally or vertically in front of the user’s view. We assess the viability of this technique in a scenario where the user looks at a sequence of images moving horizontally on the display while the user’s eye movements are tracked by an eye tracker. We conducted an experiment that shows the performance of the proposed approach. We also investigated the influence of the speed and maximum number of visible images on the screen on the accuracy of EyeGrip. Based on the experiment results, we propose guidelines for designing EyeGrip-based interfaces. EyeGrip can be considered an implicit gaze interaction technique with potential use in a broad range of applications such as large screens, mobile devices and eyewear computers. In this paper, we demonstrate the rich capabilities of EyeGrip with two example applications: 1) a mind reading game, and 2) a picture selection system. Our study shows that by selecting an appropriate speed and maximum number of visible images on the screen, the proposed method can be used in a fast scrolling task where the system accurately (87%) detects the moving images that are visually appealing to the user, stops the scrolling and brings the item(s) of interest back to the screen.
Eye-Trace: Segmentation of Volumetric Microscopy Images with Eyegaze
We introduce an image annotation approach for the analysis of volumetric electron microscopic imagery of brain tissue. The core task is to identify and link tubular objects (neuronal fibers) in images taken from consecutive ultrathin sections of brain tissue. In our approach an individual ‘flies’ through the 3D data at a high speed and maintains eye gaze focus on a single neuronal fiber, aided by navigation with a handheld gamepad controller. The continuous foveation on a fiber of interest constitutes an intuitive means to define a trace that is seamlessly recorded with a desktop eyetracker and transformed into precise 3D coordinates of the annotated fiber (skeleton tracing). In a participant experiment we validate the approach by demonstrating a tracing accuracy on the order of the radii of the traced fibers, with browsing speeds of up to 40 brain sections per second.
SESSION: Large Display Interaction
The Bicycle Barometer: Design and Evaluation of Cyclist-Specific Interaction for a Public Display
As cycling is increasingly promoted as an environment-friendly, cheap and even fast alternative, there exists an increasing need to civically involve the potentially engaged and opinionated user group of cyclists. Therefore, we designed and evaluated the Bicycle Barometer, an interactive bicycle count display that gathers opinions from cyclists and conveys real-time, multi-dimensional data to them regarding cycling behavior. Our user-centered design process focused on optimizing the user experience by comparing several alternative cyclist-specific interaction designs, which resulted in the combination of a pressure-sensitive floor mat, push button and low-resolution LED display. An in-the-wild evaluation study resulted in a set of design recommendations for cyclist-specific interaction, providing concrete insights into how a specifically targeted interaction method for a public display is able to afford engagement and enthusiasm from a particular target audience.
HandMark Menus: Rapid Command Selection and Large Command Sets on Multi-Touch Displays
Command selection on large multi-touch surfaces can be difficult, because the large surface means that there are few landmarks to help users build up familiarity with controls. However, people’s hands and fingers are landmarks that are always present when interacting with a touch display. To explore the use of hands as landmarks, we designed two hand-centric techniques for multi-touch displays — one allowing 42 commands, and one allowing 160 — and tested them in an empirical comparison against standard tab widgets. We found that the small version (HandMark-Fingers) was significantly faster at all stages of use, and that the large version (HandMark-Multi) was slower at the start but equivalent to tabs after people gained experience with the technique. There was no difference in error rates, and participants strongly preferred both of the HandMark menus over tabs. We demonstrate that people’s intimate knowledge of their hands can be the basis for fast and feasible interaction techniques that can improve the performance and usability of interactive tables and other multi-touch systems.
Glowworms and Fireflies: Ambient Light on Large Interactive Surfaces
Ambient light is starting to be commercially used to enhance the viewing experience for watching TV. We believe that ambient light can add value in meeting and control rooms that use large vertical interactive surfaces. Therefore, we equipped a large interactive whiteboard with a peripheral ambient light display and explored its utility for different scenarios by conducting two controlled experiments. In the first experiment, we investigated how ambient light can be used for peripheral notifications, and how perception is influenced by the user’s position and the type of work they are engaged in. The second experiment investigated the utility of ambient light for off-screen visualization. We condense our findings into several design recommendations that we then applied to application scenarios to show the versatility and usefulness of ambient light for large surfaces.
Off-Limits: Interacting Beyond the Boundaries of Large Displays
The size of information spaces often exceeds the limits of even the largest displays. This makes navigating such spaces through on-screen interactions demanding. However, if users imagine the information space extending in a plane beyond the display’s boundaries, they might be able to use the space beyond the display for input. This paper investigates Off-Limits, an interaction concept extending the input space of a large display into the space beyond the screen through the use of mid-air pointing. We develop and evaluate the concept through three empirical studies in one-dimensional space: First, we explore benefits and limitations of off-screen pointing compared to touch interaction and mid-air on-screen pointing; next, we assess users’ accuracy in off-screen pointing to model the distance-to-screen vs. accuracy trade-off; and finally, we show how Off-Limits is further improved by applying that model to the naïve approach. Overall, we found that the final Off-Limits concept provides significant performance benefits over on-screen and touch pointing conditions.
SESSION: IoT and HCI ASAP!
Pressing Not Tapping: Comparing a Physical Button with a Smartphone App for Tagging Music in Radio Programmes
A physical hardware prototype, The Button, was developed as a research probe to understand how radio audiences could discover, organise and consume music radio content at the touch of a physical button, the only control on a tiny handheld device. The Button allows listeners to tag tracks they like via a simple one-touch interaction method, and save them to a non-commercial online playlist service: BBC Playlister. Users can then export these tags to other music streaming platforms, such as Spotify, Deezer, etc. Following a user-centric design process, a large in-the-wild study was conducted over several weeks to investigate the value of The Button in aiding listeners’ discovery of music. One group of participants was given a mobile phone app designed to facilitate tagging music heard on BBC radio stations; two other groups were given both the app and a Button (in one of two hardware versions). The findings revealed that Button users made significantly more tags on average than app users, indicating that a physical device could add significant value for radio listeners who want to tag music. Participants valued the simple one-touch interaction method, especially in situations where their smartphones were out of reach or contextual constraints meant that interaction with a complex device was undesirable or difficult.
PaperID: A Technique for Drawing Functional Battery-Free Wireless Interfaces on Paper
We describe techniques that allow inexpensive, ultra-thin, battery-free Radio Frequency Identification (RFID) tags to be turned into simple paper input devices. We use sensing and signal processing techniques that determine how a tag is being manipulated by the user via an RFID reader and show how tags may be enhanced with a simple set of conductive traces that can be printed on paper, stencil-traced, or even hand-drawn. These traces modify the behavior of contiguous tags to serve as input devices. Our techniques provide the capability to use off-the-shelf RFID tags to sense touch, cover, overlap of tags by conductive or dielectric (insulating) materials, and tag movement trajectories. Paper prototypes can be made functional in seconds. Due to the rapid deployability and low cost of the tags used, we can create a new class of interactive paper devices that are drawn on demand for simple tasks. These capabilities allow new interactive possibilities for pop-up books and other papercraft objects.
RapID: A Framework for Fabricating Low-Latency Interactive Objects with RFID Tags
RFID tags can be used to add inexpensive, wireless, batteryless sensing to objects. However, quickly and accurately estimating the state of an RFID tag is difficult. In this work, we show how to achieve low-latency manipulation and movement sensing with off-the-shelf RFID tags and readers. Our approach couples a probabilistic filtering layer with a Monte Carlo sampling-based interaction layer, preserving uncertainty in tag reads until they can be resolved in the context of interactions. This allows designers’ code to reason about inputs at a high level. We demonstrate the effectiveness of our approach with a number of interactive objects, along with a library of components that can be combined to make new designs.
Snap-To-It: A User-Inspired Platform for Opportunistic Device Interactions
The ability to quickly interact with any nearby appliance from a mobile device would allow people to perform a wide range of one-time tasks (e.g., printing a document in an unfamiliar office location). However, users currently lack this capability, and must instead manually configure their devices for each appliance they want to use. To address this problem, we created Snap-To-It, a system that allows users to opportunistically interact with any appliance simply by taking a picture of it. Snap-To-It shares the image of the appliance a user wants to interact with over a local area network. Appliances then analyze this image (along with the user’s location and device orientation) to see if they are being “selected,” and deliver the corresponding control interface to the user’s mobile device. Snap-To-It’s design was informed by two technology probes that explored how users would like to select and interact with appliances using their mobile phone. These studies highlighted the need to be able to select hardware and software via a camera, and identified several novel use cases not supported by existing systems (e.g., interacting with disconnected objects, transferring settings between appliances). In this paper, we show how Snap-To-It’s design is informed by our probes and how developers can utilize our system. We then show that Snap-To-It can identify appliances with over 95.3% accuracy, and demonstrate through a two-month deployment that our approach is robust to gradual changes to the environment.
SESSION: Smart Homes, Devices and Data
“She’ll just grab any device that’s closer”: A Study of Everyday Device & Account Sharing in Households
Many technologies assume a single user will use an account or device. But account and device sharing situations (when 2+ people use a single device or account) may arise during everyday life. We present results from a multiple-methods study of device and account sharing practices among household members and their relations. Among our findings are that device and account sharing was common, and mobile phones were often shared despite being considered “personal” devices. Based on our study results, we organize sharing practices into a taxonomy of six sharing types: distinct patterns of what, why, and how people shared. We also present two themes that cut across sharing types: that (1) trust in sharees and (2) convenience highly influenced sharing practices. Based on these findings, we discuss implications for study and technology design.
“Just whack it on until it gets hot”: Working with IoT Data in the Home
This paper presents findings from a co-design project that aims to augment the practices of professional energy advisors with environmental data from sensors deployed in clients’ homes. Premised on prior ethnographic observations, we prototyped a sensor platform to support the work of tailoring advice-giving to particular homes. We report on the deployment process and the findings to emerge, particularly the work involved in making sense of or accounting for the data in the course of advice-giving. Our ethnomethodological analysis focuses on the ways in which data is drawn upon as a resource in the home visit, and how understanding and advice-giving turns upon unpacking the indexical relationship of the data to the situated goings-on in the home. This insight, coupled with further design workshops with the advisors, shaped requirements for an interactive system that makes the sensor data available for visual inspection and annotation to support the situated sense-making that is key to giving energy advice.
Designing for Domestic Memorialization and Remembrance: A Field Study of Fenestra in Japan
We describe the design, implementation, and deployment of Fenestra, a domestic technology embodied in the form of a wirelessly connected round mirror, photo frame, and candle that displays photos of departed loved ones. Fenestra’s interaction design, form, and materials are inspired by Japanese domestic practices of memorializing departed loved ones with a home altar called butsudan. We deployed Fenestra in three Japanese households to explore how this design artifact might support everyday domestic practices of memorialization, and where complications might potentially emerge. Findings reveal that a range of outcomes emerged across our participants’ experiences of living with Fenestra, ranging from profound remembrance to unexpected uses to unsettling encounters. These findings are interpreted to present opportunities for future research and practice initiatives in the HCI community.
Integrating the Smart Home into the Digital Calendar
With the growing adoption of smart home technologies, inhabitants are faced with the challenge of making sense of the data that their homes can collect to configure automated behaviors that benefit their routines. Current commercial smart home interfaces usually provide information on individual devices instead of a more comprehensive overview of a home’s behavior. To reduce the complexity of smart home data and integrate it better into inhabitants’ lives, we turned to the familiar metaphor of a calendar and developed our smart home interface Casalendar. In order to investigate the concept and evaluate our goals to facilitate the understanding of smart home data, we created a prototype that we installed in two commercial smart homes for a month. The results we present in this paper are based on our analysis of user data from questionnaires, semi-structured interviews, participant-driven audio and screenshot feedback as well as logged interactions with our system. Our findings exposed advantages and disadvantages of this metaphor, emerging usage patterns, privacy concerns and challenges for information visualization. We further report on implications for design and open challenges we revealed through this work.
SESSION: Seams of Craft, Design and Fabrication
Expanding on Wabi-Sabi as a Design Resource in HCI
The material foundations of computer systems and interactive technology are a topic that has gained increased interest within the HCI community in recent years. In this paper we discuss this topic through the Japanese concept of Wabi-Sabi, a philosophy that embraces three basic realities of the material world: ‘nothing lasts’, ‘nothing is finished’, and ‘nothing is perfect’. We use these concepts to reflect on four unique interactive artefacts, which all in different ways embrace aspects of Wabi-Sabi, in terms of their design gestalt and materiality, but also in terms of use practices. Further, we use our analysis to articulate three high-level principles that may help address the long-term realities faced in physical interaction design, and the design of interactive systems in general.
The Hybrid Bricolage: Bridging Parametric Design with Craft through Algorithmic Modularity
The digital design space, unlimited by its virtual freedom, differs from traditional craft, which is bounded by a fixed set of given materials. We study how to introduce parametric design tools to craftspersons. Our hypothesis is that the arrangement of parametric design in modular representation, in the form of a catalog, can assist makers unfamiliar with this practice. We evaluate this assumption in the realm of bag design, through a Honeycomb Smocking Pattern Catalog and custom Computer-Aided Smocking (CAS) design software. We describe the technical work and designs made with our tools, present a user study that validates our assumptions, and conclude with ideas for future work developing additional tools to bridge computational design and craft.
ExoSkin: On-Body Fabrication
There is a long tradition for crafting wearable objects directly on the body, such as garments, casts, and orthotics. However, these high-skill, analog practices have yet to be augmented by digital fabrication techniques. In this paper, we explore the use of hybrid fabrication workflows for on-body printing. We outline design considerations for creating on-body fabrication systems, and identify several human, machine, and material challenges unique to this endeavor. Based on our explorations, we present ExoSkin, a hybrid fabrication system for designing and printing digital artifacts directly on the body. ExoSkin utilizes a custom built fabrication machine designed specifically for on-body printing. We demonstrate the potential of on-body fabrication with a set of sample workflows, and share feedback from initial observation sessions.
Mimetic Machines: Collaborative Interventions in Digital Fabrication with Arc
This paper examines the collaborative process of developing Arc, a computer numerical controlled (CNC) engraving tool for ceramics that offers a new window onto traditional forms of craft. In reflecting on this case and scholarship from the social sciences, we make two contributions. First, we show that fabrication tools may integrate multiple and distinct roles (as copiers, translators and connectors) in their production of form, selectively limiting the agency of the maker and machine. Second, we situate small-scale manufacturing in a wider historical context of “mimetic machinery”: machines for mechanical reproduction that draw their symbolic power from a material connection with the phenomena represented (in this case, sound and gesture). We end by sharing lessons learned for fabrication research based on this study.
SESSION: Body and Fashion
Embodied Sketching
Designing bodily experiences is challenging. In this paper, we propose embodied sketching as a way of practicing design that involves understanding and designing for bodily experiences early in the design process. Embodied sketching encompasses ideation methods that are grounded in, and inspired by, the lived experience, and includes the social and spatial settings as design resources in the sketching. Embodied sketching is also based on harnessing play and playfulness as the principal way to elicit creative physical engagement. We present three different ways to implement and use embodied sketching in the application domain of co-located social play. These include bodystorming of ideas, co-designing with users, and sensitizing designers. The latter helps to uncover and articulate significant as well as novel embodied experiences, whilst the first two are useful for developing a better understanding of possible design resources.
“I don’t Want to Wear a Screen”: Probing Perceptions of and Possibilities for Dynamic Displays on Clothing
This paper explores the role dynamic textile displays play in relation to personal style: What does it mean to wear computationally responsive clothing, and why would one be motivated to do so? We developed a novel textile display technology, called Ebb, and created several woven and crochet fabric swatches that explored clothing-specific design possibilities. We engaged fashion designers and non-designers in imagining how Ebb would integrate into their design practice or personal style of dressing. Participants evaluated the appeal and utility of clothing-based displays according to a very different set of criteria than traditional screen-based computational displays. Specifically, the slowness, low resolution, and volatility of Ebb tended to be seen as assets rather than technical limitations in the context of personal style. Additionally, participants envisioned various ways that ambiguous, ambient, and abstract displays of information could prompt new experiences in their everyday lives. Our paper details the complex relationships between display and personal style, and offers a new design metaphor and an extension of Gaver et al.’s original descriptions of ambiguity in order to guide the design of clothing-based displays for everyday life.
BeUpright: Posture Correction Using Relational Norm Intervention
Research shows the critical role of social relationships in behavior change, and the advancement of mobile technologies brings new opportunities for using online social support in persuasive applications. In this paper, we propose the Relational Norm Intervention (RNI) model for behavior change, which involves two individuals: a target user and a helper. The RNI model uses Negative Reinforcement and Other-Regarding Preferences as motivating factors for behavior change. The model features the passive participation of a helper, who undergoes artificially generated discomforts (e.g., limited access to a mobile device) when the target user acts against a target behavior. Based on in-depth discussions from a two-phase design workshop, we designed and implemented BeUpright, a mobile application employing the RNI model to correct the sitting posture of a target user. We also conducted a two-week study to evaluate the effectiveness and user experience of BeUpright. The study showed that the RNI model has the potential to increase efficacy, in terms of behavior change, compared to conventional notification approaches. The most influential factor of the RNI model in changing the behavior of target users was the intention to avoid discomforting their helpers. The RNI model also showed potential to help unmotivated individuals change their behavior. We discuss the mechanism of the RNI model in relation to prior literature on behavior change, and the implications of exploiting discomfort in mobile behavior-change services.
Body Integrated Programmable Joints Interface
Physical interfaces with actuation capability enable the design of wearable devices that augment human physical capabilities. Extra machine joints integrated with our biological body may allow us to achieve additional skills through programmatic reconfiguration of the joints. To that end, we present a wearable multi-joint interface that offers “synergistic interactions” by providing additional fingers, structural supports, and physical user interfaces. Motions of the machine joints can be controlled by interfacing with our muscle signals, as a direct extension of our body. On the basis of implemented applications, we demonstrate our design guidelines for creating a desirable human-machine synergy, one that enhances our innate capabilities without replacing or obstructing them, and without enforcing the augmentation. Finally, we describe the technical details of our muscle-based control method and the implementations of the presented applications.
Mirror Mirror: An On-Body T-shirt Design System
Virtual fitting rooms equipped with magic mirrors let people evaluate fashion items without actually putting them on. The mirrors superimpose virtual clothes on the user’s reflection. We contribute the Mirror Mirror system, which not only supports mixing and matching of existing fashion items, but also lets users design new items in front of the mirror and export designs to fabric printers. While much of the related work deals with interactive cloth simulation on live user data, we focus on collaborative design activities and explore various ways of designing on the body with a mirror.