Program

Note: the titles are linked to the papers on ACM (free access)

Below is the ISWC 2013 program with all accepted papers and posters. Even with an 8-year high in submissions, this year's acceptance rate for oral presentations was 22.7%, making for a highly selective scientific papers track!


Registration rates and further information are available here.

Note that all UbiComp attendees are welcome to attend these sessions, just as all ISWC attendees are welcome to attend any session in UbiComp 2013's program.



Sunday/Monday, 8th/9th of September 2013

The workshops are held on the first two days of the UbiComp/ISWC conference week.

For the list of accepted workshops and links to the workshop websites, click here
(or download as a PDF booklet)



Tuesday, 10th of September 2013

08:15 - 09:00 (Lobby) Welcome Breakfast

09:00 - 09:45 (G1) Joint welcome to ISWC 2013 and UbiComp 2013
09:45 - 10:45 (G1) Joint ISWC/UbiComp Keynote
Creating the Magic with Information Technology
Markus Gross
ETH Zurich, Switzerland
Advanced information technology has become a key enabler in modern media and entertainment. This comprises the production of animation or live action films, the design of next-generation toys and consumer products, or the creation of richer experiences in theme parks. At Disney Research Zurich, more than 200 researchers and scientists are working at the forefront of innovation in entertainment technology. Our research covers a wide spectrum of different fields, including graphics and animation, human computer interaction, wireless communication, computer vision, materials and design, robotics, and more. In this talk I will demonstrate how innovations in information technology and computational methods developed at Disney Research are serving as platforms for future content creation. I will emphasize the transformative power of 3D printing, digital fabrication, and our increasing ability to make the whole world responsive and interactive.

Bio. Dr. Gross is a professor of Computer Science at ETH Zurich, head of the Computer Graphics Laboratory, and the director of Disney Research in Zurich. His research interests include computer graphics as well as media and entertainment technology. He has published more than 300 scientific papers and he holds several patents on core graphics and media technologies. Gross was chair of the technical program committee of ACM SIGGRAPH 2005 and he serves on the scientific advisory boards of various research organizations. Gross is a fellow of the ACM, a fellow of the EUROGRAPHICS Association, and a member of the German Academies of Science Leopoldina and Berlin-Brandenburg. He received the SWISS ICT Champions Award in the category People in 2011 and a Technical Achievement Award from the Academy of Motion Picture Arts and Sciences in 2013.
10:45 - 11:15 (Lobby) Coffee Break
11:15 - 11:45 (G4) Madness Session (chair: Masaaki Fukumoto)
11:45 - 12:45 (G4) Session 1: Locations (chair: Oliver Amft)
Improved ActionSLAM for long-term indoor tracking with wearable motion sensors
Michael Hardegger, Daniel Roggen, Gerhard Troester
ETH Zurich, Switzerland
We present an indoor tracking system based on two wearable inertial measurement units for tracking in home and workplace environments. It applies simultaneous localization and mapping with user actions as landmarks, themselves recognized by the wearable sensors. The approach is thus fully wearable and no pre-deployment effort is required. We identify weaknesses of past approaches and address them by introducing heading drift compensation, stance detection adaptation, and ellipse landmarks. Furthermore, we present an environment-independent parameter set that allows for robust tracking in daily-life scenarios. We assess the method on a dataset with five participants in different home and office environments, totaling 8.7h of daily routines and 2500m of travelled distance. This dataset is publicly released. The main outcome is that our algorithm converges 87% of the time to an accurate approximation of the ground truth map (0.52m mean landmark positioning error) in scenarios where previous approaches fail.
(Parallel UbiComp sessions: Crowdsourcing I, At Work, Context Sensing)
Scaled Monocular SLAM for Walking People
Daniel Gutierrez, J.J. Guerrero
Universidad de Zaragoza, Spain
In this paper we present a full-scaled real-time monocular SLAM system using only a wearable camera. Assuming that the person is walking, the perception of the head's oscillatory motion in the initial visual odometry estimate allows for the computation of a dynamic scale factor for static windows of N camera poses. Improving on this method, we introduce a consistency test to detect non-walking situations and propose a sliding-window approach to reduce the delay in the update of the scaled trajectory. We evaluate our approach experimentally on an unscaled visual odometry estimate obtained with a wearable camera along a path of 886 m. The results show a significant improvement with respect to the initial unscaled estimate, with a mean relative error of 0.91% over the total trajectory length.
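The core idea above — recovering metric scale from the walking-induced head oscillation visible in the unscaled odometry — can be sketched in a few lines. This is an illustrative approximation only (the function name, the zero-crossing step detector, and the fixed step length are assumptions, not the authors' implementation):

```python
import numpy as np

def walking_scale_factor(positions, step_length=0.75):
    """Estimate a metric scale factor for an unscaled visual-odometry
    trajectory of a walking person (illustrative sketch).

    positions: (N, 3) array of unscaled camera positions; the vertical
    head oscillation is assumed to lie in the third column.
    step_length: assumed average step length in metres.
    """
    z = positions[:, 2]
    # Count steps as zero crossings of the mean-removed vertical signal;
    # two crossings correspond to one full oscillation, i.e. one step.
    crossings = np.sum(np.diff(np.sign(z - z.mean())) != 0)
    steps = crossings / 2.0
    # Unscaled path length travelled over the window.
    path = np.sum(np.linalg.norm(np.diff(positions, axis=0), axis=1))
    if path == 0:
        return 1.0
    # Metric distance implied by the steps, divided by the unscaled distance.
    return steps * step_length / path
```

Multiplying a window of unscaled poses by this factor yields an approximately metric trajectory, which is the role the per-window dynamic scale factor plays in the abstract above.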
12:45 - 14:15 (Lobby) Lunch
14:15 - 15:45 (G4) Session 2: Activity Recognition (chair: Louis Atallah)
Confidence-based Multiclass AdaBoost for Physical Activity Monitoring
Attila Reiss, Gustaf Hendeby, Didier Stricker
German Research Center for Artificial Intelligence (DFKI), Germany
Physical activity monitoring has recently become an important topic in wearable computing, motivated by e.g. healthcare applications. However, new benchmark results show that the difficulty of the complex classification problems exceeds the potential of existing classifiers. Therefore, this paper proposes the ConfAdaBoost.M1 algorithm. The proposed algorithm is a variant of AdaBoost.M1 that incorporates well-established ideas for confidence-based boosting. The method is compared to the most commonly used boosting methods on benchmark datasets from the UCI machine learning repository, and it is also evaluated on an activity recognition and an intensity estimation problem, including a large number of physical activities from the recently released PAMAP2 dataset. The presented results indicate that the proposed ConfAdaBoost.M1 algorithm significantly improves the classification performance on most of the evaluated datasets, especially for larger and more complex classification tasks.
(Parallel UbiComp sessions: Home Heating, Health I, Location-based Services I)
A Hybrid Unsupervised/Supervised Model for Group Activity Recognition
Tomoya Hirano, Takuya Maekawa
Osaka University, Japan
The new method proposed here recognizes activities performed by a group of users (e.g., attending a meeting, playing sports, and participating in a party) by using sensor data obtained from the users. Note that such group activities (GAs) have characteristics that differ from those of single user activities. For example, the number of users who participate in a GA is different for each activity. The number of meeting participants, for instance, may sometimes be different for each meeting. Also, a user may play different roles (e.g., `moderator' and `presenter' roles) in meetings on different days. We introduce the notion of role into our GA recognition model and try to capture the intrinsic characteristics of GAs with a hybrid unsupervised/supervised approach.
Personalized Mobile Physical Activity Recognition
Attila Reiss, Didier Stricker
German Research Center for Artificial Intelligence (DFKI), Germany
Personalization of activity recognition has become a topic of interest recently. This paper presents a novel concept, using a set of classifiers as general model, and retraining only the weight of the classifiers with new labeled data from a previously unknown subject. Experiments with different methods based on this concept show that it is a valid approach for personalization. An important benefit of the proposed concept is its low computational cost compared to other approaches, making it also feasible for mobile applications. Moreover, more advanced classifiers (e.g. boosted decision trees) can be combined with the new concept, to achieve good performance even on complex classification tasks. Finally, a new algorithm is introduced based on the proposed concept, which outperforms existing methods, thus further increasing the performance of personalized applications.
Reducing User Intervention in Incremental Activity Recognition for Assistive Technologies
Julien Rebetez, Hector F. Satizabal, Andres Perez-Uribe
HEIG-VD / HES-SO, Switzerland
Activity recognition has recently gained a lot of interest, and there already exist several methods to detect human activities based on wearable sensors. Most of the existing methods rely on a database of labelled activities that is used to train an offline activity recognition system. This paper presents an approach to build an online activity recognition system that does not require any a priori labelled data. The system incrementally learns activities by actively querying the user for labels. To choose when the user should be queried, we compare a method based on random sampling and another that uses a Growing Neural Gas (GNG). The use of GNG helps reduce the number of user queries by 20% to 30%.
15:45 - 16:15 (Lobby) Coffee Break
16:15 - 17:00 (G4) The ISWC 2013 Gadget Show v.2.0! (chair: Daniel Roggen)
Let's come together for ISWC's traditionally spectacular (and improved) Gadget Show!
The Gadget Show is not only about live demonstrations of ISWC papers and posters: you can bring any gadget and demonstrate it in front of everyone. On-site registration sheets will be available at the registration desk, or you can simply join the end of the presentation queue during the session.
(Parallel UbiComp sessions: Crowdsourcing II, Emotion and Behavior I, Authentication)
17:00 - 17:45 (G4) The ISWC 2013 Industry Session (chair: Ulf Blanke)
A Review of the State of the Art in the Head-Mounted Displays Industry
Bernard Kress
Google [X], USA

Head-Up Displays (HUDs) and Helmet- or Head-Mounted Displays (HMDs), as well as see-through gun sights, have been extensively investigated over the past decades for military applications by major defense contractors. While the first see-through HMD optical combiners were based on conventional reflective/refractive optics, the first and most efficient HUD combiner technologies were instead based on holographic optics. There is a multitude of HMD optical architectures available on the market today (in both the defense and consumer electronics markets), designed along a wide range of different requirements, each with its respective advantages and shortcomings. We will review the state of the art in this industry.

Bio. For over 20 years, Bernard has made significant scientific contributions as a researcher, professor, consultant, advisor, instructor, and author, making major contributions to digital micro-optical systems for consumer electronics, generating IP, and teaching and transferring technological solutions to industry. Many of the world's largest producers of optics and photonics products have consulted with him on a wide range of optics and photonics technologies, including laser materials processing, optical security, optical telecom/datacom, optical data storage, optical computing, optical motion sensors, pico-projectors, light emitting diode displays, optical gesture sensing, three-dimensional remote sensing, digital image processing, and biotechnology sensors.

Bernard has generated 28 patents, of which nine have been granted in the United States, nine have been granted in Europe, two are awaiting filing numbers, and eight are pending. He has published four books, a book chapter, 88 refereed publications and proceedings, and numerous technical publications. He has also been involved in several European research projects in micro-optics, including the Eureka Flat Optical Technology and Applications (FOTA) Project and the Network for Excellence in Micro-Optics (NEMO) Project. Bernard is currently working with the Glass group at Google[X] labs in Mountain View, CA.

(Parallel UbiComp session: Video Sessions)
Design and Wearable Computing
Sonny Vu
Misfit Wearables, USA
Bio. Founder of Misfit Wearables, makers of highly wearable computing products, including the award-winning Shine, an elegant activity monitor. Founder of AgaMatrix, makers of the world's first iPhone-connected hardware medical device (Red Dot & GOOD Design Awards). Built AgaMatrix from a two-person start-up to shipping 15+ FDA-cleared medical device products, 1B+ biosensors, 3M+ glucose meters for diabetics. Worked at Microsoft Research on machine learning / linguistic technologies. Studied math (BS) at UIUC and linguistics (PhD) under Noam Chomsky at MIT. Knows a number of interesting languages and is a patron of good product design. Believes an era of wearable computing is coming soon where UX design will be geared towards glance-able displays as well as non-visual modalities.
Contact/follow on Twitter: @SonnyVu
How smartphones learned to feel your ambient temperature and humidity
Martin Wirz
Sensirion, Switzerland
Modern smartphones contain various sensors which make them truly intelligent. Thanks to these sensors, smartphones help you navigate through your daily life and keep you informed about things that matter to you. But this is not the whole story. Modern smartphones can also feel. Thanks to Sensirion's hardware and software technology, they are able to measure ambient temperature and humidity at any time and anywhere you are. In this talk, I will tell the story of how this was made possible, the challenges we faced, and how we addressed them.
Bio. Martin Wirz received his MSc ETH in 2008 from ETH Zürich. Afterwards, he joined the Wearable Computing Laboratory at ETH Zürich as a research assistant and received his Dr. Sc. ETH (PhD) degree in 2013. In his research, he focused on crowd sensing, context-aware systems and social network analysis. In 2013, he started working as a Product Manager at Sensirion AG, Switzerland where he is responsible for mobile software components which power the temperature and humidity sensor integrated in mobile devices. More Information: http://smart.sensirion.com
18:00 - 20:00LobbyThe ISWC 2013 Juried Design Exhibition
Click here for the list of all accepted designs



Wednesday, 11th of September 2013

09:30 - 10:00 (G4) Madness Session (chair: Masaaki Fukumoto)
10:00 - 10:30 (Lobby) Coffee Break
10:30 - 12:15 (G4) Session 3: Ins and Outs (chair: Tsutomu Terada)
Conductive Rubber Electrodes for Earphone-Based Eye Gesture Input Interface
Hiroyuki Manabe, Masaaki Fukumoto, Tohru Yagi
NTT DOCOMO, Japan
An eartip made of conductive rubber providing bio-potential electrodes is proposed for a daily-use earphone-based eye gesture input interface. Several prototypes, each with three electrodes to capture Electrooculogram (EOG), are implemented on earphones and examined. Experiments with one subject over a 10 day period reveal that all prototypes capture EOG similarly but they differ as regards stability of the baseline and motion artifacts. Another experiment conducted on a simple eye-controlled application with six subjects shows that the proposed prototype minimizes motion artifacts and offers good performance. We conclude that conductive rubber with Ag filler is the most suitable setup for daily-use.
(Parallel UbiComp sessions: Activity Recognition, Hardware, Domestic Computing)
Sensor-Embedded Teeth for Oral Activity Recognition
Cheng-Yuan Li, Yen-Chang Chen, Wei-Ju Chen, Polly Huang, Hao-hua Chu
National Taiwan University, Taipei, Taiwan
This paper presents the design and implementation of a wearable oral sensory system that recognizes human oral activities, such as chewing, drinking, speaking, and coughing. We conducted an evaluation of this oral sensory system in a laboratory experiment involving 8 participants. The results show 93.8% oral activity recognition accuracy when using a person-dependent classifier and 59.8% accuracy when using a person-independent classifier.
ThermOn, a Thermo-musical Interface for an Enhanced Emotional Experience
Shimon Akiyama, Katsunari Sato, Yasutoshi Makino, Takashi Maeno
Keio University, Japan
This report proposes a thermal media system, ThermOn, which enables users to feel dynamic hot and cold sensations on their body corresponding to the sound of music. Thermal sense plays a significant role in human recognition of environments and influences human emotions. By employing thermal sense in the music experience, which also greatly affects human emotions, we have created a new medium with an unprecedented emotional experience. With ThermOn, a user feels enhanced excitement and comfort, among other responses. For the initial prototype, headphone-type interfaces were implemented using a Peltier device, which allows users to feel thermal stimuli on their ears. Along with the hardware, a thermal-stimulation model that takes into consideration the characteristics of human thermal perception was designed. The prototype device was verified using two methods: a psychophysical method, which measures the skin potential response, and a psychometric method using a Likert-scale questionnaire and open-ended interviews. The experimental results suggest that ThermOn (a) changes the impression of music, (b) provides comfortable feelings, and (c) alters the listener's ability to concentrate on music in the case of a rock song. Moreover, these effects were shown to change based on the way thermal stimuli were added to music (such as temporal correspondence) and on the type of stimuli (warming or cooling). From these results, we conclude that the ThermOn system has the potential to enhance the emotional experience of listening to music.
Detecting Fabric Folds through Stitched Sensors
Guido Gioberto, James Coughlin, Kaila Bibeau, Lucy Dunne
University of Minnesota, USA
In this paper we describe a novel method for detecting bends and folds in fabric structures. Bending and folding can be used to detect human joint angles directly, or to detect possible errors in the signals of other joint-movement sensors due to fabric folding. Detection is achieved through measuring changes in the resistance of a complex stitch, formed by an industrial coverstitch machine using an un-insulated conductive yarn, on the surface of the fabric. We evaluate self-intersecting folds which cause short-circuits in the sensor, creating a quasi-binary resistance response, and non-contact bends, which deform the stitch structure and result in a more linear response. Folds and bends created by human movement were measured on the dorsal and lateral knee of both a robotic mannequin and a human. Preliminary results are promising. Both dorsal and lateral stitches showed repeatable characteristics during testing on a mechanical mannequin and a human.
12:15 - 14:15 (Lobby) Lunch and Poster Session
Click here for the list of all accepted posters
14:15 - 15:45 (G4) Session 4: Context and Awareness (chair: Tom Martin)
Ultrasound-based movement sensing, gesture-, and context-recognition
Hiroki Watanabe, Tsutomu Terada, Masahiko Tsukamoto
Kobe University, Japan
Wearable computing technologies attract a great deal of attention for context-aware systems, which recognize user context using wearable sensors. Conventional context-aware systems use accelerometers or magnetic sensors, but these sensors need a wired or wireless connection to a storage or data-processing device such as a PC. Conventional microphone-based context-recognition methods can capture surrounding context by audio processing, but they cannot recognize complex user motions. In this paper, we propose a context recognition method using sound-based gesture recognition. In our system, the user wears a microphone and small speakers, which generate ultrasonic sound, on his/her body. The system recognizes gestures on the basis of the volume of the generated sound and the Doppler effect: the former indicates the distance between the neck and wrists, and the latter indicates the speed of motions. The speaker just transmits ultrasonic sound and the recording device, an ordinary voice recorder, just records it, so there is no need to communicate with a storage device. Moreover, since we use ultrasonic sound, our method is robust to different sound environments. Evaluation results confirmed that when there was no environmental sound generated by other people, the recognition rate was 86.6% on average. In the presence of environmental sound from others, the recognition rate with the proposed method was 64.7%, compared to 57.3% without it.
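The Doppler relation that underlies the motion-speed estimate described above can be sketched with the textbook moving-source model (the carrier frequency in the usage note is an assumed example; the paper's actual signal processing differs):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C

def radial_speed_from_doppler(f_emitted, f_observed, c=SPEED_OF_SOUND):
    """Radial speed (m/s, positive = approaching) of a moving ultrasound
    speaker relative to a stationary microphone, from the observed
    Doppler-shifted frequency.

    Moving-source model: f_observed = f_emitted * c / (c - v),
    which rearranges to v = c * (1 - f_emitted / f_observed).
    """
    return c * (1.0 - f_emitted / f_observed)
```

For example, a 40 kHz carrier observed at about 40.117 kHz corresponds to roughly 1 m/s of wrist motion toward the microphone, which is the kind of shift a body-worn recorder could resolve.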
(Parallel UbiComp sessions: Novel Interfaces, Mobility, Location-Based Services II)
On Preserving Statistical Characteristics of Accelerometry Data using their Empirical Cumulative Distribution
Nils Y. Hammerla, Reuben Kirkham, Peter Andras, Thomas Ploetz
Newcastle University, United Kingdom
The majority of activity recognition systems in wearable computing rely on a set of statistical measures, such as means and moments, extracted from short frames of continuous sensor measurements to perform recognition. These features implicitly quantify the distribution of data observed in each frame. However, feature selection remains challenging and labour intensive, rendering a more generic method to quantify distributions in accelerometer data much desired. In this paper we present the ECDF representation, a novel approach to preserve characteristics of arbitrary distributions for feature extraction which is particularly suitable for embedded applications. In extensive experiments on 6 publicly available datasets we demonstrate that it outperforms common approaches to feature extraction across a wide variety of tasks.
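The gist of an ECDF-style representation — sampling each axis's inverse empirical CDF at fixed probabilities so the frame's value distribution is captured without hand-picked statistics — can be illustrated as follows. Treat the details (number of components, the appended per-axis mean) as assumptions for illustration rather than the paper's exact layout:

```python
import numpy as np

def ecdf_features(frame, n=15):
    """ECDF-style feature vector for one frame of accelerometer data
    (illustrative sketch).

    frame: (T, D) array, T samples by D sensor axes.
    Returns, per axis, the inverse empirical CDF sampled at n equally
    spaced probabilities (i.e. n quantiles), plus the per-axis mean,
    giving a fixed-length descriptor of each axis's distribution.
    """
    frame = np.asarray(frame, dtype=float)
    probs = np.linspace(0.0, 1.0, n)
    # n quantiles per axis: a discretized inverse empirical CDF.
    feats = [np.quantile(frame[:, d], probs) for d in range(frame.shape[1])]
    feats.append(frame.mean(axis=0))  # per-axis mean as an extra summary
    return np.concatenate(feats)
```

A 3-axis frame with n=15 thus yields a 48-dimensional feature vector (15 quantiles per axis plus 3 means) that can be fed to any standard classifier in place of hand-crafted statistical features.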
Preference, Context and Communities: A Multi-faceted Approach to Predicting Smartphone App Usage Patterns
Ye Xu†, Mu Lin†, Hong Lu◊, Giuseppe Cardone*, Nicholas Lane‡, Zhenyu Chen†, Andrew Campbell†, Tanzeem Choudhury∇
†Dartmouth College, ◊Intel Labs, *University of Bologna, ‡Microsoft Research Asia, ∇Cornell University
Nowadays, users are overwhelmed by the ever-growing number of smartphone apps they can choose from. Reliable smartphone app prediction that benefits both users and phone system performance is very desirable. However, real-world smartphone app usage behavior is a complex phenomenon driven by multiple factors, from individual users to broader user communities. In this paper, we develop an app usage prediction model that leverages three key everyday factors that affect app decisions: (1) intrinsic user app preferences and user historical patterns; (2) user activities and the environment as observed through sensor-based contextual signals; and (3) the shared aggregate patterns of app behavior that appear in specific user communities. While rapid progress has been made recently in smartphone app prediction, existing prediction models tend to focus on only one of these factors. Furthermore, our approach is the first to elevate community similarity to a first-class citizen in app usage prediction modeling. Using a detailed 3-week field trial with 35 people along with the analysis of app usage logs of 4,606 active smartphone users worldwide, we demonstrate that the proposed model can not only make more robust application recommendations, but also drive significant smartphone system optimizations.
Wearable partner agent with anthropomorphic physical contact with awareness of clothing and posture
Tomoko Yonezawa, Hirotake Yamazoe
Kansai University, Japan
In this paper, we introduce a wearable partner agent that makes physical contact corresponding to the user's clothing, posture, and detected contexts. Physical contacts are generated by combining haptic stimuli and anthropomorphic motions of the agent. The agent performs two types of behaviors: a) it notifies the user of a message by patting the user's arm, and b) it generates emotional expression by strongly enfolding the user's arm. Our experimental results demonstrated that haptic communication from the agent increases the intelligibility of the agent's messages and familiar impressions of the agent.
15:45 - 16:15 (Lobby) Coffee Break
16:15 - 17:45 (G4) Session 5: Touch and On-Body (chair: Lucy Dunne)
FIDO - Facilitating Interactions for Dogs with Occupations: Wearable Dog-Activated Interfaces
Melody Jackson, Thad Starner, Clint Zeagler
Georgia Institute of Technology, USA
Assistance dogs have improved the lives of thousands of people with disabilities. However, communication between human and canine partners is currently limited. The main goal of the FIDO project is to research fundamental aspects of wearable technologies to support communication between working dogs and their handlers. In this pilot study, the FIDO team investigated on-body interfaces for assistance dogs in the form of electronic textiles and computers integrated into assistance dog vests. We created four different sensors that dogs could activate (based on biting, tugging, and nose gestures) and tested them on-body with three assistance-trained dogs. We were able to demonstrate that it is possible to create wearable sensors that dogs can reliably activate on command.
(Parallel UbiComp sessions: User Experience Design, Location Privacy, Mobile Devices)
Don't Mind Me Touching My Wrist: A Case Study of Interacting with On-Body Technology in Public
Halley Profita, James Clawson, Scott Gilliland, Clint Zeagler, Thad Starner, Jim Budd, Ellen Yi-Luen Do
University of Colorado at Boulder, USA
Wearable technology, specifically e-textiles, offers the potential for interacting with electronic devices in a whole new manner. However, some may find the operation of a system that employs non-traditional on-body interactions uncomfortable to perform in a public setting, which impacts how readily a new form of mobile technology may be received. Thus, it is important for interaction designers to take into consideration the implications of on-body gesture interactions when designing wearable interfaces. In this study, we explore third-party perceptions of a user's interactions with a wearable e-textile interface. This two-pronged evaluation examines the societal perceptions of a user interacting with the textile interface at different on-body locations, as well as the observer's attitudes toward on-body controller placement. We performed the study in the United States and South Korea to gain cultural insights into the perceptions of on-body technology usage.
Sensing Group Proximity Dynamics of Firefighting Teams using Smartphones
Sebastian Feese, Bernt Arnrich, Michael Burtscher, Bertolt Meyer, Klaus Jonas, Gerhard Troester
ETH Zurich, Switzerland
Firefighters work in dangerous and unfamiliar situations under a high degree of time pressure, and thus teamwork is of utmost importance. Relying on trained automatisms, firefighters coordinate their actions implicitly by observing the actions of their team members. Consequently, being out of sight is likely to reduce coordination. The aim of this work is to automatically detect when a firefighter is in sight of other firefighters and to visualize the proximity dynamics of firefighting missions. In our approach, we equip firefighters with smartphones and use the built-in ANT protocol, a low-power communication radio, to scan for nearby devices quickly and efficiently in order to measure proximity to other firefighters. In a second step, we cluster the proximity data to detect moving sub-groups. To evaluate our method, we recorded proximity data of 16 professional firefighting teams performing a real-life training scenario. We manually labeled six randomly selected training sessions, involving 51 firefighters, to obtain 79 minutes of ground truth data. On average, our algorithm assigns each group member to the correct ground truth cluster with 80% accuracy. Considering height information derived from atmospheric pressure signals increases group assignment accuracy to 95%.
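A minimal way to turn pairwise in-range observations like those above into sub-groups is connected-components clustering over the proximity graph. The sketch below is illustrative only (the paper clusters richer, time-varying proximity data); it groups members whose proximity links connect them, directly or transitively:

```python
def proximity_groups(members, links):
    """Partition members into sub-groups via connected components
    (union-find), treating each proximity observation as an edge.

    members: iterable of member ids.
    links: iterable of (a, b) pairs observed in radio proximity.
    Returns a list of sets, one per sub-group.
    """
    parent = {m: m for m in members}

    def find(x):
        # Follow parent pointers to the component root, halving the path.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in links:
        parent[find(a)] = find(b)  # merge the two components

    groups = {}
    for m in members:
        groups.setdefault(find(m), set()).add(m)
    return list(groups.values())
```

For instance, with links A-B, B-C, and D-E among five firefighters, this yields the two sub-groups {A, B, C} and {D, E}, mirroring the "moving sub-groups" the abstract describes.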
17:45 - 23:00 (Uetliberg) Joint ISWC / UbiComp Social Dinner (buses leave at 18:00)



Thursday, 12th of September 2013

08:30 - 10:00 (G4) Session 6: EyeWear Computing (chair: Alois Ferscha)
3D from Looking: Using Wearable Gaze Tracking for Hands-Free and Feedback-Free Object Modelling
Teesid Leelasawassuk, Walterio Mayol-Cuevas
University of Bristol, United Kingdom
This paper presents a method for estimating the 3D shape of an object being observed using wearable gaze tracking. Starting from a sparse environment map generated by a simultaneous localization and mapping (SLAM) algorithm, we use the gaze direction positioned in 3D to extract the model of the object under observation. By letting the user look at the object of interest, and without any feedback, the method determines 3D points-of-regard by back-projecting the user's gaze rays into the map. The 3D points-of-regard are then used as seed points for segmenting the object from captured images, and the calculated silhouettes are used to estimate the 3D shape of the object. We explore methods to remove outlier gaze points that result from the user saccading to non-object points, and methods for reducing the error in the shape estimation. Being able to exploit gaze information in this way enables the user of wearable gaze trackers to do things as complex as object modelling in a hands-free and even feedback-free manner.
(Parallel UbiComp sessions: Health II, Computing in the Home, Social Computing I)
I Know what You are Reading - Recognition of Document Types using Mobile Eye Tracking
Kai Kunze, Andreas Bulling, Yuzuko Utsumi, Yuki Shiga, Koichi Kise
Osaka Prefecture University, Japan
Reading is a ubiquitous activity that many people even perform in transit, such as while on the bus or while walking. Tracking reading enables us to gain more insight into users' expertise level and potential knowledge, towards reading-log tracking and improved knowledge acquisition. As a first step towards this vision, in this work we investigate whether different document types can be automatically detected from visual behaviour recorded using a mobile eye tracker. We present an initial recognition approach that combines special-purpose eye movement features with machine learning for document type detection. We evaluate our approach in a user study with eight participants and five Japanese document types and achieve a recognition performance of 74% using user-independent training.
Eyeglass-based Hands-free Videophone
Shinji Kimura, Masaaki Fukumoto, Tsutomu Horikoshi
NTT DOCOMO, Japan
We propose an eyeglass-based videophone that enables the wearer to make a video call without holding a phone (that is to say, "hands-free") in the mobile environment. The glasses have 4 (or 6) fish-eye cameras to widely capture the face of the wearer, and the images are fused to yield one frontal face image. The face image is also combined with the background image captured by a rear-mounted camera; the result is a self-portrait image without holding any camera device at arm's length. Simulations confirm that 4 fish-eye cameras with 250-degree viewing angles (or 6 cameras with 180-degree viewing angles) can cover 83% of the frontal face. We fabricated a 6-camera prototype and confirmed the possibility of generating the self-portrait image. This system suits not only hands-free videophones but also other applications like visual life logging and augmented reality.
10:00 - 10:30 (Lobby) Coffee Break
10:30 - 11:30 (G4) ISWC Town Hall Meeting

11:35 - 12:35 (G1) Joint ISWC/UbiComp Keynote (chair: Kristof Van Laerhoven)
Wearable Computing: Through the Looking Glass
Thad Starner
Georgia Institute of Technology, USA
Google's Glass has captured the world's imagination, with news articles speculating on it almost every day. Yet why would consumers want a wearable computer in their everyday lives? For the past 20 years, my teams have been creating living laboratories to discover the most compelling reasons. In the process, we have investigated how to create interfaces for technology which are designed to be "there when you need it, gone when you don't." This talk will attempt to articulate the most valuable lessons we have learned, including some design principles for creating "microinteractions" that fit a user's lifestyle.

Bio. Thad Starner is a wearable computing pioneer. He is a Professor in the School of Interactive Computing at the Georgia Institute of Technology and a Technical Lead on Google's Glass, a self-contained wearable computer which was named a Time Magazine Invention of the Year for 2012. Starner was perhaps the first to integrate a wearable computer into his everyday life as an intelligent personal assistant, and he coined the term "augmented reality" in 1990 to describe the types of interfaces he envisioned at the time. Thad has authored over 150 peer-reviewed scientific publications and is an inventor on over 80 United States patents awarded or in process.
12:40 - 14:40LobbyLunch and Poster Session
Click here for the list of all accepted posters
16:00 - 16:30LobbyCoffee Break

17:30 - 18:00in G4Closing Session


ISWC Posters

Posters
Activity Monitoring in Daily Life as an Outcome Measure for Surgical Pain Relief Intervention Using Smartphones
Julia Seiter, Lucian Macrea, Sebastian Feese, Oliver Amft, Bert Arnrich, Konrad Maurer, Gerhard Troester
ETH Zurich, Switzerland We investigate the potential of a smartphone to measure a patient's change in physical activity before and after a surgical pain relief intervention. We show the feasibility of our smartphone system, which derives physical activity from acceleration, barometer and location data to measure the intervention's outcome. In a single-case study, we monitored a pain patient carrying the smartphone before and after a surgical intervention over 26 days. Results indicate significant differences between pre- and post-intervention activity, particularly physical activity in the home environment.
Nanostructured Gas Sensors Integrated into Fabric for Wearable Breath Monitoring System
Hyejin Park, Hosang Ahn, Dong-Joo Kim, Helen Koo
Auburn University, USA This paper presents a technology to design and fabricate nanostructured gas sensors in fabric substrates. Nanostructured gas sensors were fabricated by constructing ZnO nanorods on fabrics including polyester, cotton and polyimide for continuous monitoring of the wearer's breath gas, which can indicate health status. The developed fabric-based gas sensors demonstrated gas sensing by monitoring electrical resistance change upon exposure to acetone and ethanol gases.
Reversible Contacting of Smart Textiles with Adhesive-Bonded Magnets
Klaus Scheulen, Anne Schwarz, Stefan Jockenhoevel
Institut fuer Textiltechnik der RWTH Aachen, Germany The aim of this study was to develop a reversible contacting method based on adhesive-bonded neodymium magnets. To implement this, suitable magnets and adhesives were chosen according to defined requirements, and the conductive bonds between textile and magnet were optimized. For the latter, three different bonds were produced and tested in terms of achievable conductivity and mechanical strength. It is shown that gold-coated neodymium magnets are most appropriate for such a contact. The reproducible electrical resistances are low, with sufficient mechanical strength.
FIREMAN: FIRefighter team brEathing Management system using ANdroid
Fabio Marques, Paulo Azevedo, Joao Paulo Cunha, Manuel Bernardo Cunha, Susana Bras, Jose Maria Fernandes
University of Aveiro, Portugal In this paper we propose FIREMAN, a low-cost system for online monitoring of firefighters' ventilation patterns when using a Self-Contained Breathing Apparatus (SCBA), based on a specific hardware device attached to the SCBA and a smartphone application. The system implementation allows the detection of relevant ventilation patterns while providing a feasible and accurate estimation of SCBA air consumption.
Driving Low-Power Wearable Systems with an Adaptively-Controlled Foot-Strike Scavenging Platform
Vishwa Goudar, Zhi Ren, Paul Brochu, Miodrag Potkonjak, Qibing Pei
University of California, Los Angeles, USA We explore the use of Dielectric Elastomer (DE) micro-generators as a means to scavenge energy from foot-strikes and power wearable systems. While they exhibit large energy densities, DEs must be closely controlled to maximize the energy they transduce. Towards this end, we propose a DE micro-generator array configuration that enhances transduction efficiency, and the use of foot pressure sensors to realize accurate control of the individual DEs. Statistical techniques are applied to customize performance for a user's gait and enable energy-optimized adaptive online control of the system. Simulations based on experimentally collected foot pressure datasets, empirical characterization of DE mechanical behavior and a detailed model of DE electrical behavior show that the proposed system can achieve between 45 and 66 mJ per stride.
Pattern Resistors: Exploring resistive motifs as components for e-embroidery
Ramyah Gowrishankar, Jussi Mikkonen
Aalto ARTS, Finland E-textile practitioners have improvised innovatively with existing off-the-shelf electronics to make them textile-compatible. However, there is a need to further the development of soft materials or parts that could replace regular electronics in a circuit. As a starting point, we look at the possibility of creating a repository of specific motifs with different resistance values that can be easily incorporated into e-embroidery projects and used instead of normal resistors. The paper describes our larger objective and gives an overview of a first experiment comparing the resistance values of a simple pattern embroidered multiple times with conductive yarn, to observe its behavior and reliability.
Retrofitting Smartphones to be used as Particulate Matter Dosimeters
Matthias Budde, Pierre Barbera, Rayan El Masri, Till Riedel, Michael Beigl
Karlsruhe Institute of Technology (KIT), Germany This work discusses ways of measuring particulate matter with mobile devices. Solutions using a dedicated sensor device are presented, along with a novel method of retrofitting a sensor to a camera phone without the need for electrical modifications. Instead, the flash and camera of the phone are used as the light source and receptor of an optical dust sensor, respectively. Experiments evaluating the accuracy are presented.
Prior Knowledge of Human Activities from Social Data
Zack Zhu, Ulf Blanke, Alberto Calatroni, Gerhard Troester
ETH Zurich, Switzerland We explore the feasibility of utilizing large, crowd-generated online repositories to construct prior knowledge models for high-level activity recognition. Towards this, we mine the popular location-based social network, Foursquare, for geo-tagged activity reports. Although unstructured and noisy, we are able to extract, categorize and geographically map people's activities, thereby answering the question: what activities are possible where? Through Foursquare text only, we obtain a testing accuracy of 59.2% with 10 activity categories; using additional contextual cues such as venue semantics, we obtain an increased accuracy of 67.4%. By mapping prior odds of activities via geographical coordinates, we directly benefit activity recognition systems built on geo-aware mobile phones.
Can I Wash It? : The Effect of Washing Conductive Materials Used in Making Textile Based Wearable Electronic Interfaces
Clint Zeagler, Scott Gilliland, Stephen Audy, Thad Starner
Georgia Institute of Technology, USA We explore the washability of conductive materials used in creating traces and touch sensors in wearable electronic textiles. We perform a wash test measuring the change in resistivity after each of 10 washing cycles for conductive traces constructed using two types of conductive thread, conductive ink, and combinations of thread and ink.
Detecting Strumming Action While Playing Guitar
Soichiro Matsushita, Daisuke Iwase
Tokyo University of Technology, Japan In this paper we describe a wristwatch-like device using a 3-axis gyro sensor to determine how a player is strumming the guitar. The device was worn on a right-handed player's right hand to evaluate the strumming action, which is important for playing the guitar musically in terms of the timing and strength of notes. With a newly developed algorithm that identifies the timing and strength of the motion when the guitar strings are strummed, beginners and experienced players were clearly distinguished without hearing the sounds. Beginners as well as intermediate-level players showed a fairly large variation in the maximum angular velocity around the upper arm for each strum. Since the developed system reports the evaluation results through a graphical display as well as sound effects in real time, players may improve their strumming action without playing back the performance.
An Underwater Wearable Computer for Two Way Human-Dolphin Communication
Daniel Kohlsdorf, Scott Gilliland, Peter Presti, Denise Herzing, Thad Starner
Georgia Institute of Technology, USA Research in dolphin cognition and communication in the wild is still a challenging task for marine biologists. Most problems arise from the uncontrolled nature of field studies and the challenges of building suitable underwater research equipment. We present a novel underwater wearable computer enabling researchers to engage in an audio-based interaction between humans and dolphins. The design requirements are based on a research protocol developed by a team of marine biologists associated with the Wild Dolphin Project.



ISWC Exhibition Designs

Designs
Garment for rapid prototyping of pose-based applications
Jacob Dennis, Robert Lewis, Tom Martin, Mark Jones, Kara Baumann, John New, Taylor Pearman
Virginia Tech, USA In this paper, we present a versatile smart garment framework for exploring wearable computing applications based upon a user's pose. We have developed a loose-fitting, self-contained smart garment that reports body pose and movement. The current version of the smart garment has been compared to laboratory-based optical motion capture systems and shown to be within 60 mm of them. This tradeoff in accuracy is acceptable given the loose-fitting nature of the system, whose design targets integration into everyday clothing.
Garment with Stitched Stretch Sensors that Detects Breathing
Mary Ellen Berglund, Guido Gioberto, Crystal Compton
University of Minnesota, USA The concept motivating this piece was to design a comfortable, everyday athletic garment incorporating a breathing sensor to monitor the activity of crewmembers on NASA missions to the International Space Station and beyond.
EEG Data Visualising Pendant For Use In Social Situations
Rain Ashford
Goldsmiths College, University of London, UK The EEG Visualising Pendant is intended for use in awkward or intense social situations to indicate when the wearer’s attention is waning. The pendant uses an EEG (Electroencephalography) headset that sends data to an LED (Light Emitting Diode) pendant to visualise the wearer’s EEG attention and meditation data to themselves and others.
AVAnav: Helmet-Mounted Display for Avalanche Rescue
Jason O. Germany
University of Oregon, USA AVAnav is a wearable interface prototype for search and rescue personnel that serves to reduce the time to locate buried avalanche victims. In keeping with the theme ‘from Mobile to Wearable’, this research examines the use of existing handheld avalanche transceivers and proposes a new helmet-mounted system that is more contextually appropriate for the activity of search and rescue in snow covered terrain.
A Wearable Sensing Garment to Detect and Prevent Suit Injuries for Astronauts
Crystal Compton, Reagan Rockers, Thanh Nguyen
University of Minnesota, USA Astronauts must wear a heavy, rigid, and cumbersome space suit during Extravehicular Activity (EVA) and during EVA training that takes place in the Neutral Buoyancy Lab (NBL). While the suit is a life-saving portable environment, it can also cause injuries to the wearer. This is problematic, especially during the long period of training that precedes a mission. In order to resolve this issue, designers need a better understanding of the relationship between the suit and the body in order to locate and resolve sources of injury or restriction. A wearable sensing garment was developed to explore two different approaches to detecting pressure on the body resulting directly from contact with the Hard Upper Torso (HUT) portion of the space suit. This garment uses two different types of sensors to collect data that can be used in identifying and resolving sources of injury and restriction in the space suit.
Haptic Mirror Therapy Glove
James Hallam
Georgia Institute of Technology, USA This paper describes the creation of a proof-of-concept design for an interactive glove that augments the mirror therapy therapeutic protocol in the treatment of a paretic limb following a stroke. The glove has been designed to allow the user to stimulate the fingertips of their affected hand by tapping the fingers of their unaffected hand, using Force Sensing Resistors to trigger Linear Resonance Actuators on the corresponding fingers. This paper outlines the design considerations and methods used to create the glove, and discusses the potential for further work in the pursuit of a clinical trial.
Garment Body Position Monitoring and Gesture Recognition
Sahithya Baskaran, Norma Easter, Cameron Hord, Emily Keen, Mauricio Uruena
Georgia Institute of Technology, USA This paper covers the development of a garment for body position monitoring and gesture recognition. We developed a comfortable, unobtrusive, textile-based system that can be used to monitor the wearer's arm position in real time. This involved testing for the best sensor placement and the creation of a patch-based textile pattern for sensor stability. This Lilypad-based system delivers a low-profile, wireless garment that is fully functional in zero-gravity environments, is not constrained by the limitations of stationary motion input devices, and outputs an intuitive visualization of sensor data in real time. Created for the Human Interface Branch of NASA, the concepts within this garment can be used to monitor which tasks might lead to repetitive stress injuries or fatigue, and to capture measures such as reaction time and reach envelope.
The Photonic Bike Clothing IV for Cute Cyclist
Jiyoung Kim, Sunhee Lee
Dong-A University, South Korea In this paper, we describe Photonic Bike Clothing IV: for cute cyclist. The concept of the design is the combination of formal and functional garments. The garments are made using the Lilypad Arduino kit, heating pads, and solar cells to enable the functionality of this clothing during city riding.
Strokes & Dots
Valérie Lamontagne
3lectromode Strokes&Dots is a micro-collection using DIY and open design practices aimed at fostering creative innovation and advancement in the design aesthetics of wearable technologies and fashion-tech. This paper explores the current state of the art of wearable technologies; the 3lectromode platform; and the key goals in creating Strokes&Dots as an advancement platform for present and future fashion-tech design.
Fiber Optic Corset Dress
Rachel Reichert, James Knight, Lisa Ciafaldi, Keith Connelly
Cornell University, USA This corset dress was designed to illustrate what a fictitious celebrity performer will wear in the future. It features an illuminated fiber optic fabric embellishment. The corset base is made out of organic cotton and hemp silk using traditional and modern professional corset-making techniques. The silhouette created is an extreme hourglass figure, accented by the illuminated strips draped around it. The corset dress is part of the “Cybelle Horizon” fashion collection by Rachael Reichert, based on a short story of the same name, written by Rachael Reichert.
Play the Visual Music
Helen Koo
Auburn University, USA The dress was developed for musicians who play and sing at concerts and other types of events. This dress is intended to provide multi-sensory stimulations to audience members, especially people with hearing impairments. To incorporate this function, electroluminescent (EL) wire was connected to a sound inverter that can control the level of sensitivity with a battery pack. The dress was ergonomically designed for mobility and comfort during a performance.
Brace Yourself – The World’s Sexiest Knee “Brace”
Crystal Compton and Guido Gioberto
University of Minnesota, USA Knee braces can be bulky, cumbersome, and unattractive to wear, and they are typically not very aesthetically pleasing. Wearing clothing that exposes the knee, such as a skirt or a dress, can present problems with wearing a brace, as the intention of wearing a dress or a skirt is often aesthetic, for dressing up on more formal occasions. This knee "brace" is a playful exploration of the opposing expectations of aesthetics and function in knee support. It is created from a pair of stockings and makes use of a decorative stitch that forms a "seam" on the back of the leg but also functions as a bend sensor. When harmful movements are sensed, the stockings trigger a vibrotactile stimulus that alerts the user to stop or to move differently. "Brace Yourself" offers a sexy interpretation of a clinical product, playing on a stereotype of sexiness to subvert a stereotype of medical devices.
Lüme
Elizabeth E. Bigger, Luis E. Fraguada
Jorge & Esther, Built by Associative Data Fashion design with embedded electronics is a rapidly expanding field at the intersection of fashion design, computer science, and electrical engineering. Each of these practices can require a deep level of knowledge and experience in order to produce practical garments that are suitable for everyday use. Lüme is an electronically infused clothing collection which integrates dynamic, user-customizable elements driven wirelessly from a common mobile phone. The design and engineering of the collection focus on integrating the electronics in such a way that they can be easily removed or embedded as desired, thus creating pieces that are easy to wash and care for. Subsequent iterations of the collection will focus on low-power electronics, alternative power sources, local and global positioning, potential applications, and collaborative computing.
E-Shoe: A High Heeled Shoe Guitar
Alex Murray-Leslie, Melissa Logan, Max Kibardin
University of Technology, Sydney, Australia This paper explores Fashion Acoustics and wearable computers as fashionable and practical musical instruments for live multi-modal performance. The focus lies on Fashion Acoustics in the form of the E-Shoe: a high-heeled shoe guitar, made by the art collective Chicks on Speed, designer Max Kibardin and technologist Alex Posada. We introduce the idea of building miniaturised musical computer wearables using industrial shoe-manufacturing techniques, where the electronics are embodied in the design, promoting a more creative application of technology on the body for live intermedia performances. The approach is illustrated by describing design aims, methods and usages in theatrical settings, and by detailing practice-driven research experiments in this cutting-edge field of Fashion Acoustics.

Gold Sponsors
Microsoft
Google
Silver Sponsors
Intel
Bronze Sponsors
Nokia
Supported by
ACM SIGCHI sigmobile
Locally organized by
ETH Zurich