Joint session between eSAME and S-Cube conference

Schedule : 14H20 – 15H40
Chairman : Benoît Miramond
Place : Room 121

Topic : This session will discuss new technologies for augmenting human senses using IoT.


14H20 – 14H40 : Invited paper
“Towards P300-based Mind-Control: a Non-Invasive, Quickly Trained BCI for Remote Car Driving”

Valerio F. Annese, Giovanni Mezzina, Daniela De Venuto

Dept. of Electrical and Information Engineering, Politecnico di Bari
Via Orabona 4, 70125 Bari, Italy

Abstract : This paper presents a P300-based Brain Computer Interface (BCI) for the control of a mechatronic actuator (i.e. wheelchairs, robots or even cars), driven by EEG signals for assistive technology. The overall architecture is made up of two subsystems: the Brain-to-Computer System (BCS) and the mechanical actuator (a proof of concept of the proposed BCI is shown using a prototype car). The BCS is devoted to signal acquisition (6 EEG channels from a wireless headset), visual stimuli delivery for P300 evocation, and signal processing. Due to the inter-subject variability of the P300, a first stage of Machine Learning (ML) is required. The ML stage is based on a custom algorithm (t-RIDE) which allows a fast calibration phase (only ~190 s for the first learning). The BCI presents a functional approach for time-domain feature extraction, which reduces the amount of data to be analyzed. The real-time operation is based on a trained linear hyper-dimensional classifier, which combines high P300 detection accuracy with low computation times. The experimental results, achieved on a dataset of 5 subjects (age: 26 ± 3), show that: (i) the ML algorithm allows the spatio-temporal characterization of the P300 in 1.95 s using 38 target brain visual stimuli (for each direction of the car path); (ii) the classification reached 80.5 ± 4.1% single-trial detection accuracy in only 22 ms (worst case), allowing real-time driving. The BCI system described here can also be used with other mechatronic actuators.
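To make the abstract's run-time decision rule concrete: a trained linear classifier reduces, at run time, to a dot product between the extracted time-domain features of one EEG epoch and a learned weight vector. The sketch below is a minimal, hypothetical illustration; the feature values, weights, and threshold are assumptions for demonstration, not the paper's t-RIDE-trained parameters.

```python
def p300_score(features, weights, bias):
    """Linear decision function: dot(features, weights) + bias."""
    return sum(f * w for f, w in zip(features, weights)) + bias

def is_target(features, weights, bias, threshold=0.0):
    """Single-trial P300 detection: a positive score means target stimulus."""
    return p300_score(features, weights, bias) > threshold

# Hypothetical 4-dimensional time-domain feature vectors for one EEG epoch.
weights = [0.8, -0.3, 0.5, 0.1]            # learned during the calibration phase
bias = -0.2
target_epoch = [1.2, 0.1, 0.9, 0.4]        # P300-like response
nontarget_epoch = [0.1, 0.8, 0.05, 0.0]    # no P300 response

print(is_target(target_epoch, weights, bias))     # detected as target
print(is_target(nontarget_epoch, weights, bias))  # rejected
```

Because the per-trial cost is a single dot product, this kind of classifier fits easily within the 22 ms worst-case budget reported in the abstract.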

14H40 – 15H00 : “iHouse: A Voice-Controlled, Centralized, Retrospective Smart Home”

Benjamin Völker, Tobias Schubert, and Bernd Becker

University of Freiburg, Freiburg 79110, Germany

Abstract : Speech recognition in smart home systems has become popular in both research and consumer areas. This paper introduces an innovative concept for a modular, customizable, voice-controlled smart home system. The system combines the advantages of distributed and centralized processing to enable a secure as well as highly modular platform, and allows existing non-smart components to be added retrospectively to the smart environment. To interact with the system in the most comfortable way – in particular without additional devices like smartphones – voice control was added as the means of choice. The task of speech recognition is partitioned into decentralized Wake-Up-Word (WUW) recognition and centralized continuous speech recognition to enable flexibility while maintaining security. This is achieved with a novel WUW algorithm, suitable for execution on small microcontrollers, which uses Mel Frequency Cepstral Coefficients as well as Dynamic Time Warping. A rejection rate of up to 99.93% was achieved, justifying the use of the algorithm as a voice trigger in the developed smart home system.
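For readers unfamiliar with Dynamic Time Warping: the core of a DTW-based WUW matcher is a dynamic-programming alignment between the incoming feature sequence and a stored template, and the utterance is accepted only when the warped distance falls below a threshold. The sketch below operates on 1-D sequences for brevity; a real system like the one in the paper would compare sequences of MFCC frame vectors, e.g. with a per-frame Euclidean distance. All names and thresholds here are illustrative assumptions.

```python
import math

def dtw_distance(seq, template):
    """Dynamic Time Warping distance between two 1-D sequences."""
    n, m = len(seq), len(template)
    # cost[i][j]: minimal cumulative distance aligning seq[:i] with template[:j]
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq[i - 1] - template[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # compress seq
                                 cost[i][j - 1],      # stretch seq
                                 cost[i - 1][j - 1])  # one-to-one match
    return cost[n][m]

def is_wake_up_word(seq, template, threshold):
    """Accept the utterance only if it warps onto the template closely enough."""
    return dtw_distance(seq, template) < threshold

# A time-stretched utterance still matches its template with zero DTW cost:
print(dtw_distance([1, 2, 2, 3], [1, 2, 3]))  # 0.0
```

The quadratic DP table is small for short wake-up words, which is what makes the approach feasible on small microcontrollers, as the abstract notes.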

15H00 – 15H20 : “IoT for outdoor sport safety”

Riccardo De Filippi

Motorialab, Trento, Italy

Abstract : This talk will present Motorialab’s experience in the world of IoT and wearable technology for outdoor sport. After two years on the market, the Trento-based spinoff of the Bruno Kessler Foundation will illustrate, with two examples, the needs and interest of international companies and the tourism industry in new technologies. The first example is a prototype of an intelligent freeride backpack, where the goal was to embed wireless communication and sensing capabilities in sport equipment for event detection, enhancing skiers’ experience and safety. The second example is HI-FIS, a smart ski slope: a Wi-Fi mesh network, a video camera set-up, and plug-and-play wearable sensors, whose output (video, 3D mapping, georeferenced data coming from WS) is delivered to the user on different interfaces. The idea was to create “take home your performance” videos and statistics, as in a real entertainment fun park. Is the world of outdoor sport ready for this kind of innovation?


15H20 – 15H40 : “SHelmet: an intelligent self-sustaining multi-sensor smart helmet for bikers”

Michele Magno*°, Angelo D’Aloia*, Tommaso Polonelli*, Lorenzo Spadaro* and Luca Benini*

*DIE, Università di Bologna, Italy; °D-ITET, ETH Zürich, Switzerland

Abstract : This paper presents the design of a wearable system that transforms a helmet into a smart, multi-sensor connected helmet (SHelmet) to improve motorcycle safety. Low-power design and self-sustainability are key to the usability of our helmet, avoiding frequent battery recharges and dangerous power losses. Hidden in the helmet structure, the designed system is equipped with a dense sensor network including accelerometer, temperature, light, and alcohol gas level sensors; in addition, a Bluetooth Low Energy module interfaces the device with an on-vehicle IR camera and, eventually, the user’s smartphone. To keep the driver focused, the user interface consists of a small non-invasive display combined with a speech recognition system. The system architecture is optimized for aggressive power management, featuring an ultra-low-power wake-up radio and fine-grained software-controlled shutdown of all sensing, communication, and computing sub-systems. Finally, a multi-source energy harvesting module (solar and kinetic) performs high-efficiency power recovery, improving battery management and achieving self-sustainability. SHelmet supports rich context-awareness applications: breath alcohol control, real-time vehicle data, sleep and fall detection, and data display. Experimental results show that it is possible to achieve self-sustainability and to demonstrate the functionality of the developed node.
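The self-sustainability claim can be illustrated with a simple duty-cycle energy budget: the node is sustainable when its average consumption under aggressive power management stays below the average harvested power. The sketch below uses purely illustrative numbers, not measurements from the paper; the wake-up radio is what allows the very low sleep power while the node stays reachable.

```python
def average_power_mw(p_active_mw, p_sleep_mw, duty_cycle):
    """Average consumption of a duty-cycled node (duty_cycle in [0, 1])."""
    return duty_cycle * p_active_mw + (1.0 - duty_cycle) * p_sleep_mw

def is_self_sustaining(p_active_mw, p_sleep_mw, duty_cycle, p_harvest_mw):
    """Sustainable when average consumption does not exceed harvested power."""
    return average_power_mw(p_active_mw, p_sleep_mw, duty_cycle) <= p_harvest_mw

# Illustrative assumptions: 120 mW active, 0.05 mW deep sleep with the
# wake-up radio listening, 1% duty cycle, 5 mW average solar + kinetic harvest.
print(average_power_mw(120.0, 0.05, 0.01))          # ~1.25 mW average draw
print(is_self_sustaining(120.0, 0.05, 0.01, 5.0))   # sustainable at 1% duty cycle
```

The same arithmetic shows why fine-grained shutdown matters: at a 50% duty cycle the average draw would far exceed a few milliwatts of harvested power.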