Historically, a Brain-Computer Interface (BCI) is a system that measures central nervous system (CNS) activity and converts it into artificial output that replaces, restores, enhances, supplements, or improves natural CNS output (Wolpaw et al., 2012). In the past, BCI applications were mainly developed for severely disabled persons, providing new channels for communication and body control, but in recent years BCI research has attracted a much broader community of researchers. This holds especially for so-called passive BCIs, which are neither consciously controlled by the user nor driven by external stimulation. These systems derive their outputs from ongoing brain activity in order to enrich human-machine interaction with implicit information about the user's current state. Access to the user's ongoing brain activity enables applications in a variety of domains, such as brain-activity-based gaming, workload assessment, brain-activity-based biometrics, neuromarketing, and neuroergonomics.
Motor Imagery (MI) is a task that has been used to drive brain plasticity and motor learning in several fields, including sports, motor rehabilitation, and brain-computer interface (BCI) research. A BCI is a device that translates brain signals into control signals, providing severely motor-impaired persons with an additional, non-muscular channel for communication and control. Many past studies have shown that the brain activity changes associated with MI can serve as useful control signals for BCIs, and that using more vivid and engaging MI tasks instead of simple hand/finger-tapping tasks can improve BCI performance. In several imaging studies we found stronger and more distinctive brain activity in a broader network of brain areas. For example, imagining a complex action requires not only motor-related processing but also visuo-spatial imagery involving a fronto-parietal network. Neural activity during MI of reach-to-grasp movements depends on the type of grasp, recruiting a network that includes posterior parietal and premotor regions. Furthermore, we found increased activation in parietal and frontal regions during imagery of emotion-laden objects and sports activities. These results indicate that visuo-spatial cognition and action affordances play a significant role in MI, eliciting distinctive brain patterns, and suggest ways to improve the performance of future BCI systems. To substantiate these first findings, further research on (sports) motor imagery and its neural correlates is ongoing.
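The text above does not specify a decoding method, but a common way to turn MI-related EEG into BCI control signals is Common Spatial Patterns (CSP) spatial filtering followed by a linear classifier. The minimal Python sketch below uses MNE and scikit-learn; the epoch layout, parameters, and synthetic data (standing in for real band-pass-filtered EEG) are illustrative assumptions, not the group's actual setup.

import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# Synthetic stand-in for band-pass-filtered EEG epochs:
# 80 trials x 22 channels x 500 samples (hypothetical layout).
rng = np.random.default_rng(0)
X = rng.standard_normal((80, 22, 500))
y = np.tile([0, 1], 40)  # left- vs. right-hand MI labels

clf = Pipeline([
    ("csp", CSP(n_components=4, log=True)),  # spatial filters -> log band power
    ("lda", LinearDiscriminantAnalysis()),   # linear classifier on CSP features
])
print(cross_val_score(clf, X, y, cv=5).mean())  # ~0.5 (chance) on pure noise

On real two-class MI data, pipelines of this kind typically perform well above chance, which is what makes MI a practical control signal.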
EEG-neurofeedback is a method for self-regulating one's own brain activity in order to directly alter the neural mechanisms underlying cognition and behavior: people can learn to control some of their brain functions in real time. The method is currently used for many different applications, including basic research (to deduce the role of specific neural events in cognition and behavior), peak-performance training (to enhance cognitive performance in healthy subjects), and therapy (to help patients normalize deviant brain activity or to help physically disadvantaged people restore motor functions). Simple 2D EEG-neurofeedback has been in use for quite some time, while 3D EEG-neurofeedback in the form of virtual reality has gained attention over the last few years. While a vast number of 2D methods and paradigms have already been tested and validated, the 3D counterpart still holds high potential for new treatment methods. This project aims to develop and test novel 3D neurofeedback visualizations, including VR environments.
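As a concrete illustration of the feedback loop described above, the sketch below estimates EEG band power in a short sliding window and maps it to a score that could drive a 2D or 3D/VR visualization. The alpha band, sampling rate, baseline, and scaling are illustrative assumptions, not the lab's actual protocol.

import numpy as np
from scipy.signal import welch

FS = 250            # sampling rate in Hz (assumed)
BAND = (8.0, 12.0)  # alpha band, a common neurofeedback target (assumed)

def band_power(window: np.ndarray, fs: int, band: tuple) -> float:
    """Mean power of one EEG channel within `band`, via Welch's method."""
    freqs, psd = welch(window, fs=fs, nperseg=min(len(window), fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[mask].mean())

def feedback_value(power: float, baseline: float) -> float:
    """Map band power relative to a resting baseline to a 0..1 display score."""
    return float(np.clip(power / (2.0 * baseline), 0.0, 1.0))

# One loop iteration, with random noise standing in for a 1 s EEG window:
baseline = 1.0  # would be estimated from a resting-state recording
window = np.random.randn(FS)
print(feedback_value(band_power(window, FS, BAND), baseline))

In a running system, this score would update the visualization (e.g., the size or brightness of a VR object) several times per second.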
With the recent evolution of consumer-level Mixed Reality (MR) devices, MR tutorials have gained popularity in many areas of application. For example, remote customer support, do-it-yourself tutorials, and industrial manufacturing applications have recently adopted MR visualizations for guiding users through the steps of a tutorial. MR instructions are superior to classical paper- or video-based instructions with respect to both completion time and the number of errors made while following a tutorial. However, existing MR tutorials focus on presenting general instructions that are shown permanently; the specific needs and mental states of individual users are not considered, which often results in lower learning rates and higher fatigue. In one project we aim to detect high mental workload (MWL) and mental fatigue (MF) in participants performing a cognitively demanding task, such as an AR-based piano tutorial. The intelligent tutoring system will adapt the presentation by varying the medium and the level of detail used to provide visual instructions. This project is a cooperation with the Institute of Computer Graphics and Vision (ICG, TUG). Another project focuses on the detection of fear, specifically arachnophobia, in order to adapt a specially designed VR environment to the user's fear state (online detection of fear-related brain markers).
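As a hypothetical sketch of the adaptation step described above (the tutoring system's actual design is not specified here), a detected workload/fatigue state could select the medium and level of detail of the next MR instruction. All state labels, thresholds, and presentation options below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class UserState:
    workload: float  # 0..1, e.g. from an EEG-based MWL classifier (assumed)
    fatigue: float   # 0..1, e.g. from an EEG-based MF classifier (assumed)

def next_instruction_style(state: UserState) -> dict:
    """Pick presentation parameters for the upcoming tutorial step."""
    if state.fatigue > 0.7:
        # Fatigued users get minimal content and a suggested break.
        return {"medium": "short video", "detail": "minimal", "pause": True}
    if state.workload > 0.7:
        # Overloaded users get more detailed, step-by-step guidance.
        return {"medium": "step-by-step overlay", "detail": "high", "pause": False}
    return {"medium": "text hint", "detail": "low", "pause": False}

print(next_instruction_style(UserState(workload=0.8, fatigue=0.3)))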
Affective Computing & Arts
WS 2024/25: No topics available.
Current Members:
George Zenkovskiy, MSc (PhD student)
Suktipol Kiatthaveephong, incoming PhD student (VISTEC, Thailand)
Alexander Haupt (Lab assistant)
Marharyta Papakina (Lab tutor)
Former Members:
Daniele Lozzi
Florian Maitz
Marharyta Papakina
Sinem Demirkan
Giovanna Mastrati
Giulia Pezzutti
Giulia Bressan
Monica Crotti
Alice Grazia
Laura Lizama
Alice Valenti
Federica Armani
Institute of Neural Engineering
Stremayrgasse 16/IV
8010 Graz, Austria
+43 316 873 30713
s.wriessnegger@tugraz.at