
Understanding Human Behaviour with AI

08/08/2023 | Planet research | FoE Information, Communication & Computing

By Birgit Baustädter

Elisabeth Lex’s research combines computer science methods with social science approaches, searching for clues to understand framing, polarisation and opinion clusters.

Elisabeth Lex uses computer science to understand human behaviour. Source: agsandrew - Adobe Stock

More and more often, people are getting the feeling that society is increasingly polarised – regarding measures introduced during the Covid-19 pandemic, the question of vaccinations, elections, or environmental protection. TU Graz computer scientist Elisabeth Lex has been monitoring this trend over the years from a research perspective. Her focus is on various machine learning and artificial intelligence approaches, with the aim of better understanding human behaviour, and on recommender systems designed to make life easier. In a range of projects, Lex cooperates with social science researchers, who interpret the data she identifies. "That's what's exciting about this combination: learning algorithms enable us to recognise structures and patterns in large volumes of data that humans wouldn't be able to get an overview of on their own. We can then interpret and analyse the patterns and structures found with the aid of social science models and theories."

One example is a joint project with the University of Graz, in which Lex investigated whether statements made publicly on social media (in this case the text-based platform Twitter) are consistent with answers given in a survey. "What comes into play here is what we call social desirability bias," Lex explains. "People have a tendency to adapt their statements to fit more closely with what the crowd is saying, and to provide answers which they believe would be supported by the majority of society." The study, which was concerned with measures introduced to combat the Covid-19 pandemic, yielded interesting results. Firstly, it was possible to identify a clear overlap between opinions expressed online and answers provided in the survey. "So we could see that our computer science methods definitely enable us to represent social phenomena." However, it was important to take into account that in a survey, responses are given to clearly defined questions and therefore reflect a person's opinion clearly and directly. On a Twitter account, by contrast, all of the tweets on a particular topic need to be considered in order to derive a kind of median opinion. "We also saw that people who viewed the measures positively were much more likely to share their accounts with us and the research project." Due to social desirability bias, the picture could be completely different today, since it has become more socially acceptable to question and criticise Covid-19 restrictions.
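
To make the aggregation step concrete, here is a minimal sketch of how an account's many tweets could be collapsed into a single median opinion. It is an illustration only, not the study's actual pipeline: the data, scores and function names are invented for the example, and the sentiment scoring itself is assumed to come from some upstream classifier.

```python
# A minimal sketch, assuming tweets have already been scored for
# sentiment in [-1, 1] by some classifier: collapse each account's
# topic-related tweets into a single "median opinion" that can be
# compared with that person's survey answer. All names and numbers
# here are invented for illustration.
from statistics import median

# Hypothetical input: per-account sentiment scores for tweets on the
# Covid-19 measures.
tweets_by_account = {
    "user_a": [0.6, 0.8, 0.4, -0.1],  # mostly supportive
    "user_b": [-0.7, -0.5, -0.9],     # mostly critical
}

def median_opinion(scores: list[float]) -> float:
    """One summary value per account, robust to a few outlier tweets."""
    return median(scores)

for account, scores in tweets_by_account.items():
    print(account, round(median_opinion(scores), 2))
```

A median is robust against a handful of outlier tweets, which makes it a natural choice for summarising an account's overall stance.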

Polarisation – a Difficult Concept to Understand

Polarisation is one of the key focuses of Elisabeth Lex's research. But it is an extremely complex topic to investigate using artificial intelligence, as Lex explains: "Polarisation is a concept that people understand very well. But – and we recognised this at the beginning of our research – different research communities do not have common definitions of terms like this." As a consequence, it is very difficult to teach the concept to an algorithm. Algorithms typically learn patterns and structures from clearly defined, large-scale examples and can then apply what they have learned to new sets of data. In the case of polarisation, this process – called supervised learning – cannot be applied without some preparation. Instead, the researchers, supported by machine learning models, search for clusters – strongly linked accounts that frequently interact with one another and post on similar topics. "When we identify such clusters, we carry out content analyses – for example, a sentiment analysis – to understand whether people have a positive, negative or neutral attitude towards a topic."
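
The two-step procedure – find densely linked clusters, then run a content analysis per cluster – can be sketched with off-the-shelf tools. The following is a minimal illustration, not the team's actual pipeline: it assumes an interaction graph is already available and uses placeholder sentiment scores.

```python
# A minimal sketch of the two-step procedure, not the team's actual
# pipeline: (1) detect strongly linked account clusters in an
# interaction graph, (2) attach a sentiment-based stance to each
# cluster. The graph and the sentiment scores are placeholders.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
from statistics import mean

# Hypothetical interaction graph: an edge means two accounts frequently
# interact (retweets, replies, mentions).
G = nx.Graph()
G.add_edges_from([
    ("a", "b"), ("b", "c"), ("a", "c"),  # one tightly linked group
    ("x", "y"), ("y", "z"), ("x", "z"),  # another
    ("c", "x"),                          # a single bridge between them
])

# Step 1: modularity-based community detection finds the clusters.
clusters = greedy_modularity_communities(G)

# Step 2: content analysis per cluster; here a stand-in score per
# account in [-1, 1] (negative, neutral or positive attitude).
sentiment = {"a": 0.7, "b": 0.5, "c": 0.6, "x": -0.6, "y": -0.8, "z": -0.4}

for i, cluster in enumerate(clusters):
    stance = mean(sentiment[node] for node in cluster)
    print(f"cluster {i}: {sorted(cluster)} -> mean sentiment {stance:+.2f}")
```

In a real analysis, the scores would come from a sentiment classifier applied to each account's posts, and the clustering would additionally take topical similarity into account, as described above.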

Framing – Catastrophe or Warming?

Then comes perhaps the most relevant question: why do people have such polarised opinions? "Framing plays a major role here," Lex explains. The same content can be packaged differently, depending on how it is expressed. For instance, when talking about climate change, people may use the term ‘warming’ or alternatively ‘catastrophe’. "Warming sounds positive to us. We like warmth, so it can't be all that bad. But the word 'catastrophe' triggers completely different mental images." In this area in particular, a lot has changed in recent years, as Lex discovered. In the past, media influenced by conspiracy theories could easily be distinguished from reputable, quality media by their choice of language and visual appearance. Now they are much more similar and use the same techniques (e.g. neutrally written articles that include references to sources) but different framing. "For instance, in our analyses we saw that when discussing Covid-related topics, magazines with an affinity for conspiracy theories very often used the frames of faith and religion for their arguments, while reliable media based their arguments much more on science."

Again, the media were examined using machine learning methods. Large language models, like those familiar from ChatGPT, work in the background. "It is remarkable how powerful these models are now and how easily they can navigate different languages," Lex points out. The researchers adapted language models that were already available. Their most recent success came at this year's edition of the internationally respected SemEval Challenge, which focused on detecting frames in texts in different languages with very little training data or none at all – known as "few-shot" or "zero-shot" learning. "My team won first place for recognising frames in Spanish, and our approach was among the leaders in eight other languages," Lex reports.
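
In the zero-shot setting, a pretrained multilingual model assigns frame labels it was never explicitly trained on. The sketch below uses the Hugging Face transformers zero-shot classification pipeline as a generic stand-in; the model choice and the frame labels are assumptions made for this example, not the team's SemEval submission.

```python
# A minimal zero-shot sketch using the Hugging Face transformers
# library as a generic stand-in. This is NOT the team's SemEval
# submission; the model choice and the frame labels are assumptions
# made for this example.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="joeddav/xlm-roberta-large-xnli",  # multilingual NLI backbone
)

# Candidate frames, inspired by those mentioned in the article.
frames = ["science", "faith and religion", "economy", "security"]

# A Spanish example, the language in which the team took first place.
text = "Los estudios clínicos demuestran que la vacuna es segura y eficaz."

result = classifier(text, candidate_labels=frames)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```

Because the underlying model is trained for natural language inference across many languages, the same code handles Spanish, German or English input without language-specific training data.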

Fear of a Dystopian Future

The huge potential of artificial intelligence has also stoked fears regarding its impact. How does Lex view this topic? "Recently, algorithms have been able to identify complex relationships in large volumes of data and make predictions, but – as yet – they don't possess a fundamental understanding of meaning and context in human language. Generally, I'm not that afraid of an AI-driven killer robot. But if AI-generated content becomes ever more prevalent and more difficult to distinguish from 'authentic' content, there is a greater risk that it will be used in a targeted fashion to spread misinformation and influence opinions and democratic processes. Admittedly, people are very good at reading between the lines and identifying motives behind statements. That said, the development of technology designed to detect spurious AI-generated content will be very important."

Contact

Elisabeth LEX
Assoc.Prof. Dipl.-Ing. Dr.techn.
Institute of Interactive Systems and Data Science
Sandgasse 36
8010 Graz
Tel.: +43 316 873 30841
elisabeth.lex@tugraz.at