Abstract
Questionnaires are essential for measuring self-reported attitudes, beliefs, and behaviour in many research fields. Semantic similarity between questions is recognized as a source of covariance in human response data, implying that response patterns partly arise from the questionnaire itself. A practical method to assess the influence of semantic similarity could significantly facilitate the design of questionnaires and the interpretation of their results. The current study presents a novel method for estimating the influence of semantic similarity in questionnaires with Likert-scale responses. The method represents responses as natural language sentences combining the statement and the response option and uses the Sentence-BERT algorithm to estimate a semantic similarity matrix between them. Synthetic response data are generated using the semantic similarity matrix and a noise parameter as input. Synthetic data can then be analysed using the same tools as human survey data, making the comparison straightforward. The method was tested with a questionnaire measuring the acceptance of automated driving. Synthetic data explained 40% of the correlations in the human response data, indicating that semantic similarity substantially influenced responses. Using synthetic data, it was possible to identify the same factor structure as in the human data and to identify relationships between factors that might have been inflated by semantic similarity. Semantically generated synthetic data could help in designing multi-factor questionnaires and in correctly interpreting the relationships found between factors.
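The abstract outlines a two-step pipeline: embed statement-plus-response-option sentences with Sentence-BERT to obtain a similarity matrix, then generate synthetic Likert responses from that matrix and a noise parameter. The sketch below illustrates one way such a pipeline could look; the example items, the sentence template, the choice of cosine similarity, the specific SBERT model, and the Gaussian-latent noise model with 5-point discretisation are all illustrative assumptions, not the authors' published implementation.

```python
# Minimal sketch of a semantic-similarity pipeline of the kind described in the abstract.
# All concrete choices below (items, template, model, noise model) are assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

statements = [
    "Using an automated car would make my trips easier.",
    "I would trust an automated car in city traffic.",
]  # hypothetical questionnaire items
options = ["strongly disagree", "disagree", "neutral", "agree", "strongly agree"]

# Step 1: represent each (statement, response option) pair as a natural language
# sentence and embed it with a Sentence-BERT model.
model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice
sentences = [f"{s} I {o}." for s in statements for o in options]
emb = model.encode(sentences, normalize_embeddings=True)
similarity = emb @ emb.T  # cosine similarity between all statement-option sentences

# Collapse the option-level similarities to an item-level similarity matrix
# by averaging over the 5x5 block of response options for each item pair.
n_opt = len(options)
item_sim = similarity.reshape(len(statements), n_opt,
                              len(statements), n_opt).mean(axis=(1, 3))

# Step 2: generate synthetic Likert responses whose covariance follows the
# item-level similarity, blended with independent noise (weight = noise_level).
def synthetic_responses(item_similarity, n_respondents=500, noise_level=0.5, seed=0):
    rng = np.random.default_rng(seed)
    k = item_similarity.shape[0]
    cov = (1 - noise_level) * item_similarity + noise_level * np.eye(k)
    latent = rng.multivariate_normal(np.zeros(k), cov, size=n_respondents)
    # Discretise the latent scores onto a 1-5 Likert scale via quantile cut points.
    cuts = np.quantile(latent, [0.2, 0.4, 0.6, 0.8])
    return np.digitize(latent, cuts) + 1

synthetic = synthetic_responses(item_sim)  # shape: (n_respondents, n_items)
```

The resulting `synthetic` array can be analysed with the same tools as human survey data (correlation matrices, factor analysis), which is what makes the comparison described in the abstract straightforward.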
| Original language | English |
|---|---|
| Pages (from-to) | 40285-40301 |
| Journal | IEEE Access |
| Volume | 13 |
| DOIs | |
| Publication status | Published - 2025 |
| MoE publication type | A1 Journal article-refereed |
Keywords
- Large language models
- Natural language processing
- Questionnaire
- Synthetic data
- Technology acceptance