Third-party data annotators often fail to accurately read the emotions of others, study finds

Machine learning algorithms and large language models (LLMs), such as the model underpinning ChatGPT, have proved effective at a wide range of tasks. These models are trained on various types of data (e.g., text, images, videos, and audio recordings), which are typically annotated by humans who label important features, including the emotions expressed in the data.