The primary objective of this study is to explore patterns in the political and behavioral expression of academics on social media. By analyzing a novel dataset linking Twitter profiles to academic records, we aim to understand how academics' online expressions might shape public perceptions and influence policy debates.
We utilized a dataset of 300,000 academics on Twitter, matched to their OpenAlex profiles, which include detailed academic metadata. Using the Twitter API, we collected the complete timelines of these academics from January 1, 2016, to December 31, 2022, capturing tweets, retweets, quoted retweets, and replies.
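The linkage step can be sketched as a name-based join between the two sources. This is only an illustrative sketch, not the authors' matching pipeline: the field names (`handle`, `name`, `display_name`, `id`) and the exact-match rule on normalized names are assumptions for illustration.

```python
# Illustrative sketch: linking Twitter profiles to OpenAlex author records
# by normalized display name. Field names and the matching rule are
# assumptions, not the paper's actual method.

TITLES = {"dr", "prof", "professor"}  # honorifics stripped before matching

def normalize(name: str) -> str:
    """Lowercase, drop punctuation and titles: 'Dr. Jane Smith' -> 'jane smith'."""
    cleaned = "".join(c for c in name.lower() if c.isalnum() or c.isspace())
    return " ".join(t for t in cleaned.split() if t not in TITLES)

def link_profiles(twitter_profiles: list[dict], openalex_authors: list[dict]) -> dict:
    """Return {twitter_handle: openalex_id} for exact normalized-name matches."""
    by_name = {normalize(a["display_name"]): a["id"] for a in openalex_authors}
    return {
        p["handle"]: by_name[normalize(p["name"])]
        for p in twitter_profiles
        if normalize(p["name"]) in by_name
    }
```

In practice such a join would also need disambiguation (many scholars share a name), which the sketch omits.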
The study focused on politically salient topics such as climate action, immigration, abortion rights, racial equality, the welfare state, taxation policy, and income redistribution. These topics were chosen for their global relevance and policy implications.
We employed GPT-4 to generate keyword dictionaries for topic detection and GPT-3.5 Turbo for stance detection. Tweets were classified into topics, and their stances were categorized as pro, anti, neutral, or unrelated based on the content of each tweet.
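The topic-detection stage can be sketched as matching a keyword dictionary against tweet text. A hedged sketch only: the paper generated its dictionaries with GPT-4, whereas the keywords and topic names below are invented examples, and the GPT-3.5 Turbo stance-classification step is not reproduced here.

```python
# Sketch of keyword-based topic tagging. The dictionaries are invented
# examples standing in for the GPT-4-generated ones described in the paper.

TOPIC_KEYWORDS = {
    "climate_action": {"climate change", "carbon emissions", "net zero"},
    "immigration": {"immigration", "asylum", "border policy"},
    "taxation": {"tax policy", "wealth tax", "income redistribution"},
}

# Stance labels applied downstream by the LLM classifier (per the paper).
STANCES = ("pro", "anti", "neutral", "unrelated")

def detect_topics(tweet: str) -> list[str]:
    """Return every topic whose keyword list matches the lowercased tweet."""
    text = tweet.lower()
    return [
        topic
        for topic, keywords in TOPIC_KEYWORDS.items()
        if any(kw in text for kw in keywords)
    ]
```

A tweet can match several topics at once, so each (tweet, topic) pair would then be sent to the stance classifier separately.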
Academics are significantly more expressive about climate action, cultural liberalism, and economic collectivism compared to the general Twitter population.
There is notable inequality in content creation and engagement, with a small fraction of academics generating the majority of content.
Differences in political expression and tone were observed across fields of study, gender, and geographical regions.
Academics from top-ranked institutions and US-based scholars exhibit higher egocentrism and toxicity in their tweets.
This study is the first to provide a comprehensive descriptive analysis of academics' political and behavioral expressions on social media. It highlights the potential impact of these expressions on public perceptions of science and policy debates, adding a new dimension to the literature on science communication and social media.
The text analysis methods were validated using human-labeled datasets, achieving high F1 scores for stance detection. The dynamic keyword dictionaries generated by GPT-4 and the precise stance classification by GPT-3.5 Turbo ensure robustness and accuracy in our analysis.
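The validation metric can be made concrete with a small macro-averaged F1 computation over the four stance labels. The label set matches the paper's scheme; the scoring code and any example data are illustrative, not the authors' evaluation script.

```python
# Macro-averaged F1 between human-labeled stances (gold) and model
# predictions (pred). Labels with no true or predicted instances
# contribute 0 to the average, mirroring a zero-division-as-zero policy.

LABELS = ("pro", "anti", "neutral", "unrelated")

def macro_f1(gold: list[str], pred: list[str]) -> float:
    scores = []
    for label in LABELS:
        tp = sum(g == label and p == label for g, p in zip(gold, pred))
        fp = sum(g != label and p == label for g, p in zip(gold, pred))
        fn = sum(g == label and p != label for g, p in zip(gold, pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        scores.append(f1)
    return sum(scores) / len(scores)
```

Per-label F1 (rather than plain accuracy) matters here because "unrelated" tweets typically dominate, and a classifier could score high accuracy while missing the rarer pro/anti stances.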
The study suggests that a small, vocal subset of academics on social media might skew public perceptions of academic consensus. This could influence policy debates and public trust in science. Understanding these patterns can help in designing more balanced and inclusive science communication strategies.
Future research can explore the motivations behind academics' political expressions on social media, such as the desire for name recognition, ideological drive, or the intention to share knowledge. Further studies could also investigate the impact of these expressions on public trust and policy formulation.
The dataset will be made available at multiple levels of aggregation to encourage further research on public political expression by academics and its impact on public discourse and policy. Researchers interested in accessing the more detailed dataset can contact us through our data access contact form [here]. Please also use the form to ask any related questions about the data.