Texts and Images of Austerity in Britain

Last October I visited Erlangen, Germany, to attend a workshop set up by Dr. Tim Griebel and Prof. Dr. Stefan Evert called “Texts and Images of Austerity in Britain. A Multimodal Multimedia Analysis”. Tim and Stefan are leading this ongoing project, which aims to analyze 20,000 news articles from The Telegraph and The Guardian published from 2010 up to Brexit. I’m working alongside 21 other researchers with backgrounds in discourse analysis, corpus linguistics, computational linguistics, multimodal analysis, and sociology to explore, from different perspectives, how discourse in the two news sources developed over time.

Comparison of the front pages of The Guardian and The Daily Telegraph after the Greek people voted to reject austerity measures imposed by the IMF.

Continue reading “Texts and Images of Austerity in Britain”

Theory, Method, and Reproducibility in Text Analysis

I’ve been thinking recently about how we think and talk about the relationship between theory and methods in computational text analysis. I argue that an assumed inseparability of theory and methods leads us to false conclusions about the potential for Topic Modeling and other machine learning approaches to provide meaningful insight, and that it holds us back from developing more systematic interpretation methods and from addressing issues like reproducibility and robustness. I’ll respond specifically to Dr. Andrew Hardie’s presentation “Exploratory analysis of word frequencies across corpus texts”, given at the 2017 Corpus Linguistics conference in Birmingham. Andrew makes some strong points in his critique about the shortcomings and misunderstandings of tools like Topic Modeling, and I hope to contribute to this conversation so that we can further improve these methods – both in how we use them and in how we think about them.

Continue reading “Theory, Method, and Reproducibility in Text Analysis”

Fieldwork in Colombia

I spent the summer of 2017 with my colleague Marcelle Cohen living in and studying the conflict and peace process in Colombia. Our objective was to explore how political discourse as cultural practice creates entrenched ideologies and contentious politics there, and how those discourses relate to other populist movements happening around the world. From a methodological perspective, I’m interested in how we can use interview data in tandem with computational text analysis and quantitative network methods. We conducted interviews with politicians and diplomats, attended political rallies in Bogotá and in more rural communities, and made connections with local peace organizations and universities. Our interviews allow us to give agency to the political elite and to understand discourse at its point of production, as it is embedded in a political institution. Ultimately I had a great experience that allowed me to test the lenses of cultural and political theory, learn about qualitative methods, and dive deeper into the political culture of Colombia.

I took this photo after the last disarmament event at a FARC camp in rural Colombia. The after-event scene felt like a foreshadowing of post-accord politics.

This article is more about my meta-impressions – see the academic presentation Political Culture in Colombia for more depth.

Continue reading “Fieldwork in Colombia”

Summer Mentorship – Topic Modeling Secretary of Education News

This summer I had the opportunity to work with sociology undergraduate student Emma Kerr as part of her summer research internship with the UCSB IGERT program, which is designed to introduce big data and network science to students from interdisciplinary backgrounds. Emma proposed a project investigating whether news coverage of Betsy DeVos was more focused on her personal life or on her policy initiatives, relative to previous Secretaries of Education. Emma had taken a computational sociology class at UCSB with John Mohr, where she worked on Twitter analysis and really enjoyed it, so I thought she would be a good fit for the program.

Continue reading “Summer Mentorship – Topic Modeling Secretary of Education News”

Word2Vec Server Python Library

Word2Vec models that have been pre-trained on large corpora are invaluable because they distill the semantic and contextual information of the corpus into a lookup table covering only a few million words. At around 300 dimensions they tend to perform well on synonym and analogy tests, and they can be applied to a number of machine learning tasks. The challenge with these large models is that they take a long time to load into memory when your program starts, and similarity and analogy lookups are computationally expensive enough that you may not want to run them on your desktop computer. I’ve written a Python library called word2vecserver that lets you load a pre-trained model onto a server and use the client library to request vector representations or analogy tests from another computer.

Word2VecServer GitHub Page
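
A typical client-side session looks something like the following minimal sketch. The class and method names and the server address here are illustrative assumptions rather than the exact API – see the GitHub page for the real details:

    # Hypothetical client usage of word2vecserver; the names below are
    # illustrative assumptions, not the library's confirmed API.
    from word2vecserver import Word2VecClient

    # Connect to the machine where the pre-trained model is loaded
    # (host and port are assumed for this example).
    client = Word2VecClient(host='model-server.example.com', port=5000)

    # Request the vector representation of a single word.
    vec = client.get_vector('austerity')

    # Run an analogy test remotely: king - man + woman = ?
    print(client.analogy(positive=['king', 'woman'], negative=['man']))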

To use the library, download the pre-trained Google News vectors file and load it into memory using Gensim, as sketched below.
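
Here is a minimal loading sketch using Gensim’s KeyedVectors interface, assuming the GoogleNews-vectors-negative300.bin.gz file has already been downloaded to the working directory:

    # Load the pre-trained Google News vectors into memory with Gensim.
    from gensim.models import KeyedVectors

    model = KeyedVectors.load_word2vec_format(
        'GoogleNews-vectors-negative300.bin.gz', binary=True)

    # Each word maps to a 300-dimensional numpy vector.
    vec = model['austerity']

    # Analogy test: king - man + woman should land near 'queen'.
    print(model.most_similar(positive=['king', 'woman'],
                             negative=['man'], topn=3))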

I’ll add updates as I begin to use it in different contexts. Feel free to contribute – if you make useful commits, I’ll accept them!