This Fall, John Mohr and I ran a pilot program for teaching Sociology undergraduates how to use topic modeling in their projects. The pilot program lasted only about 4 weeks, and students were asked to prepare a text corpus of approximately 100 documents using LexisNexis (or copy-paste from the web) and perform analysis using Excel or Google Sheets. Past mentoring projects of both John and me showed that undergraduates can come up with some pretty creative ways to use these computational analysis tools, even if they can’t write the code to do it themselves (see my summer mentorship project). Beyond the technical, the most challenging part of this work is getting students to think about what information they can get from large corpora and how to use the tools to answer questions of interest. It is clear that the era of Big Data and access to the internet has changed the way social processes occur on a large scale (think Fake News), so we need to train social scientists to use new tools and think about data differently.
Last October I visited Erlangen, Germany to attend a workshop set up by Dr. Tim Griebel and Prof. Dr. Stefan Evert called “Texts and Images of Austerity in Britain. A Multimodal Multimedia Analysis”. Tim and Stefan are leading this ongoing project aimed at analyzing 20k news articles from The Telegraph and The Guardian starting in 2010 and leading up to Brexit. I’m working alongside 21 other researchers with backgrounds in discourse analysis, corpus linguistics, computational linguistics, multimodal analysis, and sociology to explore discourse between the two news sources across time from different perspectives.
I’ve been thinking recently about how we think and talk about the relationship between theory and methods in computational text analysis. I argue that an assumed inseparability of theory and methods leads us to false conclusions about the potential for Topic Modeling and other machine learning (ML) approaches to provide meaningful insight, and that it holds us back from developing more systematic interpretation methods and addressing issues like reproducibility and robustness. I’ll respond specifically to Dr. Andrew Hardie’s presentation “Exploratory analysis of word frequencies across corpus texts” given at the 2017 Corpus Linguistics conference in Birmingham. Andrew makes some really good points in his critique about shortcomings and misunderstandings of tools like Topic Modeling, and I hope to contribute to this conversation so that we can further improve these methods – both in how we use them and how we think about them.
I spent Summer of 2017 with my colleague Marcelle Cohen living in and studying the conflict and peace process in Colombia. Our objective was to explore how political discourse as cultural practice creates entrenched ideologies and contentious politics there, and how those discourses relate to other populist movements happening around the world. From a methodological perspective, I’m interested to see how we can use interview data in tandem with computational text analysis and quantitative network methods. We performed interviews with politicians and diplomats, attended political rallies in Bogota and more rural communities, and made connections with some local peace organizations and universities. Our interviews will allow us to give agency to the political elite and understand discourse at a point of production as it is embedded in a political institution. Ultimately I had a great experience that allowed me to test the lenses of cultural and political theory, learn about qualitative methods, and dive deeper into the political culture in Colombia.
This article is more about my meta-impressions – see the academic presentation Political Culture in Colombia for some depth.