“Humans in the Loop: Humanities Hermeneutics and Machine Learning.” Keynote for DHd2020 (7th Annual Conference of the German Society for Digital Humanities), University of Paderborn, 6 March 2020.
- Abstract: As indicated by the emergent research fields of computational “interpretability” and “explainability,” machine learning creates fundamental hermeneutical problems. One of the least understood aspects of machine learning is how humans learn from machine learning. How does an individual, team, organization, or society “read” computational “distant reading” when it is performed by complex algorithms on immense datasets? Can methods of interpretation familiar to the humanities (e.g., traditional or poststructuralist ways of relating the general and the specific, the abstract and the concrete, the structure and the event, or the same and the different) be applied to machine learning? Further, can such traditions be applied with the explicitness, standardization, and reproducibility needed to engage meaningfully with the different Spielräume – scope for “play” (as in the “play of a rope,” “wiggle room,” or machine-part “tolerance”) – of computation? If so, how might that change the hermeneutics of the humanities themselves?
In his keynote lecture, Alan Liu uses the example of the formalized “interpretation protocol” for topic models that he is developing for the Mellon Foundation-funded WhatEvery1Says project (which is text-analyzing millions of newspaper articles mentioning the humanities) to reflect on how humanistic traditions of interpretation can contribute to machine learning. But he also suggests how machine learning changes humanistic interpretation through fresh ideas about wholes and parts, mimetic representation and probabilistic modeling, and similarity and difference (or identity and culture).
- Video of lecture