Class Business
- Upcoming Readings
- For Thursday
- Stephen P. Borgatti, et al. (2009), “Network Analysis in the Social Sciences”
alternative source: pre-copyedited manuscript of the article
- Elijah Meeks and Scott B. Weingart, “Introduction to Network Analysis and Representation” — click on the tabs for “centrality,” “clustering coefficient,” etc. for brief interactive tutorials
- For Next Tuesday
- Paola Pascual-Ferrá, Neil Alperstein, and Daniel J. Barnett, “Social Network Analysis of COVID-19 Public Discourse on Twitter: Implications for Risk Communication” (2020)
- Richard Jean So and Hoyt Long, “Network Analysis and the Sociology of Modernism” (2013) — read only pp. 147-149, 158-166
- Practicum 7 (optional) due next Tuesday, May 28th: Social Network Analysis Exercise (Part A)
- For Thursday
Bot or Not?
- Minh Hua and Rita Raley, “Playing With Unicorns: AI Dungeon and Citizen NLP” (2020)
What is striking even now is the extent to which humanistic evaluation in the domain of language generation is situated as a Turing decision: bot or not. We do not however need tales of unicorns to remind us that passable text is itself no longer a unicorn. (link)
- Anthropic. “Decomposing Language Models Into Understandable Components,” 2024.
- Anthropic. “Mapping the Mind of a Large Language Model,” 2024.
Creative or Not?
Epigraphs to Frame the Discussion
- Harold Cohen’s “Aaron” (1972-2010s)
- Whitney Museum of American Art
- Artnet
- Aaron Screensaver (video of screensaver in action)
- Douglass Bakkum, Philip Gamblen, Guy Ben-Ary, Zenas Chao, and Steve Potter, “MEART: The Semi-Living Artist” (2007).
- Margaret A. Boden, The Creative Mind: Myths and Mechanisms, 2nd ed. (1990/2004) (PDF)
Practicum 6: Large Language Models & Text-to-Image Models Exercise
- Nathan Cox, “Disney Parks Infrastructure – Field Guide” (2018)
- CFP: NeurIPS Creative AI Track: Ambiguity
- Qiu, Weihao, and George Legrady. “Fencing Hallucination: An Interactive Installation for Fencing with AI and Synthesizing Chronophotographs.” In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, 1–5. CHI EA ’23. New York, NY, USA: Association for Computing Machinery, 2023.
Other Questions
Functional or not?
Educational or not?
Fun or not?
Other questions to be asked?
Good or Not?
- Emily M. Bender, Timnit Gebru, et al., “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” (2021)
In this paper, we take a step back and ask: How big is too big? What are the possible risks associated with this technology and what paths are available for mitigating those risks? (abstract)
- Environmental & Financial Cost (sect. 3)
- Unfathomable Training Data (sect. 4)
- Overrepresentation of dominant voices (4.1)
- Filtering out of marginalized voices (4.2)
- Encoding bias (4.3)
- “Documentation debt” (4.4)
- Opportunity cost (wasting research on wrong aims) (sect. 5)
- Illusion of coherent meaning and communication (sect. 6)
The ersatz fluency and coherence of LMs raises several risks, precisely because humans are prepared to interpret strings belonging to languages they speak as meaningful and corresponding to the communicative intent of some individual or group of individuals who have accountability for what is said. (sect. 6.2)