THIS IS A DRAFT SYLLABUS – DETAILS ARE SUBJECT TO CHANGE
COMM 106D: Data and Culture – Winter 2022 (UCSD Communication)
Instructor: Prof. Stuart Geiger (COMM106D@stuartgeiger.com)
Time: MW 9-9:50am, F asynchronous and online (blended mode)
Course summary: Developments in artificial intelligence are being combined with unprecedented levels of personal data collection, and these data are used to make inferences about who we are, what we are interested in, and where we belong. In response, this course takes a cultural lens to issues around data and AI, organized around three related themes: representations of data and AI in popular culture and media; how the culture industries are using data and AI in their work; and the challenges of representing culture through data and AI.
First, we will ask how data collection, data analytics, and AI are discussed and represented in popular culture, media, and politics. What are the tropes, metaphors, slogans, and discourses we rely on when we try to make sense of these issues? How accurate are these representations and what do they miss about how data and AI actually work? Second, we will discuss how the traditional culture industries — such as film, music, television, journalism, sports, or art — are using data science and AI to make key decisions in their work. How does micro-targeting work, and what are the implications of giving us the search results that will keep our eyeballs on the screen the longest? Finally, we will study the issues that arise in representing culture through analyses of data, which date back to the first censuses in ancient times, but have taken a turn with new methods and data. What does it mean to analyze cultural texts using approaches like sentiment analysis or topic modeling? What do these approaches capture and what do they miss? What are the implications of measuring “global happiness” or “the best places to live” by analyzing activity on social media?
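To preview one of these questions concretely: here is a minimal sketch of lexicon-based sentiment analysis, the word-counting approach behind many "measure global happiness from social media" projects. The tiny word lists are invented for illustration only (real systems use large research lexicons), and no programming is required or assumed for this course.

```python
# A minimal lexicon-based sentiment scorer. It illustrates both what such
# methods capture (aggregate word choice) and what they miss (negation,
# sarcasm, context). The word lists below are invented for illustration.

POSITIVE = {"happy", "love", "great", "wonderful", "best"}
NEGATIVE = {"sad", "hate", "terrible", "awful", "worst"}

def sentiment_score(text: str) -> int:
    """Return the count of positive words minus the count of negative words."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("I love this wonderful show"))  # scores +2
print(sentiment_score("I do not love this show"))     # still scores +1: negation is invisible
```

Note that the second sentence is scored as positive: a one-word-at-a-time count cannot see negation, which is exactly the kind of limitation we will examine when asking what these approaches capture and what they miss.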
Relationship to other COMM courses: This course may have some overlap with related courses in the department, including COMM 106E (Data, Science, and Society, which I’ll teach in S22), COMM 106I (Internet Industries), and COMM 162 (Culture Industries). However, it is designed to take a different perspective and cover a different set of topics. We will largely *not* be focusing on topics that are more central to these other courses, such as the role of data and AI in sectors like criminal justice, surveillance, banking, hiring, admissions, welfare benefits, or science, which will be the focus of 106E in S22.
Scheduling and hybrid format (this will not change): This class is officially scheduled for MWF 9-9:50am. Monday and Wednesday will be in-person, with no hybrid/remote/recorded participation option available. The Friday “hour” will be an asynchronous online participation component, which must be completed at a time of your choice between the end of class on Wednesday and 11:59pm on Friday.
Prerequisites (this will not change): This is an intermediate elective, so the only prerequisite is COMM 10, which may also be taken as a co-requisite. This means you can enroll in this class at the same time as taking COMM 10; to do so, you must submit an EASy request after enrolling in COMM 10. See https://communication.ucsd.edu/undergrad/academic-overview/faqs.html#I-want-to-enroll-in-COMM-10-and. No other prerequisites are required. We will learn about some aspects of how data science and AI work, but this course assumes no prior knowledge of computing.
Assessment (some details may change):
15% Mon/Wed in-person participation (you can miss up to 3 classes unexcused; 4 unexcused absences will be penalized 1 letter grade)
15% Friday virtual participation
20% Take-home assignments (3 assignments, lowest grade dropped):
- Critique a representation of data/AI
- Report on the use of data in a culture industry
- (Mis-)represent your culture through data
20% Group midterm performance/skit/parody project, consisting of:
- 5% proposal
- 10% group grade, with points distributed by group members
- 5% statement of group work
30% Final, consisting of:
- 5% Final paper/project proposal
- 25% Final paper/project
Midterm: The midterm will be a group performance project, in which groups of 3-5 students will produce a 4-6 minute creative segment / skit on a topic related to the class. You will have to propose the topic beforehand. You are free to choose the genre (e.g. fictional drama in the style of Black Mirror, media critic segment, comedy / parody / satire of a common AI trope, etc.) and whether it is recorded or live. The only constraint is that it should not be a lecture-style presentation.
Final: The final will be a final paper/project of your choice, in which you analyze cultural representations around data/AI, report on data/AI practices in a culture industry, and/or critique a way of representing culture with data/AI. You can also propose an alternative topic. The default format for the final project is a 6-8 page paper, but students who have experience with filmmaking, creative writing, web design, data science, etc., can propose an alternative format drawing on their skills.
Reading list (subject to change):
- Eslami, M., Rickman, A., Vaccaro, K., Aleyasen, A., Vuong, A., Karahalios, K., Hamilton, K. and Sandvig, C. (2015) ‘“I Always Assumed That I Wasn’t Really That Close to [Her]”: Reasoning about Invisible Algorithms in News Feeds’, in Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15). New York: ACM, pp. 153–162. https://doi.org/10.1145/2702123.2702556
- Geiger, R.S. (2009) ‘Does Habermas Understand the Internet? The Algorithmic Construction of the Blogo/Public Sphere’, Gnovis: A Journal of Communication, Culture, and Technology, 10(1). https://papers.ssrn.com/abstract=2734947
- Ghosh, A. (2019) ‘Forecasting’, in Paul, H. (ed.) Critical Terms in Futures Studies. Springer. https://doi.org/10.1007/978-3-030-28987-4.
- Gillespie, T. (2017) ‘Algorithmically Recognizable: Santorum’s Google Problem, and Google’s Santorum Problem’, Information, Communication & Society, 20(1), pp. 63–80. https://doi.org/10.1080/1369118X.2016.1199721
- Johnson, B. and Alluri, A. (2015) ‘If You’re Happy and You Know It, Write a Tweet’, Marketplace, 10 February. https://www.marketplace.org/2015/02/10/if-youre-happy-and-you-know-it-write-tweet/
- Lecompte, C. (2015) ‘Automation in the Newsroom.’ Cambridge, MA: Nieman Foundation. https://niemanreports.org/articles/automation-in-the-newsroom/
- Losh, E. et al. (2016) ‘Putting the human back into the digital humanities: Feminism, generosity, and mess’, Debates in the digital humanities, pp. 92–103.
- Madrigal, A.C. (2014) ‘How Netflix Reverse Engineered Hollywood’, The Atlantic, 2 January. https://www.theatlantic.com/technology/archive/2014/01/how-netflix-reverse-engineered-hollywood/282679/
- Manovich, L. (2012) ‘Trending: The Promises and the Challenges of Big Social Data’, in Gold, M.K. (ed.) Debates in the Digital Humanities. University of Minnesota Press, pp. 460–475. https://www.jstor.org/stable/10.5749/j.ctttv8hq.30
- Manovich, L. (2013) ‘The Algorithms of Our Lives’, The Chronicle of Higher Education. https://www.chronicle.com/article/the-algorithms-of-our-lives/
- Manovich, L. (2018) ‘Can We Think Without Categories?’, Digital Culture & Society, 4(1), pp. 17–27.
- Manovich, L. (2019) ‘Data’, in Paul, H. (ed.) Critical Terms in Futures Studies. Springer. https://link.springer.com/chapter/10.1007/978-3-030-28987-4_10
- Noble, S.U. (2018) Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.
- Phillips, S. (2016) ‘Can Big Data Find the Next “Harry Potter”?’, The Atlantic, 12 September. https://www.theatlantic.com/technology/archive/2016/09/bestseller-ometer/499256/
- Podolny, S. (2015) ‘If an Algorithm Wrote This, How Would You Even Know?’, The New York Times, 7 March. https://www.nytimes.com/2015/03/08/opinion/sunday/if-an-algorithm-wrote-this-how-would-you-even-know.html
- Risam, R. and Josephs, K.B. (2021) The Digital Black Atlantic. U of Minnesota Press.
- Robertson, S. (2016) ‘The differences between digital humanities and digital history’, Istoriya, 7(7 (51)).
- Scott, J.C. (2008) ‘Cities, People, and Language’ (ch. 2), in Seeing Like a State.
- Seaver, N. (2017) ‘Algorithms as culture: Some tactics for the ethnography of algorithmic systems’, Big Data & Society, 4(2). doi:10.1177/2053951717738104.
- Seaver, N. (2019a) ‘Captivating algorithms: Recommender systems as traps’, Journal of Material Culture, 24(4), pp. 421–436. doi:10.1177/1359183518820366.
- Seaver, N. (2019b) ‘Knowing Algorithms’, in Vertesi, J. and Ribes, D., digitalSTS. Available at: https://digitalsts.net/essays/knowing-algorithms/ (Accessed: 23 November 2021).
- Seaver, N. (2021) ‘Everything lies in a space: cultural data and spatial reality’, Journal of the Royal Anthropological Institute, 27(S1), pp. 43–61. doi:10.1111/1467-9655.13479.
- Vanderbilt, T. (2013) ‘The Science Behind the Netflix Algorithms That Decide What You’ll Watch Next’, Wired. https://www.wired.com/2013/08/qq-netflix-algorithm/
- Willard, L. (2016) ‘Tumblr’s GIF Economy: The Promotional Function of Industrially Gifted Gifsets’, Flow. http://www.flowjournal.org/2016/07/tumblrs-gif-economy/