
Anthropology Assistant Professor Beth Semel has been awarded Seed Grant Funding from the Princeton AI Lab in support of a workshop entitled “The Future of (Data) Work: Trust, Safety, and Alignment From Within the LLM Pipeline.”
Recent conversations around AI ethics, trust, and safety, often grouped under the umbrella of "alignment," have focused primarily on the implications AI models might hold for end-user populations once those models have been deployed for public use. Far less attention is paid to the impact that the model production process itself has on the essential workers at the front lines of putting trust and safety protocols into practice: the people engaged in "data work," defined by Milagros Miceli and Julian Posada as "the labor involved in the collection, curation, classification, labeling, and verification of data." This workshop brings together qualitative social science researchers, computer science researchers, and data workers to address this gap by imagining a more holistic approach to alignment, one better equipped to redress the harms and ethical issues that recur throughout the full stack of the AI pipeline.
According to the AI Lab’s press release, “funded proposals were chosen for their quality, originality, potential impact, and fit with the goals of current research initiatives within the AI Lab.”
Congratulations, Beth!