Much of the philosophical community has pivoted over the last decade toward issues involving sociality. New energy has poured into burgeoning fields such as feminist philosophy, philosophy of race, social ontology, and social epistemology, with philosophers tackling timely issues such as social identity, social construction, and social inequality. My own research is in the spirit of this social turn, and yet I set my sights on subdisciplines in which social foundations remain underappreciated and undertheorized. My central concern is to understand the complex relations between normativity and social organization, and I pursue this concern both in metaethics and in the ethics of data and artificial intelligence. In what follows, I expand upon these research projects and upon my research orientation and methodology more broadly.
In my dissertation, I build on expressivist and Kantian constructivist theories to show how metaethicists might better make sense of the relations between normativity and sociality. Specifically, I develop accounts of the social function of a normative perspective (i.e., the social function of making sense of the world in terms of norms) and of the social structure of a normative perspective (i.e., the social understanding embedded within a normative understanding of the world). With regard to social function, I argue that expressivists would do well to make sense of how a normative perspective enables us to hold one another to different patterns of behavior, or different social roles. To this end, I develop the concept of what I call “differential conformist behavior,” according to which we differentially hold one another to different, locally conformed patterns of behavior. With regard to social structure, I argue that Kantian constructivists would do well to expand what some call “practical” identities into “social” identities that co-constitute one another. To understand myself as bound by teaching norms is to understand myself as a teacher, but it is, at the same time, to understand others as students. I am bound by teaching norms that concern students, and they are bound by academic norms that concern their teacher. To see the world as normative is thus to see the world as social.
The final contribution of the dissertation is to provide a conceptual framework that allows us to put these two social views in conversation with one another. In the writing sample I have submitted with this application, I argue that we often offer and defend metaethical accounts from different perspectives, just as experts offer nutritional and botanical accounts of cucumbers from different perspectives. Nutrition and botany can inform one another and together offer a more holistic view of cucumbers. So it is with accounts in metaethics. This complementation of accounts I call productive comparison.
After publishing the chapters of my dissertation, I plan to develop my position further. The social view I put forward aims to walk a fine line between constructivism (norms are up to us) and realism (norms are not up to us). In this sense, the view bears similarities to, and incorporates insights from, quasi-realism. Because my view is more expansive, however, I will have resources that the quasi-realist lacks: with the conceptual tools of perspectives and productive comparison, I can build on arguments from expressivists, constructivists, constitutivists, historicists, and deflationists. My most immediate aim is to write a series of articles that expands upon the dissertation by incorporating mind-independence and objectivity into the position.
My research interest in data ethics and artificial intelligence is as deeply informed by my work in metaethics as it is by interdisciplinary engagement with computer scientists. As a Fritz Fellow, I have been working with Ethics Lab and the Computer Science Department at the Georgetown Initiative on Tech and Society, collaborating to better understand the ethics governing burgeoning social media research that uses machine-learning algorithms to gather and organize organic (non-designed) data. On this topic, I have recently drafted one article and co-authored another with professors and postdocs in the Computer Science Department, and I plan to submit both for publication shortly.
Too often, authors in this field aim merely to transplant bioethical terminology (informed consent, privacy rights, human subjects, etc.) from one context to another. The most prominent issue in the literature is often whether to collect the data at all: whether posts and tweets are public and whether the user has truly consented to take part in the research. The primary drawback of this approach is that it misses the complexities of the social practice of the research, and thus the complexities of the ethical norms that research involves. Researchers don’t just collect public data and respect private data. They construct algorithms that organize data into informational categories (what values go into this construction?); they navigate a variety of social contexts in their research (perhaps there is no strict dichotomy between private and public posts and platforms); and they aim to produce generalizable knowledge about social groups, social institutions, and the like, knowledge that has the potential to reinforce biases, stereotypes, and inequalities, or to dismantle them. Moreover, this approach often unproductively grasps for a solid answer (the answer) on privacy when we haven’t, as a society, determined exactly who the research participants are and what they are doing. A single Facebook post might be interpreted variously as an exhibition, an exhortation, an expression, an invitation, and so on, and it isn’t clear that each of these acts deserves the same level of privacy.
My next project in this area will explore the ethical issues surrounding the use of machine-learning algorithms to sort social media data (and other organic data) into informational sets that are then used to target individuals according to presumed traits and social identities. Our experience online is, more and more, personalized by companies that determine our social identities (race, gender, age, political affiliation, etc.) and curate our experience on the basis of those identities. Are there moral harms or benefits to this radical personalization of one’s online experience? Is this sort of targeting any different from the way we engage with one another offline, which, similarly, is often based on our determinations of one another’s social identities?
In the near future, I plan to begin a larger project that joins metaethics and AI ethics, focusing on the social foundations of ethical agency for artificial intelligence. Computer scientists speak of the difference between “tool” AI and “agent” AI, and my background in sociality and metaethics will be instrumental in making sense of whether and how the agent AI of the near future can be held morally responsible. Ethical agency is part of a social practice in which we hold one another accountable for our actions. Currently, when AI “goes wrong,” as it were, we don’t hold it accountable: we reprogram it. This would be akin to performing brain surgery on a child who is being too aggressive with other children. Can we develop AI that can hold and be held accountable in the right sort of ways? What if future AI is not a network of discrete entities (like discrete human agents), but is instead epistemically and practically interwoven in a way that precludes the very form of social life that gives rise to ethical practices of holding accountable? Could such an AI still engage in what we would consider ethical learning? I plan to explore these questions and more in a series of articles and, eventually, a monograph on the social foundations of ethical agency in AI.