We are seeking proposals for chapters that study digital aggression and ethics from a rhetorical perspective.
In July 2017, the Pew Research Center published a report showing that 4 in 10 US adults (41%, up 1% from 2014) have experienced online harassment, and many more have witnessed it. From name-calling and public shaming to physical threats and stalking, much of this activity targets women, members of racial and ethnic minority groups, and transgender individuals. Religious and political views and affiliations have also been the basis for aggression. The research shows that 1 in 4 black people have been the target of online harassment and that women are twice as likely as men to be harassed online (Duggan, 2017). Recent events have made these dynamics visible to the mainstream public: Gamergate in 2014 brought the gendered tensions and gatekeeping practices of gamer culture to widespread attention, and the 2017 white nationalist demonstration in Charlottesville, VA, organized in public digital forums, along with the online political and public responses that followed, gave renewed visibility to hate groups.
This report, these current events, and others like them raise questions about response, responsibility, and accountability that the field of digital rhetoric, with its attention to ethics, is uniquely poised to address. The chapters in this collection, as a whole, will build on what James E. Porter (1998) has called “rhetorical ethics,” which constitute not a moral code or a set of laws but rather a “set of implicit understandings between writer and audience about their relationship” (p. 68). While Porter’s work appeared before the rise of social media and other contemporary web contexts, we have since seen how these implicit agreements extend beyond writer and reader (who often occupy both roles) to include the individuals, communities, and institutions that build and manage technological spaces for discourse and engagement. Further, as James J. Brown, Jr. (2015) argues, digital platforms, networks, and technologies themselves carry ethical programs with rhetorical implications. Through examinations of unethical practices in digital spaces, this collection will contribute to the field’s research and theorizing about ethical participation, while also providing frameworks and approaches for informed responses and actions.
We seek proposals for chapters that address digital aggression through the lens of rhetorical theory and analysis. How do digital harassment, hate speech, and aggression operate rhetorically? What does hate speech directed toward specific individuals or groups look like? What do rhetorical theory and analysis tell us about why and how digital aggression has become so visible and widespread? Where do (and should) responsibility and accountability lie? What approaches might best address digital aggression? How do laws, policies, and technologies regulate unethical rhetorics, and how should they?
Proposal Guidelines
Ideally, the chapters in this collection will present new research and scholarly theorizing on the topic of digital ethics (rather than how-to guides for approaching legal and ethical issues online). The writing should be accessible to a broad audience of researchers and teacher-scholars in rhetoric and writing studies. Chapters should be approximately 6,500 words.
Proposals should be between 400 and 600 words and include:
- A working manuscript title
- An abstract that identifies the focus and topics of the chapter and its contribution to theory and research in the field of digital rhetoric, broadly conceived
Timeline
- Proposals submitted to jreyman@niu.edu and emsparb@ilstu.edu by December 1, 2017
- Proposal decisions delivered by January 15, 2018
- Full manuscripts submitted by July 1, 2018