In "What Can Automation Tell Us About Agency?", Miller describes the dangers of automated grading systems and how they threaten spoken and written rhetorical agency. She states, "in effect, automated assessment systems create a situation in which Burkean symbolic action directly confronts nonsymbolic motion in the form of the machine. This confrontation suggests that rhetorical agency is exactly what is at stake in automated assessment. It raises questions about the action and agentive capacity of the writer or speaker in the context of the presumably agentless motion of the mechanized audience" (Miller 140). In other words, machines can't interpret a human's tonality, agency, and energy, which are all key factors in how a message is delivered and should be interpreted.
This reminds me of how texting and email aren't the best forms of communication compared to actually speaking to someone, especially face to face, because you can't get a sense of the other person's actual emotions from the written word alone, stripped of its context. That gap is a big source of miscommunication and misunderstanding, and the stakes are even higher when we're looking at automated assessment of students' schoolwork. Humans can pick up on a writer's or speaker's agency far better than an automated, emotionless machine can, so it's essential to keep that human element.
Miller, Carolyn R. "What Can Automation Tell Us About Agency?" Rhetoric Society Quarterly, 26 February 2011.
Hello,
I totally agree that a lot can be lost in translation with texting and emailing. I personally would prefer a face-to-face conversation or a phone call rather than trying to convey not only my message but also my tone and energy through text or email. When you talked about automated assessments and how they can take away the agency of the person being assessed, it reminded me of a recent experience in a class where we had an online test that was auto-graded. I got almost every fill-in-the-blank question marked wrong because I didn't capitalize a letter, and on one question there were two valid names for the answer but only one was being counted. It was frustrating to technically have all the fill-in-the-blank answers right while being told they were wrong, and the instructor had to go back through and regrade the test by hand, spending the very time the auto-grading was supposed to save.