Monday, October 10, 2022

Blog Post 5 - Miltenberger

     I chose to discuss Carolyn Miller's article on agency and automation, as I found it really interesting. Miller discusses the introduction and wide reception of automatic grading tools, especially for standardized and placement assessment (Miller, 1). What piqued her interest was the rise of automated writing and speech assessors, which leads her to examine the idea that these tools "denaturalize rhetorical action" and are problematic (Miller, 5). Miller asked a number of speech and writing professors about their thoughts on programs like these, and most of them took issue with using such tools to grade their students' work. The two main reasons the author points out are that the programs cannot take into account "communicative complexities" and that they would do damage to "rhetoric's audience" (Miller, 6). The issue is that an auto-assessor like this removes a good deal of agency and of understanding of certain aspects of speech and writing, while rhetoric "presupposes and celebrates agency" (Miller, 7). The major concern in all of this is that many scholars are convinced that a program set to automatically grade a piece of writing or a speech will not be able to perceive certain rhetorical strategies or pick up on the creative aspects of people's work. This condenses students' work into standards determined by an algorithm, removing much of what rhetors consider valuable.

    In the same vein of AI tools, this piece really made me think about those AI image generators. They let you put in a number of words or phrases and output an image, pulling reference material from the internet to create it. I know many people who think this kind of program is really cool and love creating wacky images with it, but I also know a good deal of people who are really concerned about it. Most of the concern comes from the fear that people will start trying to replace the work of real artists and photographers with these tools, or that these programs will help people distort reality (think deepfakes). These fears haven't really been confirmed yet, but the worry is that programs like this will condition people to disregard the creativity and work of others if it's just easier to type a bunch of words into a program and get something similar. One of the largest issues I've had with the popularity of AI programs like this is that they cannot see human intent, nor do they grant the people using them any agency in their creation or assessment. Recently, Getty Images actually banned AI-created imagery over the murky issue of copyright law (Jacobs, 2022). I think it's an issue we should be considering with heavy scrutiny.

https://www.artnews.com/art-news/news/getty-images-bans-ai-generated-images-due-to-copyright-1234640201/ 

1 comment:

  1. I agree that AI image processors can be really scary to think about. The idea of 'robots' taking over the world went from funny to scary a few years ago, and things like this seem to be getting humanity closer to that outcome. I'll be honest, I've definitely sat around and played with these image programs for a little bit with my friends, but there is always this strange feeling of "what if this actually is bad, and it's not all just a joke?" I do have a question, though. Why can't this form of art become something groundbreaking and positive? How could we turn this into something that could blossom into art careers for people? I know it seems farfetched, and I'm only asking to try to navigate this process by looking at every perspective before judging, but there could be a lot of beautiful, original art made if software like this is developed and used correctly.

