Thanks to all subscribers! Much appreciated, and please keep them coming. All are welcome!
Back in February of this year, Ian Bogost wrote a piece in The Atlantic called “ChatGPT is About to Dump More Work on Everyone.” Irrespective of whether you feel AI tools can help your workflow, Bogost is undeniably correct that more work is coming, and we are already seeing the early evidence.
The Nieman Journalism Lab—an excellent publication and resource—recently published “Writing guidelines for the role of AI in your newsroom? Here are some, er, guidelines for that.” The piece’s strength is its global scale, collecting standards from publications around the world. It’s worth the read if only for the scenarios you might not have thought of, such as:
CBC says that they will not use AI to recreate the voice or likeness of any CBC journalist or personality “except to illustrate how the technology works.” Interestingly, they tie this exception to two conditions: (1) “advance approval of our standards office,” and (2) “approval of the individual being ‘recreated.’” Additionally, they will not use the technology in their investigative journalism for facial recognition or voice matching, nor will they use it to generate voices for confidential sources whose identities they are trying to protect. They will continue practices their audiences already understand, such as voice modulation, image blurring, and silhouetting.
So, since journalist X has a terrible head cold, we have recreated their “feeling well” voice for this segment?
Either way, the article’s topic itself—the creation of these numerous standards, the hours of meetings, past and future, that will be dedicated to such work—has already proven Bogost correct. And it’s important to re-emphasize Bogost’s main point: this is something we’ve known about new technologies for a long time. My favorite example is email: the amount of work email has made for people in the workplace is mind-breaking, most of it unimportant, and the time lost to it never to be regained.
What made me think of Bogost’s article today was this seemingly throwaway section in the Nieman article, which refers to guidelines at Insider:
Do not plagiarize! Always verify originality. Best company practices for doing so are likely to evolve, but for now, *at a minimum*, make sure you are running any passages received from ChatGPT through Google search and Grammarly’s plagiarism search. (Emphasis mine.)
Just as Bogost claimed, here is the dumped work in question. Whereas writers would never have to do this with their own material, now, as a regular matter of practice, they must do so, at a minimum. This takes time. As any teacher can tell you, the reason plagiarism is awful is not the moral dimension—it simply makes a ton of work that wasn’t previously required. Grading a paper in 20 minutes becomes a multi-hour adventure. Now I have to search through the whole paper and point out every instance of plagiarism and its original source. I will have to meet with the student individually. Does this require a larger disciplinary action? Whom do I contact about that? It may require a hearing of some sort. And so on.

So what we now have is a) the original prompting of an AI tool to generate text for you, followed by b) checking that generated text against multiple software tools to ensure the machine-generated text was not, in fact, lifted from a human. The conjunction used in the passage quoted above is not “or” but “and”: generate the text and then run, at a minimum, both of these checks. How long does that take? What does this mean for existing schedules and deadlines? Is this saving time and work, or is it accomplishing the dreaded opposite? Do the writers do this work, or does someone else?
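Out of curiosity, here is a minimal sketch of what automating part of that verification step might look like. To be clear, this is all assumption: search_web() is a hypothetical placeholder for whatever search backend a newsroom might wire in (I am not aware of a public Grammarly plagiarism API), and the passage size and similarity threshold are arbitrary. Only Python’s standard-library difflib is real.

```python
import difflib

def search_web(query: str) -> list[str]:
    """Hypothetical placeholder: return candidate snippets from a search
    backend. Not a real API; a newsroom would swap in its own service."""
    return []

def split_passages(text: str, size: int = 40) -> list[str]:
    """Break a draft into ~40-word passages to check individually."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def looks_copied(passage: str, snippet: str, threshold: float = 0.85) -> bool:
    """Flag a passage that is nearly identical to a search-result snippet."""
    ratio = difflib.SequenceMatcher(None, passage.lower(), snippet.lower()).ratio()
    return ratio >= threshold

def check_originality(draft: str) -> list[tuple[str, str]]:
    """Return (passage, snippet) pairs a human still has to review."""
    flagged = []
    for passage in split_passages(draft):
        for snippet in search_web(passage):
            if looks_copied(passage, snippet):
                flagged.append((passage, snippet))
    return flagged
```

Note what the sketch cannot do: every flagged pair still lands on a person’s desk for review, which is exactly the dumped work Bogost describes.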
If you comb through the Nieman article, especially the section on “transparency,” the word “label” is used five times in four paragraphs, referring to various outlets clearly highlighting for readers what content is AI generated. Who does this labor that didn’t previously exist? Is it the writer(s) of the piece? After they are done with Google, Grammarly, and maybe Turnitin? What does the labeling look like? Will this require meetings and the creation of style guides? How do we ensure such labeling does not distract readers as they make their way through the story?
All of this lends some serious weight to the final line of Bogost’s essay, which reads, “Maybe AI will help you work. But more likely, you’ll be working for AI.” I could write a similar sentence about our past and present: “Maybe email will help you work. But more likely, you’ll be working for email.” (See Cal Newport’s new book, A World Without Email.)
Ironically, this feels like one scenario where the creatives and artists win. Just as the hypertext novel and generative poems sent ripples through the early web years, creatives can harness this because the work is by definition not “more”; it is simply part of the vision and project. But for those who will do this work, well, at work, I would consider giving these articles, especially Bogost’s, a read—it should at the very least inspire people to frequently ask, “Is this saving or making work?” The answer is likely both—it saves on tasks you used to do but adds others in their place. The question is, when such guidelines are in place, which way do the scales finally tilt?