Dear Friends,
I’m a member of a few professor groups online, and each day there is another post about further shenanigans in the realms of large language models and generative AI (e.g., ChatGPT), which mostly means students presenting LLM-created work as their own. I’ve noticed a few different archetypes of teachers come up when we talk about AI.
There’s the How Do You Do, Fellow Kids? teacher who wants everyone to know how enthusiastically they incorporate AI into their classroom assignments because they are on the cutting edge and teaching their students how to use AI responsibly. In reality, few students are using AI models solely for consultation or brainstorming: they are copying the generated text and presenting those ideas as their own (without citations, mind you).
While the former group of instructors is a little dorky, they worry me less than the Barney Fife teachers who are frantically trying to reverse-engineer whether their students are cheating by running flimsy AI-detection models. I’m pretty horrified at even imagining another higher-ed instructor “calling out” or even failing a student based on some false positive from GPTZero. It just seems like a disservice to oneself and one’s own time to be cosplaying as a police officer on the daily—not to mention potentially causing real harm to a student.
As someone who almost exclusively teaches literature and writing, I don’t think what I’m dealing with is anything new as far as academic dishonesty is concerned. SparkNotes has existed since I was in high school, and CliffsNotes existed for over 40 years before that. Students have always presented other scholars’ thoughts as their own. Creative writers have always plagiarized. Artists have always traced (not to mention the controversy over whether photography’s hyperrealism was a genuine art form when it entered the public sphere in the 1800s). And all those, ahem, rumors about ghost singers on JLo tracks….
Don’t get me wrong. I’m not trying to present myself as lackadaisical or indifferent. In many ways I’m the opposite: someone who has no chill and is constantly trying to re-invent the wheel in their own classroom. I do raise my eyebrow every time I see a “delve into,” “rich tapestry,” “crucible,” “beacon,” “robust,” and so on from students who otherwise do not seem to possess a college-level vocabulary (for those not in the know: ChatGPT tends to love generating the aforementioned words/phrases).
And I have been working on some proprietary techniques to keep my students on the straight-and-narrow (admittedly, this is a little easier in the humanities). But I also have bigger fish to fry: trying to get students to do the reading in the first place + be bold enough to speak in class + use college as a space to form their own ideas about the world. This often comes before verifying whether some 250-word reflection was generated by AI. Some idealist part of me would like to believe that if you cheat your way through college without putting in the hard work, it will catch up with you in some way at some point (I just had an intrusive thought about a cheater-engineer student designing a bridge and sweated a little).
I’m never going to blame my students for their lack of enthusiasm after we’ve spent over 20 years in the culture of No Child Left Behind, where learning is primarily associated with testing, metrics of success, and measurable goals. Nor am I going to be mad at youth who received a poor public school education because our government bleeds funding from schools with the neoliberal end-goal of subsidizing private (and charter) schools with public funds (and, ultimately, abolishing public schools). I’m not even going to go into the ever-increasing criminalization of young people and the ever-growing presence of ‘resource officers’ on-site at schools who contribute to the school-to-prison pipeline—or the ways higher ed has become less education-based and more customer-service-based.
I suppose what I’m trying to work toward is this: I don’t blame my students for entering college with an ambivalence (or even hatred) toward education. They have faced much disenfranchisement. They are a product of their culture. My goal, first and foremost, is to turn them on to at least one reading that they love—as well as to get them to form their own ideas and shape their own worldview just a little bit more. I try to avoid generation-based discourse as much as possible, but I am curious how being inundated with technologies of convenience and customized, curated content at all times (i.e., the For You Page) is shaping Gen Z and Gen Alpha. There is a lot that only time will reveal to me as a teacher.
And to be honest, I’m mostly talking about academic work here. I have not yet been in a situation where a student has taken a creative writing class ‘for an easy credit’ and used ChatGPT to generate all their poetry assignments. I know it will happen (and has happened to other teachers). What I am trying to do is find a path forward that walks some middle ground between caution and negligence, certainty and wonder. I do think there are some people who are hell-bent on not putting in the work, but I also believe I have some capacity to empower students to realize there is value in their own educations.
At the end of the day, I will not be resisting AI, because AI is not going to go away. Compared to what the future holds with artificial intelligence, we’re currently in a valley. Even when we imagine the possibilities of this technology during its next ascent, I don’t think we can truly picture what comes after the next peak. There are mountains and mountains beyond.
This past week, I received an Instagram ad with Gwyneth Paltrow saying that GOOP had reached a milestone and would be giving away X number of free wellness kits if people just covered S/H. It made me feel skeptical/uneasy, and I clicked through, although the yellow flag was not a fully formed thought. It was after I was taken to definitely-not-goop.com that I realized it was a scam trying to get people to enter their credit card numbers (by the way: I reported this to IG, and they declined my report, telling me nothing was wrong with it 🙃). It was upon hitting the back button to return to the ad that I figured out what had made me uneasy: the “Gwyneth” in the ad was a deepfake trained on her likeness and voice.
And this shit is not going away. It’s only going to get smarter, more indiscernible. Even the more innocuous misinformation is going to get harder to detect. Just last week I saw a friend share “photos” from inside a McDonald’s in the 1980s. The six-fingered hands, incomprehensible dream text, and botched McDonald’s logos (which looked like holy sigils used to seal an eldritch horror) were a quick giveaway—although not an immediately obvious one. Soon, they might not be obvious at all.
Don’t get me wrong either: where there is money, and where there is capitalism, there will be exploitation and a lack of ethics. I’m no Pollyanna where this is concerned. But where there is a capitalist model, there are also potentialities toward a type of commons. There are opportunities for a principled way of being. This isn’t new in the arts either. There are lineages of collage and pastiche and canto poems and golden shovel poems and cut-up writing and Kathy Acker’s plagiarism and Richard Prince’s photos-of-photos and Flarf poetry… and AI has to fit in there somewhere.
A Few Operational Thoughts toward a Mindful & Ethical Relationship with AI
We all need to educate ourselves on the fallibility and unreliability of AI. We need to teach each other. We also need to practice vigilance against AI being used for information disorder, scams, hoaxes, political propaganda, and beyond. Businesses that work in the realms of AI should incentivize employees to possess this awareness, including an understanding of the ethical and potential legal risks.
LLMs + GAI should be trained exclusively on open source, public domain, and/or public [copyright] licensed work whose creators explicitly consented to its use for LLM/AI training. Opt-in, not opt-out. If a technology cannot verify what exactly it was trained on, it should no longer be used. Boycotts should be reserved for LLMs that were not ethically trained—not AI across the board.
In an ideal world, ChatGPT and its ilk would be able to accurately point to the credible sources they were trained on when providing generated information (ChatGPT has a tendency to confidently make up fake sources when questioned). In a scholarly context, the writer could then track down the original sources to better understand the roots of what the AI is synthesizing, and also locate actual humans to cite. This is essentially what higher-ed educators like me tell students about Wikipedia: Wiki itself is not a source, but you can use it to find footnotes from thinkers who are potential sources. That being said, citing ChatGPT as an acknowledgment of having used it should be the bare minimum (although I still have concerns about this because it feels like citing Wikipedia).
Any type of art (short story, poem, illustration, song, etc.) generated with AI needs to cite this as the medium. It also needs to cite exactly which generative machine learning model was used to create the artwork (e.g., Adobe Firefly, Midjourney, DALL-E, Soundraw, ChatGPT…). Magazines, publishers, physical venues, and other spaces that exist as platforms or marketplaces for art reserve the right to ban AI-generated work, and this should be respected.
Scholarly works written, edited, or reworded with LLMs should acknowledge this—and academic publishers have every right to reject such work. This is already an issue, with potentially thousands of scientific papers published in the past year relying on LLMs. We should also be looking at the factors that push students to rely on models like ChatGPT instead of their own writing skills.
Creative works created with AI that were not trained on ethical/opt-in models should not be sold for profit.
As some final thoughts: I’ve been playing with AI for a few years now. I’ve always been a bit of an ‘early-adopter type’ as far as tech is concerned. Immediately above, you’ll see two versions of a “seapunk cemetery” I played around with generating in either 2020 or 2021 (I had used some code listed on GitHub that, I’m sorry to say, I forget now). If you compare those with the other three images I made with Adobe Firefly at the tail end of 2023, the improvement in the tech is startling. I was already a little skeptical of Adobe Firefly (since I doubt very many of the creators of the stock images in Adobe’s database had willfully consented to being used to train AI); I haven’t used it in a while since it (unsurprisingly) came out that Firefly also trained on Midjourney (which has absolutely no ethical boundaries).
My own creative and scholarly work has taken precedence at the moment; however, I would like to play around with AI art some more. What I enjoyed about experimenting with Firefly was that it wasn’t just the push of a button (the first generation always looked like the most generic Shutterstock image you’ve ever seen). The image with two boys and a ghost you see above came out of hundreds of iterations. It took me hours to get the boys and the phantom just right, including the dusty light in the hallway. I then fussed around with about twenty different chandeliers (and added an antique phone in the background). Many of these images I took into Photoshop and digitally painted over or edited further. While there are many ethical considerations with AI art, the ‘lazy’ or ‘easy’ arguments are dishonest, or at least ignorant. I’ve never used LLMs to generate creative writing, but I imagine one could edit and fuss around for hours to build meat and organs around the skeleton that AI generated. I’d be curious to see a well-documented process of someone generating a story with a model like ChatGPT and then building upon it to create something else entirely.
A few other final notes: I’ve pasted some of my own creative work into ChatGPT; some of it received shockingly poignant analyses. I’ve posed questions to LLMs before using them as prompts I assign (I tend to take incoherent or incorrect babble as a green flag). I play around with these tools because I want to know how my students are using them, and I want to possess some skill in navigating a quickly changing academic and creative landscape.
In no way am I presenting myself as an expert on this. If you want an oracle, talk to Ted Chiang. What I do know is that this type of technology is going to become a part of our daily lives soon (I mean, it is already), and prohibition across the board is not the answer. Neither is a naïve sense of goodwill. There has to be some careful approach, some accountability for the billion-dollar companies (who I’m more concerned about than some random guy selling prints of AI-generated anime babes in swimsuits), and some principled model that respects the creative class as well as the scholarly one.
Maybe I’ll have more to say about this at some point. Maybe my positions will change. Whatever the case, there’s always another mountain to climb with this type of technology as it quickly and vastly improves itself. I’ll see you in the next valley.
Until next time,
JD