Is Your Homework Done? Doing What Is Right To Get It Right

Joan Dubinsky
Fellow, Rutland Institute for Ethics


February 6, 2023


Have you heard the buzz? There’s this great new app – it will do your homework, make you a better writer, guarantee that you will get all A’s this semester, and it will even buy you a winning lottery ticket.


Does this sound like student Nirvana? Of course! And yes, I exaggerate (but only slightly).


So what are we talking about? There is a newly released artificial intelligence application, called ChatGPT, that may be revolutionizing how students learn—or how students demonstrate what they have learned. On the other hand, this new application may not be that revolutionary after all. For this app, as with many other software applications, we need to sift through what it does, what it might be able to do, and the hype and exaggeration that often accompany new product releases.


At its most basic, ChatGPT is a chatbot created by OpenAI, an artificial intelligence research company that was founded as a non-profit. ChatGPT was released to the public in November 2022. Anyone may interact with the chatbot, using simple conversation, to ask about research, apply a policy to a set of facts, or compose an essay. Depending upon current capacity, you may be able to access the chatbot through your existing browser or search engine and try it for yourself. There is a rumor (at least as of today) that OpenAI may have an agreement with Microsoft to include this app, or one of its descendants, in the popular Microsoft Office suite.


Does this sound like an answer to a student’s prayers for homework help? Could ChatGPT be better than a personal tutor, especially if the chatbot will produce a paragraph or an entire essay that is grammatically correct and arguably on point?


Before we can decide whether using ChatGPT to do homework is ethically permissible, we need to know a bit more about how it works. In other words, let’s kick the tires before we buy this new racing scooter.


In layman’s terms, ChatGPT was trained on a huge amount of existing written material, learning the key, repeating ideas and patterns in what has been published. By synthesizing what “everyone already knows,” the chatbot can produce a summary of that knowledge and communicate it using everyday language.
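To make that idea concrete, here is a deliberately tiny sketch in Python of a “predict the next word” generator. It is nothing like ChatGPT’s scale, and the toy corpus is invented for illustration, but the family of technique is the same: learn which words tend to follow which, then generate likely continuations.

    import random
    from collections import defaultdict

    # A toy corpus standing in for "everything already published."
    corpus = ("the cat sat on the mat . the dog sat on the rug . "
              "the cat chased the dog .").split()

    # Record every word that follows each word in the corpus.
    followers = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        followers[prev].append(nxt)

    def generate(start, length=8):
        # Produce text by sampling a plausible next word at each step.
        words = [start]
        for _ in range(length):
            options = followers.get(words[-1])
            if not options:
                break
            words.append(random.choice(options))
        return " ".join(words)

    print(generate("the"))  # e.g. "the dog sat on the mat . the cat"

Notice what the sketch can and cannot do: it recombines what it has already seen, and nothing in it asks whether the output is true or insightful.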


What could possibly be wrong with this approach? There are some assumptions in this logic that may affect our evaluation. First, is all human knowledge available in an online format that can be scanned and sampled? What information or insights are not already recorded in a machine-readable manner? Archaeology teaches us that modern humans have interacted with their environments for well over 200,000 years. We have used written language to record our experiences and thoughts for only around 5,400 years. That’s a knowledge gap of 194,600 years.


The second assumption is one that I call the “lemming problem.” According to urban legend, lemmings are small rodents that are prone to follow the crowd and periodically charge off a cliff to their collective death. As a child, you may have heard a parent admonish that “just because everyone else does something does not make it the right thing to do.” If an artificially intelligent chatbot collects and synthesizes information that is then represented as true, we are confusing popularity with accuracy and insight. We may then present this information as true without checking it for ourselves. There is an old and wise adage in the field of computer science that is worth considering: “Garbage in, garbage out.”
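To see “garbage in, garbage out” in miniature, here is a minimal sketch, assuming a system that simply treats the most frequently repeated claim as the answer. The claims below are invented; the point is that popularity, not accuracy, determines what comes out.

    from collections import Counter

    # Hypothetical "training data": the same legend repeated many times.
    claims = [
        "lemmings charge off cliffs",
        "lemmings charge off cliffs",
        "lemmings charge off cliffs",
        "lemmings do not charge off cliffs",  # accurate, but rare
    ]

    def most_popular_answer(claims):
        # Return whichever claim appears most often, right or wrong.
        return Counter(claims).most_common(1)[0][0]

    print(most_popular_answer(claims))  # "lemmings charge off cliffs"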


The third assumption is that the sampled information is free from bias. If a chatbot relies upon “commonly accepted knowledge,” it risks repeating and amplifying information that carries inherent biases. One of the early AI-driven tools was created by Amazon to search through thousands of machine-readable resumes and identify the most qualified candidates. And what occurred? Candidates whose credentials and activities best matched the resumes of past successful applicants, most of whom were men, were the ones identified as most qualified. In other words, Amazon’s online recruiting software did not like women. In a field dominated by male software developers, talented women were not screened in; they were excluded from the profile of the “best” talent. Amazon, we are told, scrapped this recruiting tool before the story became public in 2018.
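That failure mode is easy to reproduce in a toy example. The following sketch is hypothetical, with invented data; it is not Amazon’s actual system, but it shows how a screener that “learns” from past hiring decisions inherits whatever bias those decisions contained.

    # Invented "past hires" standing in for years of biased decisions.
    past_hires = [
        {"keywords": {"chess club", "cs degree"}, "hired": True},
        {"keywords": {"football", "cs degree"}, "hired": True},
        {"keywords": {"women's chess club", "cs degree"}, "hired": False},
        {"keywords": {"women's coding club", "cs degree"}, "hired": False},
    ]

    # "Learn" which keywords co-occurred with each past outcome.
    favored, penalized = set(), set()
    for record in past_hires:
        (favored if record["hired"] else penalized).update(record["keywords"])

    def score(resume):
        # Score a resume by its resemblance to past (biased) hires.
        return len(resume & favored) - len(resume & penalized)

    # Two equally qualified candidates; one is penalized for "women's".
    print(score({"cs degree", "chess club"}))          # scores 1
    print(score({"cs degree", "women's chess club"}))  # scores -1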


Why is bias a problem? Think of a chatbot as a creation that learns from its own experiences. If all the sampled information, or even half of it, reflects bias against persons belonging to a marginalized group, then what the chatbot has learned, and what it then produces, will carry that inherent bias. ChatGPT reflects and then reinforces a kind of echo chamber.
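A toy simulation can show how that echo chamber compounds. One labeled assumption here: the “model” is mode-seeking, emitting the most common view even more often than its actual share of the data, and its output is then folded back in as new source material.

    import random
    from collections import Counter

    data = ["majority"] * 60 + ["minority"] * 40  # start with a 60/40 skew

    def model_output(data, n=100):
        # Most of the time, emit whichever view is already most common,
        # the way likelihood-maximizing generators sharpen the
        # distribution they were trained on.
        mode = Counter(data).most_common(1)[0][0]
        return [mode if random.random() < 0.9 else random.choice(data)
                for _ in range(n)]

    for generation in range(5):
        data = data + model_output(data)  # output becomes new source text
        share = data.count("majority") / len(data)
        print(f"after generation {generation}: majority share = {share:.0%}")

Run it and the majority view’s share climbs with every generation, even though no new evidence was added.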


An artificial intelligence application can reflect statistical patterns and correlations among information that is already recorded. It’s not likely that such an application can also use pattern recognition to discern nuances and complexities about a topic. Let’s pretend that you use ChatGPT to write a speech. Your computer-generated speech might sound good—but will it offer deep insights and reflect new ways of thinking about a challenge that we are facing? Such a speech might be entertaining if the speaker is dynamic, but after the speech is over, I may not be able to tell you what I found to be so compelling.


So, let us return to our primary question. If I use ChatGPT to do my homework, am I cheating?


When a student submits homework in response to a classroom assignment, the student is implicitly communicating ownership, authenticity, accuracy, and honesty. In essence, that student is saying:

  • I am responsible for this work,
  • this work represents my own ideas and thoughts,
  • my work is accurate, complete, and responds to the professor’s line of inquiry, and
  • the first three statements are truthful.

Machine intelligence may indeed help me do my research. A quick look at Wikipedia can help me form my hypothesis and research plan. Looking at Google Scholar can help me identify some of the seminal articles on my chosen topic and show me which articles have been most frequently cited or challenged. Both of those steps help me organize my ideas, plan my work, and point me towards fruitful—or not so fruitful—avenues. Talking with a research librarian can help me dig more deeply into primary sources, identify analytical tools and methods, and meet experts in the field.


Neither Wikipedia, Google Scholar, nor my local reference librarian will write the outline, conduct the research, prepare research notes, develop arguments, or write the paper. That is not their role or function.


However, if I ask ChatGPT to write my essay, I cannot really say that I am responsible for this piece of work, because I have not done the necessary scholarly work. The essay that I turn in does not represent my ideas and thoughts. I may believe that the essay is accurate, complete, and responsive but only if I trust that the artificial intelligence is accurate, complete, and responsive. Using our “is it cheating” test, I think that I have failed all four requirements. An essay generated by ChatGPT is not truly my work and probably does not reflect what I have learned.


So, should universities simply ban the use of ChatGPT and its spinoffs, future iterations, and competitors? I don’t think so—and here’s why. As we become more familiar with how to integrate artificial intelligence and machine learning into university life, we should be able to limit or focus our use of apps like ChatGPT. If ChatGPT helps me focus my research time or generate a research outline, I can make the argument that its limited use is ethically permissible. That opinion is accompanied by a necessary condition: I must give attribution to the ideas and work that were generated by the chatbot. I would need to add a footnote, show my sources, and explain which ideas were generated through consultation with other sources—one of which may be ChatGPT.


The introduction of every new piece of technology carries some ethical risk. We can imagine some possible consequences of ChatGPT and its future iterations. Our assessment of harms and benefits, consequences, and impact will never be perfect: none of us is very good at foreseeing the future. To navigate these not-yet-known ethical risks, you can do one thing: show your work. If a professor or a university permits the use of ChatGPT, follow the rules and acknowledge when and where your work has been influenced by an artificially intelligent chatbot.