KENNESAW, Ga. | Sep 12, 2023
There has been a lot of debate in recent months as to when using a generative AI tool, such as ChatGPT, is fair game and when it is not. Some universities have embraced its use while others have banned it. Some companies are embracing it for coding support while others are discouraging it. In this blog, I’ll give some of my personal thoughts about where lines might be drawn.
I’ll focus on language generators such as ChatGPT because they are getting the most attention. However, I think many of the points that follow apply more broadly as well. I welcome readers to comment on where they agree, disagree, or see other scenarios that I haven’t covered.
Note that I’ll use “ChatGPT” to represent the family of language models that includes its competitors because, much like Kleenex, ChatGPT has become a category label as much as a specific brand.
Clearly Fair: Research Support
One place where I see using generative tools as fully fair is when the output is used for research purposes and serves as another input to your information-gathering process. It is widely accepted across all forms of research that you not only can, but should, investigate what others have already done and what information is already out there, and incorporate those learnings into whatever you are writing about.
Of course, any resource used extensively should be credited, so acknowledging generative AI output as a source will keep you in the fair zone. If you want to be extra thorough, you could also include your prompt text and a link to a document that archives a copy of the output you received from that prompt. This matters because even if you or someone else submits the same prompt later, there is no guarantee that the same answer will be generated. One way to keep such records is sketched below.
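To make that concrete, here is a minimal sketch in Python of one way to archive a prompt/output pair for later citation. The function name, the JSON Lines layout, and the idea of storing the model name alongside the prompt are my own illustrative assumptions, not a standard citation format.

```python
# Minimal sketch: archive a prompt/response pair so it can be cited later.
# Field names and file layout are illustrative assumptions, not a standard.
import json
from datetime import datetime, timezone

def archive_exchange(prompt: str, response: str, model: str, path: str) -> None:
    """Append one prompt/response record to a JSON Lines archive file."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "prompt": prompt,
        "response": response,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example usage with made-up content:
archive_exchange(
    prompt="Summarize the history of the telescope in 200 words.",
    response="(the text the tool returned)",
    model="gpt-4",
    path="ai_research_archive.jsonl",
)
```

The archived file, or a link to it, can then accompany your citation so readers can see exactly what the tool returned when you queried it.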
Fair In Most Cases: Grammar And Flow Cleanup
Many people submit their draft to ChatGPT and ask for it to be cleaned up in terms of grammar and wording. In most cases, this is probably fair because people expect that a document will be reviewed and edited by more than one person. If you’re creating a document for work that you intend to have multiple people review anyway, adding ChatGPT as a reviewer isn’t something I see as a problem.
Note that here I am talking about submitting your nearly completed work and asking for tactical help, not asking ChatGPT to write new content for you. All of my books had a professional editor assigned by the publisher. Nobody buying a book would be surprised by this as it is standard practice. It would be different if the editor wrote and added a new chapter to a book. Then they would need credit.
One scenario where even grammar and flow cleanup wouldn’t be fair is a school writing assignment where the teacher has asked you to write the document yourself. If the intent is to see what you can produce, then it isn’t fair to have ChatGPT help clean it up any more than it would be OK to ask another person to clean up your draft in such a situation. Also be careful in cases like college application essays or job application cover letters. Recipients are expecting that the essay or letter is substantively your own work, produced to help them assess you. Outside help is expected to be minimal and very tactical.
Fair In Most Cases: Coding Support
Several studies have found substantial productivity and speed gains for programmers using code generators. Using one is probably OK in most cases, but with a few caveats.
First, remember that you are putting your name on any code you submit or deploy, so you must carefully check the generated code you are given to make sure it works properly. I believe that, like the low-code tools and other code generators of the past, generative AI can help people who know what they are doing be more effective. However, it is dangerous to use it to code things you don’t know how to do yourself. As a result, carefully review and test any generated code before passing it on, as the sketch below illustrates. Not doing so is going to lead to trouble.
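As a small illustration of that review step, here is a sketch in Python. The parse_iso_date helper stands in for a hypothetical piece of generated code, and the checks below it are the kind of quick sanity tests I would run before relying on it; the function and test cases are invented for this example.

```python
# Hypothetical example: suppose a generative tool produced this helper.
# Before trusting it, read it line by line and exercise the edge cases.
from datetime import date

def parse_iso_date(text: str) -> date:
    """Generated code (illustrative): parse a YYYY-MM-DD string into a date."""
    year, month, day = (int(part) for part in text.split("-"))
    return date(year, month, day)

# Quick sanity tests written by the human taking responsibility for the code.
# A real review would also cover malformed input, time zones, and so on.
assert parse_iso_date("2023-09-12") == date(2023, 9, 12)
assert parse_iso_date("1999-01-01") == date(1999, 1, 1)

try:
    parse_iso_date("not-a-date")
except ValueError:
    pass  # Expected: the generated code should fail loudly on bad input.
else:
    raise AssertionError("Generated code silently accepted bad input")
```

The point is not the specific tests but the habit: whoever signs off on the code, not the tool, owns verifying that it behaves correctly.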
Second, the context of your coding also matters. If, as part of a job interview, you are asked to produce code that achieves some goal, be sure to ask if they want you to literally code it yourself or if they are just asking you to get the job done. If they want to check your coding prowess, using a generative tool would be foul. If they want to see how fast you can solve a challenge using any means, then a generative tool would be fair.
Usually Not Fair: Creating Your Content
If you’re going to put your name on a document, article, or piece of code, you need to be honest about how much help you had. If you ask ChatGPT to create a 1,000-word blog on a given topic and then post it as your own, that is foul. You are misrepresenting something as your work when it simply isn’t, just as much as if a friend had written it for you.
To keep such activity in the fair zone, you should use ChatGPT output only as another source, as outlined in the research support example above. The more you use ChatGPT output as a starting draft rather than as another input, the more you risk misrepresenting what you really contributed and what you didn’t. At a minimum, you need to acknowledge that the typical roles were reversed: generative AI did the substantial portion of the work while you applied tactical editing to its output.
The Simple Rule To Follow
You can probably break each of the above situations down into a simple rule: be honest and transparent about what you contributed to any given output and what you had help with. Just as you credit other people, you should credit ChatGPT and other generative AI tools as well. In most situations, people won’t mind that you made use of a generative tool as long as you are clear about the extent to which you relied on it. At some point, however, substantially passing off generative AI output as your own is not far removed from cutting and pasting from someone else’s document or code and claiming it as your own.
Feel free to comment on where you agree or disagree, or to offer a different scenario to consider.
Originally posted in the Analytics Matters newsletter on LinkedIn