Hi, How Can I Help You?

If you were to ask ChatGPT, a newly released AI chatbot, to write an essay on the impact of the Great Depression on the 1950s, chances are it would produce a compelling and well-written piece about how the Great Depression changed American society and culture for decades to come. Similarly, if you were to ask it what the force of gravity would be on a planet with one-half Earth’s mass and twice its radius, it could tell you that the force would be one-eighth that of Earth’s. By now it is common knowledge that ChatGPT seemingly has an answer to any question. ChatGPT (Generative Pre-trained Transformer) is a chatbot prototype launched by the artificial intelligence development company OpenAI in November 2022. ChatGPT’s widespread usage has highlighted certain complications with the application that have yet to be fixed, posing important questions: How reliable is ChatGPT? Should it be available to the general public at this time? Despite its immense popularity in the past few months, ChatGPT should not be a public platform until specific concerns are addressed.
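For readers who want to verify the gravity figure, it follows directly from Newton’s law of universal gravitation; the sketch below uses the standard symbols (G for the gravitational constant, M and R for Earth’s mass and radius):

```latex
g' = \frac{G M'}{R'^2}
   = \frac{G \,(M/2)}{(2R)^2}
   = \frac{1}{2} \cdot \frac{1}{4} \cdot \frac{G M}{R^2}
   = \frac{g}{8}
```

Halving the mass halves the pull, and doubling the radius divides it by four, giving one-eighth of Earth’s surface gravity, just as the chatbot reports.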

For one, the application is a prototype and still relatively new. It is important to note that ChatGPT is not a search engine like Google. While Google searches the entirety of the web within seconds to provide the best matches for a query, ChatGPT is restricted to the data it has been trained on. As of right now, the system has no data after 2021. If a user were to ask ChatGPT who won the 2022 FIFA World Cup, it would say that it has no such information. Additionally, it has been recorded that some of ChatGPT’s responses carry biased undertones, if not flat-out disturbing content. According to Bloomberg, while the program flags most concerning requests, Melanie Mitchell, a professor at the Santa Fe Institute studying AI, says “[systems] rely on statistical associations among words and phrases to generate language, which itself can be biased in racist, sexist and other ways.” For example, one user got the program to write a song with the lyrics: “if you see a woman in a lab coat, she’s probably just there to clean the floor.” This suggests that the training data and goals of the program need serious reconsideration and revision. In short, there are ethical issues that need reexamination, and the program seriously lacks information about recent events.

ChatGPT also poses ethical issues within academic settings, introducing a new way for students to cheat on essay-writing assignments in school. For example, in South Carolina, a philosophy professor at Furman University caught a student using ChatGPT to write an essay for class. Darren Hick, the professor, said in a New York Times interview that the most concerning part was that the program could replicate, with reasonable accuracy, a “clean,” college-level writing style. Despite the availability of AI-detection software such as Originality.AI, this situation raises important questions: How do ChatGPT’s abilities affect the originality and legitimacy of academic work? Students who lack the motivation to research a topic and develop their own ideas might be enticed to use a program that offers them eloquent answers in a matter of seconds.

As of right now, everything about ChatGPT is too case-specific and limited to make a final judgment about the program. The restrictions on its data and the clear bias in its responses, in addition to the ethical issues within classroom settings, need to be addressed. ChatGPT should not yet be available to the general public, as it is a prototype that still needs substantial refinement to become a more reliable and, hopefully, ethical program—if programs can be ethical in the first place. AI will be part of the future, so working to improve it is the first step in the right direction.