ChatGPT is a powerful language model developed by OpenAI, capable of generating human-like text based on input prompts. It has been used in a wide range of applications, from generating news articles to creating personalized responses in chatbots.
However, despite its impressive capabilities, ChatGPT has several limitations that must be taken into account when using it in real-world applications.
- Bias and fairness: As with any AI system, the data that ChatGPT was trained on can influence its output. If the data contains biases, such as gender or racial biases, then ChatGPT will also exhibit these biases in its responses. This means that it is important to carefully evaluate the training data and address any biases before using ChatGPT.
- Lack of context: ChatGPT is a “prompt-based” language model, meaning that it generates text based solely on the input prompt. This means that it can be difficult for ChatGPT to understand the context in which the text is being generated, leading to unexpected or inappropriate responses.
- Limited understanding of the world: While ChatGPT has been trained on vast amounts of data, it still has a limited understanding of the world. This means that it may not be able to generate responses to questions or prompts that fall outside of its training data.
- Vulnerability to adversarial examples: As with any AI system, ChatGPT can be vulnerable to adversarial examples, where a malicious attacker inputs misleading or incorrect information with the intent of tricking the model.
Overall, while ChatGPT is a remarkable language model with many potential applications, it is important to understand its limitations and use it in appropriate ways. By carefully considering these limitations, organizations and individuals can ensure that ChatGPT is used effectively and ethically.
That’s not my writing. That’s what happened when I asked ChatGPT about its limitations.
When I write, I typically leave the introduction or first paragraph until the end, but this time I let ChatGPT write it. I didn’t expect the bot to tell on itself so completely! This transparency is not only welcome, it’s the perfect foundation for this examination of the impact of AI on our work (which I did write). For the rest of this post, anything produced by ChatGPT will be shown in quotation marks.
What we asked ChatGPT about the health and human services policy landscape in Ohio
ChatGPT won’t wade into the realm of opinion, emotion, or morality. Keeping the bots out of that space is ideal, because that’s how you get Skynet. But the limitations listed above mean that there is a narrower field into which we could even invite AI; health and human services on its face demands a human-centric focus. With this context in mind, what can we do with ChatGPT as a data and policy context tool? Not a lot, at least for relevant data more recent than 2021.
What is the state of maternal health in Ohio?
ChatGPT: “I’m sorry, I don’t have the latest data on Ohio’s maternal health as my training data only goes up until 2021. However, as of 2021, maternal health in the United States, including Ohio, has been a concern with high rates of maternal mortality and morbidity. Factors contributing to this issue include lack of access to quality prenatal care, racial and socioeconomic disparities, and increasing rates of chronic conditions such as obesity and diabetes.”
Instead: visit our maternal and infant health page, with recent blogs and factsheets.
How much will the 2023 SNAP cuts be?
ChatGPT: “I’m sorry, but as a language model trained by OpenAI, I do not have access to the most current information on specific policy changes or budget updates. You can check the official government website or contact relevant government agencies for the most up-to-date information on the 2023 SNAP cuts.”
Instead: Read Rachel Cahill’s January 2023 blog about the SNAP cuts, with a planning list. You can even register for a webinar on this very topic Friday, February 10.
What is happening with the Ohio budget for 2024-25?
ChatGPT: “I’m sorry, but I do not have access to the latest information on the Ohio state budget for 2024-2025. Information on the budget and any related updates or changes would be available through official government websites or by contacting relevant government agencies. It is best to check for the most up-to-date information.”
Instead: This is a fair knowledge gap since the Blue Book just came out. However, we have a Budget resource that you can follow for current information and analysis throughout the process.
How many children are living in poverty in Cleveland?
ChatGPT: “I don’t have the exact number of children living in poverty in Cleveland as of my training data cut-off of 2021, but child poverty rates can be estimated by various organizations using data from the US Census Bureau and other sources. According to the US Census Bureau’s American Community Survey, the poverty rate for children under 18 in Cleveland, OH was estimated to be about 42.5% in 2019.”
Instead: Check out Emily Campbell’s September 2022 piece, examining how poverty fell overall for each age group including children, working age adults, and older adults in Cleveland.
What does the 2020 Census show about diversity in Cuyahoga County?
ChatGPT: “According to the 2020 US Census, Cuyahoga County, Ohio has a diverse population with a significant African American population and smaller populations of White, Hispanic, Asian and other racial and ethnic groups.
In 2020, the racial and ethnic composition of Cuyahoga County was estimated to be:
- African American: 31.3%
- White: 57.1%
- Hispanic or Latino: 7.4%
- Asian: 2.6%
It’s important to note that these numbers are subject to a margin of error and may not accurately reflect the entire population of Cuyahoga County.”
Can ChatGPT write a thoughtful blog on hard topics?
Community Solutions Research Fellow Alex Dorman asked ChatGPT to write a blog about white radicalization in Ohio. The tool generated a fairly well-written, seven-paragraph essay focused on social media as a recruitment method and ideological amplifier. There was some thematic repetition, and a few arguments went nowhere. It closed with recommendations: a commitment to raising awareness, promoting critical thinking in young people, and legislation to deal with hate speech, race-based crimes, and violent zealotry.
Cogent enough, right? We would agree with this assessment and these recommendations. The prevailing issue with what ChatGPT produced, and one we can’t overlook, is a lack of reliable sources. Dorman notes that, “while some of these sources may be well cited and properly sourced, we can’t say this for sure without checking them.” This negates the time savings you might hope for from using an AI tool. We were unable even to find records of some of the legislative actions ChatGPT referenced.
In a word: supplement. Not replacement.
A few members of our Research staff agreed that ChatGPT is an excellent tool for generating ideas or scratching out a first draft that can be built out with human-vetted context and verifiable sources. But ChatGPT’s intended ease of use, with “conversational searches in the form of questions” like those conducted at the outset, appears to fall short. There’s a nice word-sketch, but little synthesis, at least with the most current data we have.
Conversely, entering specific terms, phrases, and concepts will yield more comprehensive results. But knowledge of those terms, phrases, and concepts demands an already-informed perspective from the user. If the goal is to ask 101-level questions, the way you might pick the brain of a human expert over coffee, ChatGPT will not deliver the same depth, accuracy, or thoughtfulness. One benefit of AI over human interaction here could be endless patience! AI will continue to answer as many questions as you can generate. But then we have to worry about ChatGPT’s bias and limited understanding of the world, which it warned us about at the outset. Though, this describes people, too, right?
What about the obvious worry over ChatGPT?
OpenAI, the makers of ChatGPT, introduced a second tool just a week after ChatGPT hit peak news cycle and teacher handwringing. It’s meant to help distinguish between text written by a human and text written by its own AI platform and similar technology. The pace of tech watermarks and innovations is dizzying.
Helping teachers and professors detect when students use ChatGPT to cheat or plagiarize is right on time. Some of the largest school districts in the country have banned the technology, concerned that students will use it as a shortcut for essays or other writing assignments and exams. It will be interesting to follow the news arc of this new tool (and the inevitable ones after it).
What you can be certain of, though: Community Solutions is keeping the THINK in think tank, and will continue to produce vetted, verifiable analysis and policy informed by that work. If we do use any tools such as ChatGPT to generate ideas, or to discover information we haven’t found another way, we’ll make that known in our work. As a supplemental knowledge tool, AI makes transparency absolutely essential.
AI has been setting your iPhone reminders, ordering your Amazon packages, writing screenplays and generating artwork (a completely separate blog) for a while now, to our delight and horror. But it can also help improve your resume, write jokes, or recommend something based on your interests and preferences.
Publicly available AI isn’t going anywhere, so if you haven’t had a chance to check out ChatGPT, why not try it? Just use it for good. And check the sources!