
Using AI at a think tank


If every piece of buzzword tech lived up to its hype, I would currently be lounging in the metaverse, trading NFTs on the blockchain, and typing to you from my Google Glasses. Instead, I’m sitting at a desk in the normal-verse, still unsure what ‘fungible’ means, and typing from a laptop in my boring glasses. But what about the buzzword of the day, “artificial intelligence”? Artificial intelligence is not some failed fad or gimmicky concept, though the loosely defined term is popping up everywhere these days.

Think-tanking requires humans thinking.

Community Solutions Policy Associate Kyle Thompson has written about the promises and pitfalls of AI and how it is currently being leveraged in the field of behavioral health and in Ohio’s public benefits programs. In early 2023, Communications Director Patti Carlyle wrote in depth about the benefits and limitations of using AI in our line of work. Essentially, think-tanking requires humans thinking. This piece explores a few of the ways team members from the Research and Policy teams have leveraged AI in their work. It is also an affirmation of our early conclusion: AI tools are a useful supplement, not a replacement.

How Research Fellow Alex Dorman uses AI

In 2023, I wrote a blog post that involved interviewing three experts about the tragic rise in overdoses in Cuyahoga County. I used AI to assist with the development of that piece in the following ways:

  • I prompted ChatGPT to “develop some questions you could ask an expert on how to solve an impossible problem”
  • I used Otter.ai to automatically transcribe my interview recordings
  • I asked ChatGPT to make pieces of my writeup more succinct

ChatGPT developed perfect interview questions and summarized my writing flawlessly, and Otter.ai transcribed my interviews with zero errors. For anyone who’s ever used these tools, it’s obvious that I’m joking. For everyone else: I’m definitely joking! In reality, ChatGPT gave me some great ideas for creative questions about complicated problems. Otter.ai gave me a transcript riddled with mistakes, but it still provided a helpful starting point. While ChatGPT had some good ideas for shortening my prose, it couldn’t create a good final draft in my voice. Despite these shortcomings, these tools were still helpful for sparking creativity and saving time.
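I used Otter.ai’s web app rather than any code, but for readers curious about what automating the transcription step could look like, here is a minimal, illustrative sketch that sends a recording to OpenAI’s Whisper transcription endpoint through the official Python SDK. The file name, model choice, and the assumption that an API key is available are illustrative, not part of my actual workflow.

```python
# Illustrative sketch only: I used Otter.ai's web interface, not code.
# This shows one comparable programmatic route, OpenAI's Whisper
# transcription endpoint via the official Python SDK (openai >= 1.0).
# It assumes OPENAI_API_KEY is set and "interview.mp3" exists locally.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

with open("interview.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# Like Otter.ai's output, this transcript will contain mistakes;
# it is a starting point for a human editor, not a finished product.
print(transcript.text)
```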

How Policy Associate Kyle Thompson uses AI

One use of AI in policy work is as an advanced grammar check. Whether generative AI is useful boils down to the task at hand and how much outside knowledge is required to complete it. Policy research usually requires precise, contextual information that is as current as possible, and up-to-date information is a known limitation of AI models.


Policy perspectives can’t be captured by AI models for a variety of reasons, so I write out my original policy research and then input it into generative AI to review it for grammatical or syntax errors. This gives me an “outside” opinion on minor things like structure, flow, or clarity before I get deeper feedback from colleagues. Naturally, I ignore many of the hiccups that emerge, especially because hallucinations are an ever-present risk with AI models, often stemming from a lack of information or from bias.

The effectiveness of AI models also depends on the accessibility of the service. For example, the free version of ChatGPT uses GPT-3.5, which is useful for basic tasks like summarizing text, but not for finding real-time information. GPT-3.5 has only been trained on data up until January 2022, which makes it extremely difficult to track the progression of a policy such as the 988 Lifeline, from its federal designation to the appropriations allocated to the Lifeline through the state budget.

I also use AI to summarize data from high-level policy reports, which surfaces context and provides additional perspective. Even though these services are helpful for low-level tasks and basic analysis, I always remind myself that these technologies have biases and can miss contextual information about the policy landscape and specific components of legislative activity that impact health and human services.
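None of this happens in code on my end; I paste drafts into the ChatGPT interface. Still, for readers who want to see what that “outside opinion” step would look like if it were scripted, here is a minimal sketch using the OpenAI Python SDK. The model name, prompt wording, and file name are illustrative assumptions, not our actual setup.

```python
# Minimal, illustrative sketch of the grammar-and-clarity review described
# above, scripted with the OpenAI Python SDK (openai >= 1.0) instead of the
# ChatGPT web interface. Model, prompt, and file name are assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

with open("policy_draft.txt") as f:
    draft = f.read()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # the free-tier model discussed above
    messages=[
        {
            "role": "system",
            "content": (
                "You are a copy editor. Point out grammar, syntax, and "
                "clarity issues only. Do not change the substance or add facts."
            ),
        },
        {"role": "user", "content": draft},
    ],
)

# The model's suggestions are a starting point; a human reviews every one,
# since hallucinated "corrections" are always possible.
print(response.choices[0].message.content)
```

Keeping the instruction narrow (flag issues, don’t rewrite the substance) mirrors how I limit the tool to surface-level feedback before colleagues weigh in.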

Conclusion

AI has a small but specific set of use cases in the work that we do. We focus on input that we control: inspiration, summation, and clarification. And because our work is by nature complex and dense: simplification. How can I improve this headline? What are the main themes that are showing up in this large document, so I can focus where needed? Does this paragraph make sense?


We are currently using the worst versions of these AI tools that we will ever use. They will only continue to get better and potentially become more integrated into our day-to-day lives, much as smartphones, Google, and credit cards were once novel and are now used reflexively, often in conjunction with one another.

These new AI tools are here and available. It’s unclear which ones will stick, but you should take some time to experiment with them to enhance your work (not replace it!). This is certainly not a call to use these tools mindlessly; quite the opposite. The Brookings Institution has detailed a meticulous process it developed to ethically incorporate AI tools into its work. That process involved convening an internal advisory group with diverse perspectives, conducting an all-staff survey, and reviewing other institutions’ policies on using AI. The Center for Community Solutions aims to undertake a similar process soon to ensure the most effective and ethical use of these tools in our work. At a minimum, it’s probably time to ask ChatGPT to explain what an “NFT” is to me like I’m five years old.

Think tanks, government organizations, and academic institutions with policy or position statements on the use of AI:
