Generative AI has changed the world of research and search engines. From AI-generated summaries at the top of Google results to gen AI search tools that provide citations, it feels like we’ve entered a new era, one in which chatbots can be researchers, editors, and data analysts all in one. Many well-meaning organizations may absorb the gen AI hype and wonder: Why hire a team to produce a research report when, allegedly, AI can do it for you faster and better?
But the reality isn’t so simple.
In fact, AI-generated research has landed some people in the spotlight—and not for good reasons. One law firm cited cases that don’t exist, and even the White House published an AI-generated document that included citations ranging from inaccurate to outright fake. These aren’t isolated glitches; they’re part of a well-documented pattern. Gen AI models are prone to “hallucinations,” confidently fabricating information that isn’t true. And these hallucinations are getting worse with newer models. For example, OpenAI’s own test found that the hallucination rate for its o3 model was about 50 percent; for the new o4-mini model, it was nearly 80 percent. If not caught, these AI-generated mistakes are not only embarrassing—they’re damaging. Misinformation undermines credibility, and trust is hard to rebuild once it’s lost.
That said, gen AI is still a powerful and useful research companion. Used strategically, it can surface research we’ve missed, flag blind spots, and speed up tedious searches. But it should not and cannot be a substitute for human judgment. AI can’t distinguish fact from fiction. People can.
Yes, thorough research and fact-checking are time-consuming processes, but gen AI can make us more efficient if we’re smart about how we use it. To that end, we’ve identified a few dos and don’ts to get the best results out of gen AI systems while preventing fake data and fabricated sources from slipping through.
To get the best results, do the following with gen AI:
- Summarize complex research or explain key terms so you can evaluate the underlying material yourself
- Request links to sources—then double-check that the summaries actually reflect the original content
- Cross-check your own work, for example by using it to spot factual errors or identify shaky logic
But please don’t do these things:
- Provide overly broad or overly specific prompts, especially ones that feed it made-up numbers or hypothetical scenarios as examples; this can increase the risk of hallucination
- Generate facts, figures, or citations without verifying their accuracy and credibility, or use verified material without properly citing it
- Rely on gen AI as your primary research tool instead of consulting scientific journals, nongovernmental organizations, reputable news organizations, experts, or market research sources
It’s tempting to believe the hype about gen AI’s speed and capabilities. But if something sounds too good to be true, it usually is. While these tools are evolving fast, they’re still not reliable or accurate enough to take the lead on research. Instead, the best outcomes come from combining human rigor and discernment with AI efficiency—keeping people at the center of the process with powerful tools at their fingertips.
Because these tools are advancing so fast, it’s important to keep up to date on the latest AI news, trends, and developments. Stay tuned as we continue to track gen AI in our own work and the world around us.