YU News

Generative AI Makes Good Research Better, But Demands Human Discipline

Sy Syms Assistant Professor Travis Oh is a co-author of the report "New Tools, New Roles: A Manager's Guide to Harnessing Generative AI for Marketing Insight."

By Dave DeFusco

On Monday morning, a marketing team sketches out a new product idea. By Friday, they have concept tests, customer reactions and a polished insights deck in hand, all generated with the help of artificial intelligence. What once took months now unfolds in days.

That compressed timeline is no longer hypothetical. It is the reality described in "New Tools, New Roles: A Manager's Guide to Harnessing Generative AI for Marketing Insight," co-authored by Travis Oh, assistant professor of marketing at the Sy Syms School of Business, and his colleagues at Columbia Business School, Georgetown University, USC and the University of Tennessee.

The report explores how generative AI is transforming every stage of marketing research, while warning that speed alone does not guarantee better decisions.

"GenAI dramatically compresses the speed of insight generation," said Oh, "but it doesn't remove the need for rigor. It actually increases it."

At the core of these tools are large language models, or LLMs, trained on vast amounts of text to predict what comes next in a sequence. Their fluency makes them powerful for drafting surveys, summarizing reports and even writing code. That same fluency, however, can be misleading because LLMs are probabilistic. They generate what sounds right, not necessarily what is correct.
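That difference between sounding right and being right can be seen in a deliberately simplified sketch. Real LLMs are neural networks trained on billions of tokens, not frequency tables, but the core objective is the same: a model that only knows which continuation was most common in its training data will confidently emit it, whether or not it is factually correct.

```python
from collections import Counter

# Toy "language model": tiny training corpus, split into tokens.
# Illustrative only; not how production LLMs are built.
training_text = (
    "the survey was reliable . the survey was biased . "
    "the survey was reliable . the survey was fast ."
).split()

# Count which token follows each token in the training text.
follows = {}
for prev, nxt in zip(training_text, training_text[1:]):
    follows.setdefault(prev, Counter())[nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in training."""
    return follows[word].most_common(1)[0][0]

# The model answers with the most *frequent* continuation,
# not the most *true* one.
print(predict_next("was"))
```

Here the model will always complete "the survey was" with "reliable," simply because that phrase appeared most often, which is exactly the fluency-without-verification behavior Oh describes.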

"It should fundamentally shift the mindset from acceptance to interrogation," said Oh. "These systems are predicting what sounds right, not verifying what is right."

That distinction is critical as organizations rush to integrate AI into their workflows. In desk research, for example, a single prompt can produce a synthesis of dozens of studies in seconds. The efficiency is undeniable, but so is the risk. AI can conflate findings or misattribute sources, creating a polished narrative built on shaky ground.

Oh advises managers to treat these outputs as starting points, not conclusions. "If it references multiple sources, open a few of them yourself before using the insight," he said. "GenAI is excellent at surfacing themes, but you should never rely on it blindly for source accuracy."

The same balance of speed and scrutiny applies to internal data. Many organizations sit on years of underused surveys, reports and qualitative research. Generative AI can unlock that value, synthesizing fragmented knowledge into actionable insights. But doing so safely requires careful infrastructure.

"There's enormous value in internal data, but you have to treat access to it as a systems problem," said Oh. "That means using enterprise-grade environments or local deployments and ensuring data never leaves controlled infrastructure."

Perhaps the most immediate impact of GenAI is in designing surveys and research instruments. Tasks that once required weeks of iteration can now be completed in seconds. Yet here, too, precision matters. Vague prompts can lead to subtle but important errors, a problem researchers call construct drift.

Oh points to an example where a retailer asked AI to generate survey items measuring the "visual attractiveness" of produce. The system instead produced statements about freshness and quality, which were related but fundamentally different concepts.

"The best prompts explicitly define what the concept includes and what it excludes," he said. "If you don't do that, the model will default to adjacent ideas that sound right but are theoretically different."
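One lightweight way to follow that advice is to template prompts so the construct's boundaries are stated every time. The helper below is a hypothetical sketch, not from the report; the function name and the wording of the prompt are illustrative.

```python
def build_survey_prompt(construct, includes, excludes, n_items=5):
    """Assemble a survey-item prompt that states what the construct
    does and does not cover, to reduce construct drift."""
    return (
        f"Write {n_items} survey items measuring {construct}.\n"
        f"The construct INCLUDES: {', '.join(includes)}.\n"
        f"The construct EXCLUDES: {', '.join(excludes)}. "
        "Do not write items about the excluded concepts."
    )

# The produce example from the report, with the adjacent concepts
# the model drifted toward listed as explicit exclusions.
prompt = build_survey_prompt(
    "visual attractiveness of produce",
    includes=["color", "shine", "shape", "arrangement"],
    excludes=["freshness", "quality", "taste"],
)
print(prompt)
```

Spelling out the exclusions is the point: without them, the model is free to substitute the "adjacent ideas" Oh warns about.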

Beyond design, generative AI is reshaping how data is collected. Conversational AI tools now enable dynamic, adaptive interviews that respond to each participant in real time. The result is a new kind of research: qualitative depth at quantitative scale.

"What excites me most is that we're no longer forced to trade off depth for scale," said Oh. "You can now get rich, probing responses across large samples."

That flexibility, however, introduces new challenges. Slight variations in how AI poses questions can introduce inconsistencies, making it harder to compare responses across participants.

"The concern is subtle inconsistency," he said. "Small variations can introduce noise, and that noise can look like meaningful variation if you're not careful."

On the analysis side, GenAI can code vast amounts of unstructured data, from text and images to video, in minutes. It can also generate and execute statistical code, lowering technical barriers for many teams. Still, Oh cautions against overreliance on outputs produced entirely within chat interfaces.

"If you can't rerun the code in a proper analytics environment and get the same result, you don't really have a reliable analysis," he said. "You have a convenient output."

Across all these applications, a common theme emerges: generative AI is not a replacement for human judgment but a force multiplier for it. Used well, it expands what teams can explore and how quickly they can move. Used poorly, it accelerates errors just as efficiently.

"What I see most often is managers treating GenAI outputs as if they're already insight," said Oh. "The outputs are fluent and convincing, which creates a false sense of certainty."

The solution, he argues, is discipline. Treat outputs as hypotheses. Verify sources. Validate analyses. In short, pair new tools with new rules.

"In practice, it means you use GenAI to expand your thinking, not to finalize it," said Oh. "You move quickly when exploring, but you slow down deliberately when committing to decisions."

As generative AI continues to reshape marketing research, the organizations that succeed will not simply be the fastest. They will be the ones that balance speed with rigor by turning rapid insights into reliable ones.

"Others will move quickly as well," said Oh. "But they'll just be wrong faster."
