In the rapidly evolving landscape of product development, finding efficient methods for research and ideation has become increasingly important. While generative AI has faced some skepticism within product management circles, specific features like Deep Research in tools such as ChatGPT and Gemini (as well as DeepSearch in Grok 3) are proving to be remarkably valuable for product research workflows.
A Systematic Approach to AI-Powered Product Research
After experimenting with multiple AI tools and approaches, I’ve developed a formula that consistently yields high-quality research results with significantly less effort than traditional methods. Here’s the process I follow:
Step 1: Converting Problem Statements to Jobs-To-Be-Done
I begin with a clear problem statement that articulates the challenge I’m trying to solve. Then I use a Large Language Model (LLM) to transform this statement into a structured “Jobs To Be Done” (JTBD) format. This transformation focuses the research by clearly defining what the user is trying to accomplish.
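To make this step concrete, here is a minimal Python sketch of how such a transformation prompt might be assembled. The template wording and the function name are illustrative assumptions, not my exact prompt:

```python
# Illustrative template for the Step 1 transformation; the exact
# wording is an assumption, not the prompt used in the article.
JTBD_TEMPLATE = (
    "Rewrite the following problem statement as a single "
    "Jobs-To-Be-Done statement using the form: "
    "'When I [situation], I want [capability], so I can [outcome].'\n\n"
    "Problem statement:\n{problem}"
)

def build_jtbd_prompt(problem: str) -> str:
    """Fill the transformation template with a raw problem statement."""
    return JTBD_TEMPLATE.format(problem=problem.strip())
```

The resulting string is then pasted into (or sent via API to) whichever LLM you prefer for the transformation.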
Step 2: Leveraging Deep Research for Comprehensive Analysis
Once I have a well-defined JTBD statement, I submit it to an AI tool with Deep Research capabilities, requesting:
- An analysis of existing solutions for the problem
- Identification of potential pain points that current solutions fail to address
The AI typically returns detailed findings that would otherwise require hours of manual research. The quality and comprehensiveness of this information have consistently impressed me, as the model can access and synthesize vast amounts of online information.
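The research request itself is just the JTBD statement plus the two questions above. A hedged sketch of how one might compose that brief programmatically (the phrasing is an assumption, not my verbatim request):

```python
def build_research_brief(jtbd: str) -> str:
    """Combine a JTBD statement with the two standing research questions.

    The question wording below is illustrative; adapt it to your domain.
    """
    return (
        f"{jtbd.strip()}\n\n"
        "Please research:\n"
        "1. Existing solutions that address this job.\n"
        "2. Pain points that current solutions fail to address.\n"
    )
```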
Step 3: Synthesis and Comparison
For the final step, I use a reasoning-focused model (such as OpenAI’s o3-mini-high) to synthesize the research into a comparative analysis of existing solutions. This creates a clear, structured overview that helps identify market gaps and opportunities.
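The comparative overview the reasoning model produces is essentially a feature matrix. As a rough illustration of the target output format (tool names and feature columns here are placeholders, not research findings), one could render it as a Markdown table:

```python
from dataclasses import dataclass

@dataclass
class Solution:
    """One tool from the research, scored on two example features."""
    name: str
    visual_builder: bool
    custom_data: bool

def to_markdown_table(solutions: list[Solution]) -> str:
    """Render the solutions as a Markdown comparison table."""
    lines = [
        "| Tool | Visual builder | Custom data |",
        "| --- | --- | --- |",
    ]
    for s in solutions:
        lines.append(
            f"| {s.name} | {'yes' if s.visual_builder else 'no'} "
            f"| {'yes' if s.custom_data else 'no'} |"
        )
    return "\n".join(lines)
```

In practice I ask the model for this table directly rather than building it by hand; the sketch just shows the structure I’m after.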
A Real-World Example
To illustrate this process, I’ll walk through a practical example:
Initial Problem Statement
I started with this challenge:
“Right now, there are many different LLMs that can perform various tasks, and even a single LLM can handle multiple tasks when prompted in different ways. Currently, when I want to complete a multi-step task that requires different skills, I create a separate prompt template for each skill. I enter my request into the first template and submit it to the model of choice, then copy-paste the output into the next prompt template and send it to a new chat session (or another model). This solves my problem but is not very user-friendly. I’m thinking about creating a no-code platform for building custom prompt pipelines, which lets you create and connect different prompt templates. You should be able to provide custom instructions for each step of the pipeline and adjust settings such as which model it uses, as well as more advanced options like temperature and output format. It would have a user interface with a toolbox that lets you drag and drop existing templates or create your own. You should also be able to bring in resources such as LLMs and custom data to feed your models, and to save your pipeline and load it as an application. The goal is to enable product managers and developers to easily create prototypes for LLM applications without extensive coding.”
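The manual copy-paste workflow described above can be sketched as a tiny pipeline where each step’s output becomes the next template’s input. This is a minimal illustration of the idea, not a design for the proposed product; `call_model` is a placeholder for a real LLM API call:

```python
from typing import Callable

def run_pipeline(
    templates: list[str],
    user_input: str,
    call_model: Callable[[str], str],
) -> str:
    """Chain prompt templates: step n's output fills {input} of step n+1.

    `call_model` stands in for any LLM call; each template must contain
    an `{input}` slot.
    """
    text = user_input
    for template in templates:
        prompt = template.format(input=text)
        text = call_model(prompt)
    return text
```

With a stub model that simply brackets its prompt, `run_pipeline(["Summarize: {input}", "Translate: {input}"], "hello", fake)` shows how the two steps nest, which is exactly the manual copy-paste loop automated.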
Transformed into JTBD Statement
Using OpenAI’s o1 model, I transformed this into:
“When I need to build or experiment with a multi-step LLM workflow, I want a no-code platform that lets me visually create and connect different prompt templates, configure model settings, and integrate custom data, so I can quickly prototype LLM applications without writing code or manually shuffling outputs between models.”
Deep Research Process
I then submitted this JTBD statement to OpenAI’s Deep Research feature with specific instructions to identify:
- Current solutions for this problem
- Potential pain points for product managers that a new product could address
Interestingly, before conducting its research, the AI asked four clarifying questions that proved highly relevant. After I answered them, it worked for approximately 11 minutes before delivering a comprehensive report on various no-code LLM tools for both startups and enterprise applications.
Final Analysis
Using the o3-mini-high model, I created a summary table comparing the key features of all the solutions identified in the research.
Benefits and Limitations
While this approach doesn’t eliminate the need for human involvement, it dramatically increases research efficiency:
Benefits:
- Condensed what would have been several days of work into a few hours
- Discovered solutions I wasn’t previously aware of
- Provided a comprehensive market landscape to identify potential gaps
- Helped refine my product idea based on what already exists
Limitations:
- Still required several hours to review the analysis and cited sources
- Needed hands-on exploration of unfamiliar tools mentioned in the research
- Occasionally included information not directly related to the query
- Struggled at times with nuanced differences between semantically similar concepts
Conclusion
The combination of a Jobs-To-Be-Done framework with AI Deep Research capabilities offers a powerful toolkit for product professionals. While it’s not a complete replacement for traditional research methods, this approach serves as an excellent starting point that can save significant time and surface insights you might otherwise miss.
As AI research tools continue to improve, this approach will likely become increasingly valuable for product managers and developers looking to accelerate their research process and make more informed decisions.
What’s your experience with using AI tools for product research? Have you found other effective combinations of AI features that enhance your workflow?