The Current Expert Problem with ChatGPT

I was talking with my friend Mehdi Zonji about this, and I thought I should share.

ChatGPT’s ability to generate responses across a vast range of topics is both a strength and a limitation. It’s not a sniper (answering specifically); it’s a shotgun (answering generically). And that’s a problem.

On the technical side, ChatGPT does a really good job of playing a “Junior Engineer.” At the time of writing, in Q1 2025, ChatGPT still misses the 2nd- and 3rd-order effects of its proposed technical solutions.

Take ChatGPT’s confidence in incorrect answers. Right now, ChatGPT is like a junior who never says “I don’t know.” It will give you an answer with confidence, whether or not it should, and whether or not the answer is actually right.

This is extremely dangerous for engineers new to the field, who copy/paste solutions (mostly code) without understanding what’s inside. For research and experimental work, that’s fine. For anything going to prod, it’s not.

ChatGPT is still incredibly useful, as long as you treat it like a junior team member who needs review, context, and guardrails. And you, the senior engineer, still need to nudge it toward a specific design pattern or proper use of OOP, or correct it when it forgets that a Lambda function has memory and execution limits and that its proposed solution therefore isn’t valid.
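To make the Lambda point concrete, here is a minimal sketch of the kind of sanity check a senior would apply before accepting a proposed solution. The function name and the sample job numbers are invented for illustration; the limits themselves (a 15-minute maximum execution time and 10,240 MB maximum memory) are AWS Lambda’s documented hard caps at the time of writing.

```python
# AWS Lambda hard limits (per the AWS docs, as of Q1 2025):
LAMBDA_MAX_TIMEOUT_SECONDS = 15 * 60   # 15-minute execution cap
LAMBDA_MAX_MEMORY_MB = 10_240          # 10 GB memory cap

def fits_in_lambda(estimated_runtime_s: float, estimated_memory_mb: int) -> bool:
    """Return True only if the job fits within Lambda's hard limits."""
    return (estimated_runtime_s <= LAMBDA_MAX_TIMEOUT_SECONDS
            and estimated_memory_mb <= LAMBDA_MAX_MEMORY_MB)

# A 30-minute batch job does not belong in a Lambda function,
# no matter how confidently it was proposed:
print(fits_in_lambda(30 * 60, 2_048))   # False
print(fits_in_lambda(60, 512))          # True
```

The check itself is trivial; the point is that a junior (human or AI) often skips it, and a senior doesn’t.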

In Product Management, ChatGPT can help with surface-level tasks: generating user stories, writing PRDs, summarising customer interviews. That’s helpful, and in many ways mirrors a Junior PM—someone who’s learning the ropes and can produce artifacts quickly if given direction.

But it can’t replace the judgment of a Senior PM who’s making trade-offs between business value and tech complexity, who knows when to ship a rough cut vs. when to push for polish, and who aligns stakeholders without just regurgitating frameworks.

For example, if you ask ChatGPT how to prioritize features, it might give you RICE or MoSCoW. But it won’t challenge the underlying assumptions, like: Are we even building the right product? What’s the risk of not shipping this? It can’t feel the tension between short-term growth and long-term strategy. That requires context, lived experience, political awareness, and an instinct for timing that machines currently don’t have.
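The RICE arithmetic itself is the easy part. A minimal sketch (the feature names and numbers below are invented for illustration):

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE = (Reach * Impact * Confidence) / Effort."""
    return (reach * impact * confidence) / effort

# Two hypothetical features, scored:
features = {
    "dark_mode": rice_score(reach=5000, impact=1.0, confidence=0.8, effort=2),
    "sso_login": rice_score(reach=800, impact=3.0, confidence=0.9, effort=5),
}

# Highest score first:
for name, score in sorted(features.items(), key=lambda kv: -kv[1]):
    print(name, round(score))
```

The formula will happily rank features for you; what it can’t tell you is whether either feature should exist at all.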

The danger is when people start treating ChatGPT like a senior.

When it comes to deep expertise, a human, a senior, is still needed.

This will change. And I hope it does.

Published on April 08, 2025 23:13