AI’s Fundamental Problem in Negotiations
- May 27

The fundamental problem AI faces in negotiations is not a lack of computational power, nor the energy required to support intensive calculations. It’s not the inability to “read” human body language, nor the unpredictable and often irrational behaviour of negotiators, nor even the absence of emotional intelligence or the presence of human scepticism.
The real problem lies in the necessity to train large language models (LLMs) on vast volumes of real-world human negotiation data—data that is almost never made public.
Consider this: how many deals are negotiated around the world each day? Hundreds of thousands? Millions? Perhaps even billions. From bargaining over a 10-dollar fake LV bag in an Asian bazaar to closing a multibillion-pound deal in the City, human behaviour in negotiations follows remarkably similar patterns. Ideally, all of these would be captured in training data for LLMs.
But here’s the issue—details of 99% of negotiations remain unknown, even to the parties involved. The other side is unlikely to tell you how much more they were willing to offer, and most other specifics remain confidential as well.
Without this data, LLM training becomes self-referential, feeding on the model's own outputs, a failure mode researchers call "model collapse", which compounds errors over successive generations. In essence, the AI begins to educate itself in isolation.
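The self-referential loop can be simulated in a few lines. The sketch below is a toy illustration, not a claim about any real LLM pipeline: a simple Gaussian "model" is repeatedly refit to small samples drawn from its own previous generation, so each generation trains only on the outputs of the last. All numbers are synthetic, and the finite sample size stands in for scarce real-world negotiation data.

```python
import random
import statistics

random.seed(0)

mean, stdev = 0.0, 1.0   # generation 0: the "true" model
samples_per_gen = 50     # small sample, mimicking scarce real data

spreads = [stdev]
for generation in range(20):
    # Draw a finite sample from the current model ...
    sample = [random.gauss(mean, stdev) for _ in range(samples_per_gen)]
    # ... then refit the model to its own outputs, with no fresh data.
    mean = statistics.fmean(sample)
    stdev = statistics.stdev(sample)
    spreads.append(stdev)

# With no external data, estimation noise accumulates generation after
# generation, and the model's spread drifts away from the true value of 1.0.
print(f"spread after 20 self-trained generations: {spreads[-1]:.3f}")
```

The same dynamic is why closed-door negotiation data matters: without a steady supply of genuinely new human examples, each training round amplifies the quirks of the last.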
So, let me ask you: would you be willing to share the details of your recent negotiations with an AI company, even for a small compensation? And more importantly, do you have the time to do so?
One more point: Experienced negotiators can add substantial value to the development and adoption of AI-powered negotiation tools. Their insights help ensure more accurate training, realistic scenarios, and responsible use, bridging the gap between raw technology and real-world human dynamics.
I’ll be sharing more thoughts on the challenges and opportunities of AI in negotiation in future posts.