This new study – “The AI-Induced Disclosure Pressure Model and Empirical Evidence from MD&A Reporting” – is getting a lot of buzz, having been mentioned in several prominent blogs in our space and highlighted in this WSJ opinion piece by the study’s author, Hebrew University Business School Professor Keren Bar-Hava.
The study dissects the MD&A sections of 107 SEC filings and posits that the use of AI has changed the way analysts review MD&A. Professor Bar-Hava identifies three levels on which the “AI-induced disclosure pressure” operates (a rough code sketch of this kind of screening appears after the list):
- Exposure pressure. AI flags vague or evasive language. Companies feel compelled to sound confident, even when the outlook is uncertain.
- Competitive pressure. Algorithms benchmark tone across peer firms. If a competitor sounds stronger, you look weak by comparison.
- Reputational pressure. AI feeds analyst dashboards, investor platforms and news summaries. One poorly framed sentence can ripple fast.
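To make the mechanics concrete, here is a minimal sketch, in Python, of how this kind of screening might score and benchmark MD&A tone. Everything in it is an assumption for illustration: the word lists are tiny stand-ins for the financial sentiment dictionaries (such as Loughran-McDonald) that real tools typically use, and the scoring formula is not drawn from the study.

```python
import re

# Illustrative, abbreviated word lists. Real tools use lexicons with
# thousands of entries (e.g., the Loughran-McDonald financial dictionary).
POSITIVE = {"strong", "growth", "improved", "record", "confident", "robust"}
UNCERTAIN = {"might", "could", "may", "possible", "uncertain", "approximately"}

def tone_score(text: str) -> float:
    """Net tone per 100 words: positive hits minus uncertainty hits."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words)
    unc = sum(w in UNCERTAIN for w in words)
    return 100.0 * (pos - unc) / len(words)

def benchmark(our_text: str, peers: dict) -> list:
    """Rank our MD&A's tone against peer filings. This is the 'competitive
    pressure' mechanism: what gets flagged is relative tone, not absolute."""
    scores = {name: tone_score(text) for name, text in peers.items()}
    scores["our filing"] = tone_score(our_text)
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# Hypothetical one-sentence excerpts, for illustration only.
peers = {
    "Peer A": "We delivered record growth and remain confident in our outlook.",
    "Peer B": "Results might improve, although demand remains uncertain.",
}
for name, score in benchmark("Revenue could decline and margins may compress.", peers):
    print(f"{name:>12}: {score:+.1f}")
```

Under relative scoring like this, a cautious filing can look “weak” simply because a peer chose sunnier words, which is exactly the competitive pressure the study describes.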
As a warning to all of us who draft disclosure with an eye toward limiting liability, the study finds that the “most positive reports often came from the worst-performing firms.” In response, Dave Lynn posits in this blog: “It is a somewhat disturbing thought that we are moving down a path of having AI write our disclosures for analysis by AI. I feel that such a trend could only be welcomed by plaintiffs’ lawyers and SEC Enforcement lawyers. As Kevin LaCroix notes in his blog, ‘if a company is using AI to improve the way the company’s MD&A is scored under AI-driven analysis, the board must try to ensure that there is no gap between what’s said and what’s true.’”
With that disturbing trend in mind, and recognizing that we are now drafting with AI as one of several intended audiences, here are some practice pointers:
1. Write for Both Humans and Machines: MD&A disclosures should now be crafted with dual audiences in mind—human analysts and algorithmic readers. Be clear, consistent, and mindful of how tone and structure will be interpreted by AI tools as well as investors.
2. Treat AI as a De Facto Stakeholder: Consider AI systems as interpretive stakeholders. Their analysis can influence market sentiment, investor behavior, and regulatory risk—so write as though a machine will be your toughest reviewer.
3. Minimize Uncertainty Language: Watch your use of uncertainty terms (e.g., “might,” “could,” “possible”). These are strong signals to AI models that performance may be weak, and they’re statistically linked to lower profitability metrics.
4. Use Positive Tone Strategically—Not Excessively: Don’t overinflate optimism. The data shows that excessive positive tone may backfire, as markets and AI have become wise to performative cheerleading unbacked by fundamentals.
5. Don’t Obfuscate Through Complexity: Avoid excessive length or syntactic complexity. Modern NLP tools detect obfuscation patterns, so clarity is no longer optional; it’s essential for credibility. (A rough self-check script illustrating pointers 3 through 5 appears after this list.)
6. Embrace Standardization—But Preserve Authenticity: While aligning disclosures to industry tone norms may help with AI benchmarking, avoid boilerplate language that diminishes narrative richness. Balance clarity with meaningful, firm-specific content.
7. Monitor Your Disclosure Feedback Loop: Recognize that today’s disclosures train tomorrow’s AI models. The tone and style choices you make now can set future industry expectations—so lead wisely.
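Putting pointers 3 through 5 together, here is a rough pre-filing self-check, again a sketch rather than anyone’s production tool. The word lists are the same illustrative stand-ins used above, the thresholds are assumptions chosen for demonstration, and the readability test is a crude Gunning-Fog proxy that treats long words as “complex” instead of counting syllables.

```python
import re

UNCERTAIN = {"might", "could", "may", "possible", "uncertain", "approximately"}
POSITIVE = {"strong", "growth", "improved", "record", "confident", "robust"}

def self_check(text: str) -> list:
    """Flag drafting patterns an algorithmic reviewer is likely to score against."""
    words = [w.lower() for w in re.findall(r"[A-Za-z']+", text)]
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n = max(len(words), 1)
    flags = []

    # Pointer 3: uncertainty-term density per 100 words.
    unc = 100 * sum(w in UNCERTAIN for w in words) / n
    if unc > 1.5:  # illustrative threshold, not a calibrated value
        flags.append(f"high uncertainty language ({unc:.1f} per 100 words)")

    # Pointer 4: positive tone that outruns the fundamentals reads as cheerleading.
    pos = 100 * sum(w in POSITIVE for w in words) / n
    if pos > 3.0:  # illustrative threshold
        flags.append(f"tone may look performative ({pos:.1f} positive per 100 words)")

    # Pointer 5: crude Gunning-Fog proxy. Long sentences plus long words are
    # the classic obfuscation signal that NLP tools pick up.
    avg_sentence = len(words) / max(len(sentences), 1)
    complex_pct = 100 * sum(len(w) >= 9 for w in words) / n
    fog = 0.4 * (avg_sentence + complex_pct)
    if fog > 12:  # scores above ~12 are hard going for a general audience
        flags.append(f"readability is poor (Fog index ~{fog:.0f})")

    return flags

draft = ("We might possibly see growth, which could be strong, "
         "notwithstanding considerable macroeconomic uncertainties.")
print(self_check(draft))
```

Running a draft through even a toy screen like this before it goes out approximates what the algorithmic audience will do at scale; the point is the habit of pre-reading your own disclosure the way a machine will, not this particular script.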