A sea change may be brewing in how institutional investors make their voting decisions: some investors have been using AI tools this proxy season to help them decide how to vote their holdings. Glass Lewis and ISS themselves have been evolving how they make voting recommendations as AI reshapes this space (as it has most areas of our lives). Beginning in 2027, Glass Lewis will no longer issue a standard benchmark recommendation, moving instead to AI-powered, client-specific perspectives that reflect an investor’s own investment philosophy rather than a single advisor-determined house view.
Against this backdrop comes this new paper from Glass Lewis entitled “AI and the Fiduciary Test: A Guide for Institutional Investors in Evaluating AI Proxy Voting Solutions.” The paper makes three arguments:
- Proxy voting has structural properties (fiduciary non-delegation, cross-jurisdictional data complexity, and the frequency of genuinely non-routine decisions) that make the quality of the underlying architecture a fiduciary matter, not merely a technical one.
- There are two fundamentally different approaches to AI in proxy voting, and the difference between them is material: one embeds human expertise, judgment, and oversight in the production process, while the other applies human review to outputs after the fact.
- There are five specific questions that any asset manager can use to distinguish between the two approaches and determine whether any AI governance solution is built for the accountability context that fiduciary obligations create.
The paper also offers these five key takeaways:
- The critical variable in any AI proxy voting solution is not the sophistication of the model. It is the quality of the institutional expertise, data governance architecture, and production accountability underneath it. An AI system can only produce outputs as defensible as the foundation it is built on.
- There is a meaningful operational difference between AI systems that embed human expertise in the production process and systems that apply human review to AI-generated outputs after the fact. That difference determines whether the human is exercising governance or performing quality control, and it is the difference that matters in a fiduciary context.
- Fiduciary accountability in proxy voting can’t be delegated to an AI system. The regulatory framework is increasingly explicit on this point. The EU AI Act’s human oversight requirements take effect in August 2026. Stewardship codes in the UK, EU, and Japan all require that voting decisions be explainable and defensible at the process level, not just the output level. The regulatory direction across every major institutional market is toward a higher standard of accountability.
- Investment-grade governance data is the product of governed architecture: data models defined by domain experts, normalization rules applied consistently across regulatory regimes, and traceability maintained from source document to final output. Not all AI extraction approaches are designed to this standard, and the difference is not always visible at the recommendation level. It becomes apparent when a specific voting decision is scrutinized.
- Institutional investors should assess five aspects of any AI proxy voting solution: the data governance architecture, the nature of human production roles, investment-grade data standards, the AI system’s designed scope, and exception-handling accountability.