ISS stopped providing its proxy recommendations data to academics post Covid. Glass Lewis never released its recommendations to academics anyway. No one in the governance universe seems to have noticed.
Most of you know that my co-authors and I crunch datasets related to governance and valuation for a living. We need relatively comparable data items across time to draw meaningful inferences about how intermediaries in capital markets behave. Imagine our surprise, then, when key data items supplied by such information intermediaries are yanked or simply disappear overnight. Scholars in the research community notice, complain under their breath, and find workarounds. I wonder whether the usual set of corporate monitors (institutional investors, short sellers, litigators, and regulators) is even aware of these sudden omissions. As a result, no one seems to ask how such omissions hamper the role that academic research can play in uncovering potential wrongdoing in capital markets. Here are two recent examples:
Proxy voting firms:
Proxy voting firms, as most of you know, provide recommendations to institutional investors on how they might want to vote on individual proxy proposals filed at companies. Institutional investors may not have the resources to analyze thousands of these proposals, especially if they hold the entire stock market via passive index vehicles.
There are essentially two large proxy voting firms, ISS (Institutional Shareholder Services) and Glass Lewis, which reportedly hold a combined 97% share of the market for such advice. ISS is reportedly the larger of the two. These advisors are in essence a duopoly, and Republicans have argued that they operate without much transparency or oversight. The House Financial Services Committee has been concerned about their influence on US corporate governance. I happened to be a witness at one such hearing in July 2023.
Voting recommendations on ISS databases:
After the hearing, I wanted to download proxy recommendations data to verify a few conjectures. I teamed up with my co-authors, Lubo Litov and Dhruv Aggarwal. However, we found that ISS has not shared its recommendations on proxy proposal voting since 2018. Other author teams suggest that ISS recommendations were still available at the beginning of the pandemic, or at least for the year 2020. So, at some point during the pandemic, these data were removed from the WRDS ISS Voting Analytics (Company Vote Results) US dataset, the platform that most academics use to conduct research. I could grudgingly understand this if it were driven by a need to sell an enhanced version of the ISS dataset and charge us higher fees. However, the data do not appear to be sold as a separate product either. The timing of the disappearance may or may not have coincided with the Republican pushback on proxy voting.
Perhaps unremarked but more interesting: ISS at least used to provide its recommendations to academics. Glass Lewis never did. I don’t know which is worse: complete radio silence, or yanking data that we academics used to get.
This has happened before with analyst names:
Alexander Ljungqvist, Chris Malloy and Felicia Marston wrote an amazing paper in 2006 in which they compared two snapshots of the entire I/B/E/S (Institutional Brokers Estimates System) analyst stock recommendations database, taken in 2002 and 2004 but each covering the same 1993-2002 period. They found nearly 20,000 changes of an unusual nature: the selective removal of analyst names from historic recommendations (“anonymizations”).
To step back a bit and explain: specific analysts issue recommendations on whether to buy, hold, or sell a stock. Capital market participants, including institutional investors, are known to pay attention to these recommendations. We need an archival record of who recommended what for a particular stock and whether the calls were good or bad with hindsight. These anonymizations matter because they essentially rewrite history!
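The snapshot-comparison method at the heart of the paper can be sketched in a few lines of pandas. The data below are entirely made up for illustration: assume two extracts of the same recommendation records, keyed by a shared record ID, with analyst names blanked out in the later snapshot for some records.

```python
import pandas as pd

# Hypothetical snapshots of the same recommendation records,
# keyed by a shared record identifier (illustrative data only).
snap_2002 = pd.DataFrame({
    "rec_id": [1, 2, 3],
    "analyst": ["A. Smith", "B. Jones", "C. Lee"],
    "rec": ["buy", "hold", "buy"],
})
snap_2004 = pd.DataFrame({
    "rec_id": [1, 2, 3],
    "analyst": ["A. Smith", None, None],  # names on records 2 and 3 removed
    "rec": ["buy", "hold", "buy"],
})

# Join the two snapshots on the record ID and flag "anonymizations":
# records whose analyst name was present in 2002 but missing in 2004.
merged = snap_2002.merge(snap_2004, on="rec_id", suffixes=("_02", "_04"))
merged["anonymized"] = merged["analyst_02"].notna() & merged["analyst_04"].isna()

print(merged.loc[merged["anonymized"], ["rec_id", "analyst_02", "rec_02"]])
```

With real I/B/E/S extracts the join key and column names would differ, but the logic is the same: any record that loses its analyst name between snapshots is a candidate anonymization.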
Back to the paper: anonymizations turn out to be pervasive and non-random. Bolder recommendations were more likely to be anonymized, as were recommendations from more senior analysts, Institutional Investor “all-stars,” and analysts who remained in the industry beyond 2002.
Abnormal stock returns following subsequently anonymized buy recommendations are significantly lower (by up to 11.0% p.a.) than those following buy recommendations that remain untouched, suggesting that particularly embarrassing recommendations are most likely to be anonymized. Analysts whose track records appear brighter due to anonymizations experience more favorable career outcomes over the 2003-2005 period than their track records and abilities would otherwise warrant.
Guess what happened next? I/B/E/S stopped providing the dataset linking analyst names to their recommendations. Clever researchers such as Kelvin Law at Nanyang Business School found a way to reverse engineer some of these anonymized IDs. That is good, except that the link between analyst names and recommendations should never have been yanked from academic view in the first place, in the interests of transparency and accountability to the investing public.
Why is all this a big deal?
Who, other than academics, can crunch large datasets? Who, other than academics, has the incentive to write about patterns that suggest opportunistic behavior by capital market intermediaries? Quants in large banks and hedge funds have the training to expose such behavior. But they don’t have the incentives to write anything in public lest they offend clients or business associates of the fund or the bank they work for.
Depriving academics of access to key data items in datasets that are in the thick of a regulatory or practical debate sets back informed public debate and policy making. As it is, we seem to live in a world where the political debate about contentious topics such as ESG and proxy advisors operates in an “evidence free” environment. One can theoretically make arguments in either direction (proxy advisors are conflicted/woke, or they provide valuable services, for example). But without looking at systematic, large sample data, it is basically impossible to come to any reasonable conclusion. This is especially true in our hyper-polarized environment, where these debates become partisan (Republicans have an incentive to believe proxy advisors are compromised simply because their “side” says so). Honest academics are perhaps the only ones with both the incentives and the ability to use this disappearing data effectively.
I hope governance activists and regulators become aware of the damage caused to public discourse when intermediaries pull sensitive data out of academic circulation.