
AI Strangelove


How I Learned to Stop Worrying and Love the Bomb [1]



“AI is neither artificial nor intelligent.” Microsoft’s Kate Crawford made this observation in an interview with the Guardian about her book Atlas of AI.[3] Crawford’s point was that artificial intelligence (AI) systems are often not as autonomous as they appear and involve humans working behind the scenes; for example, the people who source, categorize, update inventory, and pack and ship your box when you ask Alexa to order toilet paper. Despite the interwoven dependence on human input, AI is often presented as omnipotent, without visibility into how decisions or results are generated. It is clear, however, that heavily regulated industries, like finance, will not be able to let the mysterious workings of AI unfold without explanation for long.


I have observed many technological innovations over my more than 30 years of experience as a legal and compliance professional serving registered investment adviser clients. Each new technology poses the challenge of understanding its application within the investment advisory space, as well as how existing and emerging regulations apply to it. This can be intimidating to legal and compliance professionals who may not understand code, and it creates a need for ongoing education and collaboration with business and technology experts to develop compliance frameworks that effectively incorporate technological advancements.


Adapting to new technologies has been a necessary feature of my career. My early working days were filled with little technology. Then came the introduction of telex machines, dial-up modems, fax machines that used paper rolls, and dot-matrix printers fed by paper with perforated pin-feed edges. At the time, computers were not yet ubiquitous on every desk, and many word-processing systems came and went before Microsoft Word became the standard. Microsoft Excel was brand new (but has endured remarkably well). Employers did not have to worry about employees surfing the internet because the World Wide Web did not become widely available in office settings until the ‘90s (reaching 100 million users in 1996[4]). More recent decades saw the development of algorithmic trading and electronic trading platforms and, alongside them, new regulatory frameworks to address the evolving landscape.



In the 1970s television crime show Columbo, long before the era of modern forensics, the eponymous and unassuming Lieutenant Columbo of the Los Angeles Police Department’s Homicide Unit solved murders using observation, interrogation, and fundamental logic in a way that would have made Occam proud. Perhaps because I am old enough to remember Columbo, I did not immediately embrace AI. But I have found through perseverance that the same foundational skills of observation, interrogation, and logic can be applied to learning new technologies. And while the pace and impact of AI may be accelerated compared to earlier technologies, it is important to remind ourselves that we have been here before, compelled to evolve with technology.


Not surprisingly, this rapid acceleration of technology is being met with new regulatory frameworks that address its transformative nature. On July 26, 2023, the Securities and Exchange Commission (SEC) proposed new Rule 211(h)(2)-4 (the AI Rule) under the Investment Advisers Act of 1940 (Advisers Act), designed to regulate potential conflicts of interest associated with the use of predictive data analytics (PDA) and other AI technologies.[5] In short, under the AI Rule, a firm would be responsible for understanding its AI applications and how the technology functions, and then be able to explain its assessment of conflicts of interest. As I read the sections of the AI Rule that discuss so-called “black box” technologies (where an adviser may not be aware of how the technology has reached a certain result or recommendation), I focused on language that suggests explainability features be incorporated into AI programs. Although the AI Rule, in its current form, does not prescribe this, I believe that “Explainable AI” (XAI) will be an inescapable necessity, especially as AI becomes more autonomous.


The AI Rule

The AI Rule is designed to regulate potential conflicts of interest associated with the use of predictive data analytics and other AI technologies by registered investment advisers. The AI Rule is based on fundamental principles such as fiduciary responsibility, conflicts of interest, recordkeeping, and disclosure.


The AI Rule does not prescribe or limit technologies (the rule release states that the AI Rule is “technology agnostic”), or how advisers should manage and monitor their AI technologies, provided, of course, they comply with the AI Rule. Inherent in these requirements is the need for advisers and their Chief Compliance Officers (CCOs) to understand and be able to explain their firms’ uses of AI.


As is often the case with proposed rulemaking, the SEC followed the proposal with a sweep to determine how AI-based tools are being used by advisers. I read the rule release hoping that it would describe detailed use-case scenarios and corresponding compliance requirements. (Spoiler alert for anyone who has not read it: it does not.)


Following an overview of the AI Rule, this article considers a hypothetical AI investment research use case that employs XAI.


AI Rule Requirements

Under the AI Rule and related amendments to Advisers Act Rule 204-2 (recordkeeping requirements), if adopted, advisers would be required to (certain terms are defined in Key Defined Terms below):


  • evaluate any use, or reasonably foreseeable potential use, by the adviser of a covered technology in any investor interaction

  • identify any conflicts of interest related to such use that place the adviser’s interests ahead of those of investors

  • eliminate or neutralize the effects of such conflicts

  • maintain written policies and procedures reasonably designed to prevent violations of the AI Rule, including:

      ◦ a written description of the process for determining whether any conflict of interest results in an investor interaction that places the interest of the firm or a person associated with the firm ahead of the interests of the investor

      ◦ a written description of the process for determining how to eliminate, or neutralize the effect of, such conflicts of interest

      ◦ a review, documented in writing and conducted no less frequently than annually, of the adequacy of the policies and procedures established pursuant to the AI Rule and the effectiveness of their implementation, as well as a review of the written descriptions established pursuant to the AI Rule

  • comply with certain recordkeeping requirements related to the proposed rule (a schematic sketch follows this list), including:

      ◦ the date on which each covered technology was first implemented (i.e., first deployed) and materially modified

      ◦ the adviser’s evaluation of the intended use as compared to the actual use and outcome of the covered technology

      ◦ a description of any testing of the covered technology, including:

          ▪ the date when testing was completed

          ▪ the methods used to conduct the testing

          ▪ actual or reasonably foreseeable potential conflicts of interest identified as a result of the testing

          ▪ a description of any changes or modifications made to the covered technology that resulted from the testing and the reason(s) for those changes

          ▪ any restrictions placed on the use of the covered technology as a result of the testing

      ◦ written documentation of the determination as to whether there was a conflict of interest

      ◦ written documentation evidencing how the effect of any conflict of interest has been eliminated or neutralized, including a record of the specific steps taken by the adviser

      ◦ written policies and procedures, including any written descriptions adopted and implemented (i.e., desk procedures)

      ◦ a record of any disclosures provided to investors regarding the adviser’s use of covered technologies, including, if applicable, the date the disclosure was first provided and updated

      ◦ records of each instance in which a covered technology was altered, overridden, or disabled, the reason for such action, and the date thereof
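To help visualize how these recordkeeping elements might fit together, here is a minimal sketch of a record schema in Python. It is illustration only: the class and field names are my own hypothetical choices, not anything prescribed by the proposed rule or the Advisers Act.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class TestingRecord:
    """One round of testing of a covered technology (hypothetical schema)."""
    completed_on: date
    methods: list[str]                # e.g., back-testing, shadow deployment
    conflicts_identified: list[str]   # actual or reasonably foreseeable
    changes_made: list[str]           # modifications and their reasons
    restrictions_imposed: list[str]   # usage limits resulting from testing


@dataclass
class CoveredTechnologyRecord:
    """Recordkeeping elements suggested by the proposed AI Rule amendments."""
    name: str
    first_implemented: date                                   # first deployed
    material_modifications: list[tuple[date, str]] = field(default_factory=list)
    intended_vs_actual_use: str = ""
    testing: list[TestingRecord] = field(default_factory=list)
    conflict_determinations: list[str] = field(default_factory=list)
    mitigation_steps: list[str] = field(default_factory=list)   # eliminate/neutralize
    investor_disclosures: list[tuple[date, str]] = field(default_factory=list)
    override_log: list[tuple[date, str]] = field(default_factory=list)  # altered, overridden, or disabled, with reason
```

A structure along these lines would make it straightforward to produce the dated, per-technology documentation the proposed amendments contemplate.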


Key Defined Terms

Under the AI Rule, a conflict of interest would exist when an adviser “uses a covered technology that takes into consideration an interest of the [adviser or person associated with the firm]” and applies only when an adviser uses covered technology in an investor interaction.


The term “covered technology” would mean “an analytical, technological, or computational function, algorithm, model, correlation matrix, or similar method or process that optimizes for, predicts, guides, forecasts, or directs investment-related behaviors or outcomes.”[6]


The term “investor” would include a client or prospective client, and any current or prospective investor in a pooled investment vehicle (e.g., a fund) advised by the adviser. [7]


The term “investor interaction” would include engaging or communicating with an investor, including by exercising discretion with respect to an investor’s account (this includes investing).[8]


The term “conflicts of interest” broadly includes any consideration of firm-favorable information used by covered technology in an investor interaction and makes the adviser responsible for determining which are actual or potential conflicts of interest. While all conflicts of interest should be documented and monitored, only actual conflicts of interest must be eliminated or neutralized.


It is important to note that advisers are required to identify any conflict of interest where covered technology “takes into consideration” an interest of the adviser or a person associated with the firm regardless of whether the covered technology in fact places the interests of the adviser and/or a person associated with the firm ahead of investors’ interests. Conflicts of interest that put an adviser’s interests or the interests of a person associated with the firm first must be neutralized or eliminated. Conflicts that do not place the interests of the adviser and/or a person associated with the firm first must still be disclosed to investors in sufficient detail so that the investor may provide its informed consent.
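To make the “takes into consideration” trigger concrete, the following is a minimal sketch of how a compliance reviewer might screen a model’s input features for firm-favorable data points. The feature names and the screening list are invented for illustration; an actual inventory would be tailored to the firm’s business.

```python
# Hypothetical screen: flag model inputs that "take into consideration"
# an interest of the adviser, per the proposed AI Rule's conflict trigger.
# The feature names below are invented for this illustration.
FIRM_INTEREST_FEATURES = {
    "product_fee_margin",       # revenue the firm earns on the product
    "proprietary_fund_flag",    # whether the security is a firm-affiliated fund
    "payment_for_order_flow",   # routing-revenue considerations
}

def screen_for_conflicts(model_features: set[str]) -> list[str]:
    """Return the model inputs that would trigger conflict analysis."""
    return sorted(model_features & FIRM_INTEREST_FEATURES)

flagged = screen_for_conflicts({"momentum_score", "proprietary_fund_flag", "pe_ratio"})
print(flagged)  # ['proprietary_fund_flag']
```

Any flagged feature would then be documented and analyzed to determine whether it places the firm’s interests first and must therefore be eliminated or neutralized.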


Principles-Based Approach

As stated in the AI Rule release, “[t]he proposal is designed to be sufficiently broad and principles-based to continue to be applicable as technology develops and to provide firms with flexibility to develop approaches to their use of technology consistent with their business model, subject to the over-arching requirement that they need to be sufficient to prevent the firm from placing its interests ahead of investor interests.”


A principles-based approach also gives the SEC room for regulatory interpretation. Take for example Advisers Act Rule 206(4)-7, also known as “the Compliance Rule.” On its face, the Compliance Rule appears straightforward and lists only three requirements for advisers:


  • adopt and implement written policies and procedures reasonably designed to prevent violations of the Advisers Act and related rules

  • review and document in writing, no less frequently than annually, the adequacy of the policies and procedures and the effectiveness of their implementation

  • designate a CCO responsible for administering the policies and procedures


Over the years, and over the course of many examinations of advisers following the implementation of the Compliance Rule, the SEC has made clear through deficiency findings and enforcement cases that complying with the Compliance Rule requires advisers and their CCOs to do much more than appoint a CCO and draft and review policies and procedures. The phrase “reasonably designed” is interpreted to mean that policies and procedures must be tailored to the adviser’s business and how the adviser conducts that business, including, for example, investment research, investment strategies, trade rationales, trade execution, marketing and other communications, and operational and technological processes.


While the Compliance Rule does not expressly require advisers to conduct risk assessments and maintain a conflicts-of-interest log, it is understood that such assessments and logs demonstrate that compliance programs, including a firm’s compliance monitoring and surveillance programs and its policies and procedures, are indeed tailored to the firm’s business.


The Compliance Rule requires advisers to review at least annually the adequacy of the policies and procedures and the effectiveness of their implementation. Although not stated in the Compliance Rule, advisers understand that maintaining a violations log supports their review of the adequacy of the policies and procedures and the effectiveness of their implementation. It is difficult to argue that policies and procedures are effective in the face of numerous violations. And although the Compliance Rule does not expressly require CCOs to draft a formal annual compliance report to document their compliance program reviews, written compliance reports are essential to substantiate that those reviews were in fact conducted.


Advisers should anticipate that the SEC will interpret the AI Rule broadly, using a principles-based approach, as the SEC has already demonstrated with its recent AI sweep conducted before the adoption of the AI Rule (see Recent AI Sweep below).


A Shift in Legal Burden

Under current law, the SEC has the burden of proving that an adviser has a conflict of interest that puts the adviser’s interest first. A notable change under the AI Rule is that an adviser would have the burden of proving that its AI technology does not pose such a conflict. This shift is understandable: as much as advisers have argued that complying with the conflict-of-interest requirements under the AI Rule would be challenging, it would be more challenging still for the SEC to determine whether a firm’s AI presents conflicts of interest. The adviser, as the creator of the code, is in a better position to analyze the processes and outputs used by its AI, especially those relating to machine learning models, which by their nature are opaque and difficult to analyze.


In conjunction with an open meeting of the SEC’s Investor Advisory Committee on June 6, 2024, which featured an AI panel discussion, the SEC wrote that “while the presence of conflicts of interest between firms and investors is not new, firms’ increasing use of these PDA-like technologies in investor interactions may expose investors to unique risks. This includes the risk of conflicts remaining unidentified and therefore unaddressed or identified and unaddressed. The effects of such unaddressed conflicts may be pernicious, particularly as this technology can rapidly transmit or scale conflicted actions across a firm’s investor base.” The bottom line is that the SEC expects advisers, as the AI creators, to identify and manage conflicts of interest because advisers are in the best position to do so.


The AI Rule would also create a strict liability standard for regulation and enforcement when AI inadvertently or incidentally creates a conflict that the adviser did not detect, regardless of whether the adviser anticipated or intended the AI to take into consideration an interest of the adviser.


Recent AI Sweep

Toward the end of 2023, the SEC initiated a sweep of certain advisers to determine how they are using AI-based tools and how they manage AI-related conflicts of interest. A sample AI sweep document request letter defined AI as follows:


AI is a term used to describe computer systems and software programs designed to simulate human intelligence to perform tasks, such as investment analysis and decision-making, given a set of human-defined objectives. AI models reach conclusions through reasoning and self-correct to improve analysis. AI programs may autonomously execute trading decisions or may assist staff in making trading decisions. AI may include, but is not limited to, unsupervised machine-learning, supervised machine learning, deep learning, reinforcement learning, natural language processing, and neural networks. AI encompasses the idea of machines mimicking human intelligence, whereas (non-AI) computer algorithms are the specific instructions that enable computers to perform tasks. Algorithms are a component of AI, used to implement various AI techniques and approaches.

Among other requests, advisers under the sweep were directed to produce:


  • all disclosure and marketing documents to clients where the use of AI by the adviser is stated or referred to specifically in the disclosure

  • a written description of all distinct AI-based models and AI techniques developed and implemented by the adviser since inception to manage client portfolios or make investment decisions and transactions

  • a list and description of all algorithmic trading signals generated by AI models, including, for each signal, all input data sources along with their vendor or, if generated by the adviser, the method of acquisition as well as primary data inputs

  • a list of all data sources utilized by the adviser’s AI systems, including the item name, description, source, manner of acquisition, and related trading or other strategy

  • a list of contracted data source providers utilized by the adviser and in-house alternative data sources, and a description of how each was obtained and is maintained (e.g., web scraping)

  • all written compliance and operational policies and procedures concerning the supervision of all AI systems utilized by the adviser

  • documents outlining how potential conflicts of interest related to AI outputs are managed

  • documents detailing contingency plans in case of AI system failures or inaccuracies

  • documentation on data security measures when using AI

  • reports on the AI models' performance over time and under various market conditions

  • reports on any incidents where AI use raised any regulatory, ethical, or legal issues

  • a list and description of all data acquisition errors and/or algorithmic modifications made due to data acquisition errors (e.g., web scraping problems), including the date and reason for each adjustment


Although the AI Rule has not yet been adopted, a regulatory basis already exists for each item requested above, including Advisers Act Rule 206(4)-1 (the Marketing Rule), Rule 204-2 (the Recordkeeping Rule), and Rule 206(4)-7 (the Compliance Rule). For example, the SEC has expressed concern about “AI Washing,” where firms overstate their AI capabilities in marketing materials, which constitutes a material misrepresentation under existing regulations. A recent article on Law.com cited regulators on a panel hosted at Berkeley Law as saying that “AI enforcement sweeps are reining in hucksters, not innovation.”[9]


The Black Box

One of the issues discussed in the AI Rule release concerns so-called “black box” technologies, where an adviser may not be aware of how the technology has reached a certain result or recommendation, or of the use and possible corruption of data used by AI technologies.[10] While the AI Rule on its face is not prescriptive about the management of black box technologies, the release “suggests”:


“[I]f a firm is concerned that it may not be possible to determine the specific data points that a covered technology relied on when it reached a particular conclusion, and how it weighted the information, the firm could build ‘explainability’ features into the technology in order to give the model the capacity to explain why it reached a particular outcome, recommendation, or prediction. By reviewing the output of the explainability features, the firm may be able to identify whether use of the covered technology is associated with a conflict of interest. Developing this capability would require an understanding of how the model operates and the types of data used to train it.” [11]

The rule release goes on to confirm that the AI Rule would indeed apply to black box technologies, and that under the AI Rule advisers “would only be able to continue using them where all requirements of the proposed conflicts rules are met, including the requirements of the evaluation, identification, testing, determination, and elimination or neutralization sections.”


It would be difficult to interpret this any way other than that advisers who use black box technologies should plan to embed explainability features in their AI to satisfy the AI Rule’s requirements.
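What might such an explainability feature look like in practice? One model-agnostic approach is permutation importance, which estimates how heavily a model relies on each input by measuring how much its accuracy degrades when that input’s values are randomly shuffled. The sketch below uses scikit-learn on a toy model; the feature names and data are assumptions for illustration, not a prescribed method.

```python
# Minimal sketch of one "explainability feature": permutation importance,
# which approximates the weight a model gives each input data point.
# The features, data, and model here are toy assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["earnings_surprise", "price_momentum", "analyst_revisions"]
X = rng.normal(size=(500, 3))
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.1, size=500)  # toy signal

model = GradientBoostingRegressor().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Report each input's estimated contribution, largest first.
for name, imp in sorted(zip(features, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name:20s} importance={imp:.3f}")
```

A reviewer could compare a report like this against the firm’s conflict inventory to see whether any firm-favorable data point is driving outcomes.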


Explainable AI (XAI)

In simple terms, XAI involves the ability to comprehend and describe the workings of an AI program using various processes and methods, including standard and ad hoc reporting and/or an interrogation interface. The ability to query AI can promote transparency, trust, and confidence among users, programmers, supervisors, compliance personnel, and regulators that the technology is functioning as designed. XAI provides actionable information to diagnose and debug programs, to evaluate and adjust data inputs and the weights given to them, and to identify decay and degradation in the system and its components. In my view, XAI should be applied to every constituent of the AI program, including each data component, algorithmic program, and machine learning process.
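As a toy illustration of an interrogation interface, consider a recommendation object that carries its own structured rationale, so a user can ask after the fact why the system acted as it did. The names, tickers, and weights below are hypothetical.

```python
# Toy "interrogation interface": every recommendation carries a structured
# rationale that a portfolio manager or compliance officer can query later.
from dataclasses import dataclass

@dataclass
class Recommendation:
    ticker: str
    action: str                   # "buy" / "sell" / "hold"
    rationale: dict[str, float]   # data point -> weight in the decision

    def explain(self) -> str:
        """Answer 'why did you do that?' with the weighted drivers."""
        drivers = ", ".join(
            f"{k} ({v:+.2f})"
            for k, v in sorted(self.rationale.items(), key=lambda t: -abs(t[1]))
        )
        return f"{self.action.upper()} {self.ticker}: driven by {drivers}"

rec = Recommendation("XYZ", "buy",
                     {"earnings_surprise": 0.42, "price_momentum": 0.18,
                      "sell_side_sentiment": -0.05})
print(rec.explain())
# BUY XYZ: driven by earnings_surprise (+0.42), price_momentum (+0.18), sell_side_sentiment (-0.05)
```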


The European Union’s General Data Protection Regulation (GDPR) already includes provisions related to automated decision-making.[12] In addition, the European Union (EU) Artificial Intelligence Act, the first comprehensive framework for regulating AI systems across the EU, was published on July 12, 2024, entered into force across all 27 EU member states on August 1, 2024, and becomes enforceable for most provisions on August 2, 2026.


The importance of XAI has been recognized by other entities, including the Defense Advanced Research Projects Agency (DARPA), the central research and development organization of the United States Department of Defense, in connection with its development of AI technologies, including AI systems used by warfighters.[13] A 2018 DARPA project concluded that “the effectiveness of [AI] systems is limited by the machine’s current inability to explain their decisions and actions to human users,” and that XAI will be essential for users “to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.” According to the DARPA project, XAI should answer questions such as: “Why did you do that? Why not do something else? When do you succeed? When do you fail? When can I trust you? How do I correct an error?”


Hedge Fund ABC Hypothetical Investment Research Use Case

This section considers a basic investment research use case in the context of a hypothetical global, multi-strategy hedge fund, including potential applications and considerations for individual portfolio management teams and at the firm level.


Hedge funds use a variety of fundamental, technical, and quantitative (algorithmic) tools to pursue investment strategies.[14] Fundamental analysis focuses on estimating the intrinsic value of a company by analyzing financial statements, the quality of management, and industry trends, among other factors. Fundamental research is detailed and time-consuming. Technical analysis focuses on detecting market trends and patterns, such as historical price and volume data, in search of buying and selling opportunities based on momentum. Technical analysts believe that stock prices reflect all available information about a company, including investor sentiment. Quantitative analysis uses mathematical models and statistical analysis to exploit certain market inefficiencies, such as undervalued stocks.
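To ground the technical and quantitative categories, here is a deliberately simple sketch of a momentum-style signal of the kind technical analysts use: a moving-average crossover. The window lengths and the toy price series are arbitrary choices for the illustration, not a recommendation.

```python
# Illustrative technical signal: a simple moving-average crossover.
# Window lengths are arbitrary assumptions for this sketch.
def crossover_signal(prices: list[float], fast: int = 10, slow: int = 30) -> str:
    """Return "buy" when the fast moving average is above the slow one."""
    if len(prices) < slow:
        return "insufficient data"
    fast_ma = sum(prices[-fast:]) / fast
    slow_ma = sum(prices[-slow:]) / slow
    return "buy" if fast_ma > slow_ma else "sell"

prices = [100 + 0.3 * i for i in range(60)]   # toy upward-trending series
print(crossover_signal(prices))               # buy
```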


From gumshoe (Columbo-esque) fundamental analysis to technical and quantitative analysis, all three types of analyses require information and data to form an investment thesis and make investment decisions.


Hedge Fund ABC hypothetical assumptions[15]:


  • Hedge Fund ABC obtains information and data from the following sources, among others:


subscribed sell-side research

boutique/subscribed research and reports

surveys

one-on-one meetings with company Management and Investor Relations

conferences, seminars, and industry events/dinners

expert network consultations

discussions with other buy-side firms

discussions with investment banks

financial statements

regulatory filings

Bloomberg

other vendors

press and other publicly available data

“big data” (i.e., subscribed data)


  • Hedge Fund ABC’s policies and procedures require investment personnel to:


seek pre-approval to use subscribed research and data

seek pre-approval to engage vendors

seek pre-approval for communications other than with sell-side personnel and certain public company personnel (Management and Investor Relations), some of which may require Compliance chaperoning depending on certain risk criteria

disclose communications other than communications with sell-side firm personnel

seek pre-approval to attend certain conferences, seminars, and industry events/dinners depending on the sponsor/host and/or whether the gathering will be widely attended (i.e., more than 30 distinct advisory firms in attendance)

disclose investment theses for companies leading into earnings announcements


  • Hedge Fund ABC uses AI to:


search for alternate research

record communications (when permissible)

summarize disclosures, notes, investment theses on a portfolio management team level and across the firm

evaluate data points to form conclusions, including investment recommendations for investment personnel to consider


  • Hedge Fund ABC’s investment personnel define investment research use cases working closely with some combination of:


internal and external programmers

vendors

Marketing, Legal, Compliance, IT, and Operations personnel


AI is challenging because its ability to gather data and perform processes exponentially outpaces the ability of humans to supervise and further develop the technology. The requirements outlined in Hedge Fund ABC’s policies and procedures are based on fundamental supervisory and regulatory considerations, including the need to confirm that information and data are lawfully obtained, used, and protected, and that the basis for investment recommendations is credible and does not violate the law (for example, by obtaining inside information or inadvertently sending signals to the market that may be interpreted as manipulative).


The SEC routinely requests records of communications (including emails, chats, and phone logs) when investigating potential insider trading. Since 2021, the SEC has been conducting a sweep of “off-channel” communications (such as text messages, iMessages, and WhatsApp) used by personnel of advisers, and has imposed hundreds of millions of dollars in fines on advisers and broker-dealers for violations of the Recordkeeping Rule.


In addition to confirming that information and data is lawfully obtained and used by AI, a central focus of surveillance programs, regulatory examinations, inquiries, and investigations alike is: What was the basis for the recommendation? Thus, there are two categories of concern, one relating to how the information was obtained and related issues, and the other relating to how the information was processed and the trustworthiness of AI output.


Assume Hedge Fund ABC’s AI program uses information and data that is programmed as well as information and data identified through machine learning. In considering an investment recommendation from AI, it would be critical for a research analyst and portfolio manager to understand what data was used or discarded and why, the weight that each data point was given and why, and the assumptions the program used to reach its conclusions and the bases for such assumptions, as just a few examples. Other supervisory staff (e.g., the Head of Equity, Head of Fixed Income, Head of Risk) would also want to understand why an investment decision was made. For example, XAI may explain that it discarded or gave a lower weight to a particular research note because it recognized hedging words, or because AI determined that the author of a research note did not have a “high batting average” in general or with respect to the particular company that was the subject of the research note.
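A minimal sketch of the weighting logic described above, assuming an invented hedging-word list and an arbitrary “batting average” threshold, might look like this:

```python
# Sketch of the note-weighting logic described above: down-weight a research
# note when it uses hedging language or its author has a weak track record.
# The word list, multipliers, and threshold are invented for illustration.
HEDGING_WORDS = {"might", "could", "possibly", "appears", "suggests"}

def note_weight(text: str, author_hit_rate: float) -> tuple[float, str]:
    """Return (weight, explanation) for a research note."""
    hedges = set(text.lower().split()) & HEDGING_WORDS
    weight, reasons = 1.0, []
    if hedges:
        weight *= 0.5
        reasons.append(f"hedging language detected: {sorted(hedges)}")
    if author_hit_rate < 0.55:          # author's historical "batting average"
        weight *= 0.5
        reasons.append(f"author hit rate {author_hit_rate:.0%} below threshold")
    return weight, "; ".join(reasons) or "no adjustments"

w, why = note_weight("Earnings could possibly beat estimates", author_hit_rate=0.48)
print(w, "-", why)
# 0.25 - hedging language detected: ['could', 'possibly']; author hit rate 48% below threshold
```

The point is not these particular heuristics but that each adjustment carries a human-readable reason that an analyst, supervisor, or examiner can interrogate.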


It seems certain that human intervention will be necessary for the foreseeable future. While AI is a powerful tool for aggregating and processing information and for predictive analysis, XAI is an even more important tool because it illuminates what is and is not working, so that AI programs can be diagnosed, corrected, and improved, and so that AI can learn from the implementation of human corrections and improvements.


AI, and especially XAI, could potentially offer business supervisors and other departments the ability to oversee and monitor functions across the firm, and to analyze and curate the best ideas more efficiently. XAI could also be an extremely powerful compliance program tool, which is one reason my way of thinking has turned around. I am now excited to “embrace the bomb.”


 

[1] Dr. Strangelove. Directed by Stanley Kubrick, Columbia Pictures, 1964.

[2] Special thanks to Andrea Woodruff for her insightful editing.

[3] Corbyn, Zoë, “Kate Crawford: ‘AI is neither artificial nor intelligent,’” The Guardian, June 6, 2021, at https://www.theguardian.com/technology/2021/jun/06/microsofts-kate-crawford-ai-is-neither-artificial-nor-intelligent

[4] ChatGPT was estimated to have reached 100 million monthly active users in January 2023, just two months after launch, “making it the fastest-growing consumer application in history, according to a UBS study.” Compare that to TikTok (9 months), Instagram (2-1/2 years), and the internet (7 years). Krystal Hu, “ChatGPT Sets Record for Fastest-Growing User Base,” Reuters, February 2, 2023, at https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/

[5] SEC Proposes New Requirements to Address Risks to Investors From Conflicts of Interest Associated With the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers, July 26, 2023, at https://www.sec.gov/newsroom/press-releases/2023-140

[7] Id. at 49-50

[8] Id. at 50

[11] Id. at 63-64 and footnotes 146-149.

[12] GDPR Article 22 provides indirect control over the use of AI on the basis that AI systems fall within the broad definition of “processing,” which includes activities conducted on personal data and data storage, and AI systems are frequently used to make automated decisions that impact individuals.

[13] Explainable Artificial Intelligence (XAI), DARPA, Project Lead Dr. Matt Turek at https://www.darpa.mil/program/explainable-artificial-intelligence

[14] For example, long/short equity, credit, global macro, market neutral, value, relative value, event driven, merger arbitrage, derivatives, leverage, quantitative, and distressed investing.

[15] This list is only for illustration purposes and does not represent all potential sources of information and data used by hedge funds. In addition, some advisers may have different compliance requirements.


 

EXPERT INVOLVED


Natasha Kassian

Natasha Kassian has over 30 years of buy-side experience providing legal and compliance guidance to registered investment advisers across a broad range of products, asset classes, investment strategies, and jurisdictions. She has served in roles including General Counsel and Chief Compliance Officer for firms that manage hedge funds, private equity funds, venture funds, retail and institutional separately managed accounts, mutual funds, and exchange-traded funds.

Ms. Kassian has extensive experience responding to examinations and inquiries by regulators in the US, EU, EMEA, and APAC, and is a recognized investment management compliance expert.



Learn more about SEDA at sedaexperts.com

