Artificial Intelligence (AI) and actuarial services

Historically, actuaries have been quick to adopt, or even develop, tools and technologies that facilitate the rapid generation of answers to questions on both qualitative and quantitative matters. The intense publicity surrounding the recent release of publicly accessible artificial intelligence tools, such as OpenAI’s ChatGPT and Google’s Bard, raises the question of how these tools might be applied to the work of consulting actuaries.

The opportunity presented by these new tools lies in their ability to synthesize unique content by drawing on many sources on the Internet. Most actuaries are now accustomed to using the Internet to locate information that informs their conclusions and reports. Now, these new AI-based tools can find answers to specific questions and write reports on topics of interest far more quickly and in greater depth than could realistically have been envisioned even in the recent past.

Testing the tools

To test the capabilities of these new tools, we looked at a typical request from a pension plan sponsor for a simple but complete summary of what ERISA Section 4010 is and what must be done to comply with it.

The plan consulting actuary could sit down and write a memorandum on the subject, a task that would likely take at least an hour of the (billable) time of a knowledgeable consultant, plus time for peer review. Alternatively, the actuary could ask ChatGPT to respond to the question, “When does a pension plan administrator have to submit a Section 4010 filing, and how is that done?”

That elicited a 300-word response in a matter of seconds, starting with this paragraph:

“A Section 4010 filing is required by the Pension Benefit Guaranty Corporation (PBGC) for certain pension plans that are subject to the Employee Retirement Income Security Act (ERISA) of 1974. The Section 4010 filing requirement pertains to underfunded plans, specifically those where the plan sponsor has one or more underfunded plans with a total underfunding exceeding $15 million. This filing provides the PBGC with financial and actuarial information about the plan and its sponsor, helping the agency monitor the health of these plans and ensure the financial security of participants.”

The response went on to outline the timing and process of submitting the filing, including the specific information required, and concluded with the PBGC’s admonition to file on time and avoid penalties. The potential efficiencies that may be achieved from the use of AI-based tools are well illustrated by this example. Still, some of the press coverage of recently released tools has focused on the potential threat they may pose to certain types of employment.
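For actuaries who want to make this kind of query repeatable, the question can also be submitted programmatically rather than typed into the chat interface. The sketch below is a minimal, hypothetical example using OpenAI’s Python client; the model name and prompt wording are illustrative assumptions, not a recommendation, and the output would of course require the same review as any other draft.

```python
# Minimal sketch: submitting the Section 4010 question programmatically.
# Assumes the `openai` Python package (v1.x) is installed and an API key
# is available in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # illustrative assumption; substitute the model in use
    messages=[
        {
            "role": "user",
            "content": (
                "When does a pension plan administrator have to submit "
                "a Section 4010 filing, and how is that done?"
            ),
        }
    ],
)

# Treat the result as a draft to be peer reviewed, not a finished memo.
print(response.choices[0].message.content)
```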

Drawbacks

At the same time, it is essential that actuaries, like members of any other profession, be aware of the shortcomings these tools presently possess.

Inaccuracies and errors: The new AI tools are quite capable of being wrong. In some cases, the errors in their output are obvious; in others, the inaccuracies are more subtle. Responses generated by the tools should be thoroughly reviewed before they are accepted or used.

To get a sense of their potential for error, we can consider that late last year a group of professors at the University of Minnesota Law School decided to find out whether ChatGPT could pass the final exams given in courses taken by first-year law students. The good news for ChatGPT is that it passed those exams. The bad news is that it did so by such small margins that a human student achieving the same scores would have been placed on academic probation.

An updated version of ChatGPT was released in mid-March by OpenAI, accompanied by a great deal of information about how much better than its predecessor the new version performed on standardized tests. This improvement matters because of its implications for those who use the tool for legitimate purposes but must remain alert to its penchant for error.

Behind the times: The new AI tools have stated limits on their ability to consider recent events. For ChatGPT, this is explicitly disclosed: “ChatGPT’s training data cuts off in 2021. This means that it is completely unaware of current events, trends, or anything that happened after its training.” Accordingly, at this writing, ChatGPT will tell you that the reigning monarch in the United Kingdom is Elizabeth II, and that while it does not know the exact provisions of the SECURE 2.0 Act, it can describe the important provisions of the predecessor legislation introduced in 2019.

Brute force: Obviously but importantly, the new AI tools are inanimate. They have little or no appreciation of nuance, of readers’ sensitivities to the presence or absence of certain matters in a response, or of the audience’s sophistication and ability to comprehend and act upon what it is told.

Models and standards: Finally, actuaries using AI tools in their work should remember that these tools are known as large language models. When using models in their work, actuaries are subject to the requirements of Actuarial Standard of Practice No. 56, Modeling. These requirements include developing a basic understanding of how the model works, testing its performance, and reviewing its results. In addition, the use of models developed by others in performing actuarial work must be disclosed under Section 4 of the Standard.

Here to stay

There is little question that the use of artificial intelligence tools will grow significantly in the years to come, as will the capabilities of these tools. Actuaries who take advantage of all they have to offer while bearing in mind their limitations and adhering to professional standards that apply to their use will be able to enhance the quality and efficiency of their work.