What You'll Really Have to Know About Generative AI

August 30, 2023

The rate of innovation brought on by artificial intelligence in the last 12 months is enough to make your head spin.

ChatGPT has successfully passed professional and academic exams, including bar exams, medical licensing exams and college admissions assessments.

News organizations increasingly report that AI is automating many routine tasks and achieving significant efficiencies.

As a financial professional, you may wonder where AI leaves you and your career.

During my nearly 23-year career on the technology side of life insurance, I have seen many technological trends come and go. Each brought apprehension about how the new technology would disrupt the way we do business.

However, looking back on these years, I have not yet observed a technological trend that replaced a significant number of jobs, at least not industrywide, and not for the long term.

Generally, these trends tend to change job roles rather than replace them.

AI Vocabulary

To adapt to AI, you'll need to understand AI vocabulary, whether you apply the technology yourself or manage AI practitioners directly.

  • AI: Technology that gives computers the ability to learn to perform human-like processes without being directly programmed for these tasks.
  • Machine learning (ML): A subset of AI that involves a machine using data to learn new tasks.
  • Generative AI: Machine learning technology that gives computers the ability to learn how to generate new data, such as images, videos, audio files or text compositions.
  • Large language model (LLM): A generative AI system that has learned how to create text compositions by studying large sources of human language, such as Wikipedia.
  • Pre-training: Having an AI learn from a large, general language source before exposing it to specialized data related to specific tasks.

Famous AIs

ChatGPT is a well-known generative AI system that you can "chat" with.

The last three letters in its name are important.

The G stands for "generative," and the P stands for "pre-trained."

The T stands for "transformer" — a neural network design that transforms one type of unstructured data into another.

Transformer technology is the advance now driving the generative AI revolution.

ChatGPT is an LLM that can transform your prompt — text that you enter — into another batch of text: a response.
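To make that concrete, here is a minimal sketch of that text-in, text-out exchange using OpenAI's Python library; the model name is illustrative, and an API key is assumed to be configured.

```python
# A minimal sketch of a text-in, text-out exchange with an LLM, using
# OpenAI's Python library. Assumes the OPENAI_API_KEY environment
# variable is set; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any chat-capable model would do
    messages=[{"role": "user", "content": "Explain term life insurance in one sentence."}],
)

print(response.choices[0].message.content)  # the generated response text
```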

Other generative AI systems may work with different inputs and outputs. Stable Diffusion, for example, is a popular generative AI system that creates images in response to textual prompts.

Describe an idea in words, and Stable Diffusion will make a picture based on those words.
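As a rough sketch, the same idea in code, using the open-source diffusers library and a public Stable Diffusion checkpoint (the checkpoint name and GPU assumption are illustrative):

```python
# A rough sketch of text-to-image generation with Stable Diffusion via
# Hugging Face's diffusers library. Assumes a GPU is available; the
# checkpoint name is one public option among several.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("cuda")  # move the model to the GPU

image = pipe("a lighthouse on a cliff at sunset, oil painting").images[0]
image.save("lighthouse.png")  # the generated picture
```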

Other transformers work in reverse, transforming an image into a textual caption that describes that image.

AI Literacy

With those basics out of the way, here are three concrete skills that insurance professionals like you need to succeed in this new world of generative AI.

1. Prompt Engineering

I've used the term "prompt" a few times to describe the text you give the generative AI algorithm.

Creating these prompts is called prompt engineering, and it is rapidly becoming a sought-after AI skill.

As an insurance professional, you may see electronic health records, or EHRs, from many sources and vendors.

Suppose your task is to extract and standardize certain vital signs from this data.

To do this, you might construct a prompt as follows:

Your objective is to extract the most recent (by date) body temperature, pulse rate, respiration rate and blood pressure from the health record described between the brackets. Convert all values to metric. If you cannot find a value, return null for that value. [health record data]

The response should be a list of the most recent values for these vital signs in metric units.

This prompt could be further refined; you could specify exactly how the individual values are delimited and identified.

Additionally, you could specify the exact unit for each.

As you get better at prompt engineering, you can reduce the number of errors made by ChatGPT or other LLMs.

Using automation, you could now run this prompt over a large number of EHRs and output the results to a database.
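As a rough sketch, that automation might look like the Python below. The model name, file layout and JSON keys are my own illustrative assumptions, and note how the prompt has been refined, as suggested above, to name the exact keys and units expected.

```python
# A sketch of running the vitals-extraction prompt over many EHRs and
# writing the results to a database. Model name, file layout and JSON
# keys are illustrative assumptions, not a production design.
import json
import sqlite3
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT_TEMPLATE = (
    "Your objective is to extract the most recent (by date) body temperature, "
    "pulse rate, respiration rate and blood pressure from the health record "
    "described between the brackets. Convert all values to metric. Respond "
    "with only a JSON object using the keys temperature_c, pulse_bpm, "
    "respiration_rpm and blood_pressure_mmhg. If you cannot find a value, "
    "return null for that value. [{record}]"
)

def extract_vitals(record_text: str) -> dict:
    """Send one EHR through the LLM and parse the JSON it returns."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(record=record_text)}],
    )
    return json.loads(response.choices[0].message.content)

conn = sqlite3.connect("vitals.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS vitals (record_id TEXT, temperature_c REAL, "
    "pulse_bpm REAL, respiration_rpm REAL, blood_pressure_mmhg TEXT)"
)

# One plain-text EHR per file in an ehr_texts folder (illustrative layout).
for path in Path("ehr_texts").glob("*.txt"):
    vitals = extract_vitals(path.read_text())
    conn.execute(
        "INSERT INTO vitals VALUES (?, ?, ?, ?, ?)",
        (path.stem, vitals["temperature_c"], vitals["pulse_bpm"],
         vitals["respiration_rpm"], vitals["blood_pressure_mmhg"]),
    )

conn.commit()
```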

2. Validating Results and Flagging Hallucinations

Ideally, the EHR prompt that we just developed will always get the appropriate data and return it to you. However, results from LLMs are not always reliable.

LLMs can sometimes return incorrect results or fabricate a result.

When an LLM makes up a result, the LLM is said to be "hallucinating" — another important generative AI term.

Hallucination can be particularly common when data is either obscure or missing.

Consider what happens if the data our prompt is supposed to extract is missing from the EHR.

Similarly, the EHR may not be clear enough for the LLM to find all the data you seek.

In cases where the information is missing, unpredictable results or hallucinations may easily occur.

It is always important to specify how to handle missing data in your prompt. As you can see, the earlier prompt requests the value null for missing data.

Usually, including such a request as part of your prompt will prevent many errors.

However, the LLM may still attempt to make up a missing value.

Validating the results of LLMs is an important skill and requires a cycle of checking results and refining your prompt.
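As a sketch, one simple validation pass applies range checks to the extracted vitals and flags anything implausible for human review. The acceptable ranges below are illustrative placeholders, not clinical guidance.

```python
# A sketch of validating LLM-extracted vitals with simple range checks.
# The bounds are illustrative placeholders, not clinical guidance.
VALID_RANGES = {
    "temperature_c": (30.0, 45.0),
    "pulse_bpm": (20.0, 250.0),
    "respiration_rpm": (4.0, 60.0),
}

def validate_vitals(vitals: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passed."""
    problems = []
    for field, (low, high) in VALID_RANGES.items():
        value = vitals.get(field)
        if value is None:
            continue  # null is the requested answer for missing data
        if not isinstance(value, (int, float)) or not low <= value <= high:
            problems.append(f"{field}={value!r} is outside {low}-{high}")
    return problems

# A temperature of 98.6 suggests the LLM skipped the Fahrenheit-to-Celsius
# conversion, so this record gets flagged for review.
issues = validate_vitals({"temperature_c": 98.6, "pulse_bpm": 72, "respiration_rpm": 16})
if issues:
    print("Flag for human review:", issues)
```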

Hallucinations often occur when your prompt is not specific enough.

As you work with LLMs and validate results, you will learn additional techniques to refine your prompts and decrease hallucinations.

Once you have performed sufficient validation, you may be ready to trust your LLM with more responsibility.

3. Chunking Data for LLMs

When you present your prompt to an LLM, the prompt is automatically converted to tokens.

Tokens are parts of words, usually a few characters in length, that are the smallest units of data that an LLM deals with.

LLMs string tokens together to form words and sentences.

The input prompt, output response, and any information the LLM remembers between prompts must all fit into a token buffer.

This token buffer, often called the context window, has a maximum size, which can range from a couple of thousand tokens to tens of thousands, depending on the type of LLM.

Newer LLMs are increasing the size of their token buffers.

Because the token buffer is shared between your prompt, the output, and any temporary memory, managing the size of the prompt is very important.
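As a sketch, you can check how much of the buffer a prompt will consume before sending it, using OpenAI's open-source tiktoken library:

```python
# A sketch of counting tokens with OpenAI's tiktoken library, to check
# that a prompt will fit in the model's token buffer before sending it.
import tiktoken

encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")

prompt = "Extract the most recent blood pressure from the record below."
tokens = encoding.encode(prompt)

print(len(tokens), "tokens")        # how much of the buffer this prompt uses
print(encoding.decode(tokens[:3]))  # tokens decode back to word fragments
```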

You might ask the LLM a general question about human anatomy or insurance law, and the LLM will give you a detailed answer.

This is because the LLM was already trained on this information.

However, you may also wish to ask the LLM a question about your company's underwriting manual.

Such a prompt becomes difficult to handle, because your question and the entire underwriting manual will not simultaneously fit into the token buffer.

You have two ways to solve this problem.

The first option is to fine-tune the LLM on your underwriting manual.

For a document that does not change often and will see a great deal of use, fine-tuning can make a lot of sense.

To fine-tune, you take an already trained LLM and provide further training on proprietary company information — information that was likely not in the large body of information the LLM was originally trained on.

This extra training requires a significant amount of computing power; however, for commonly used company documents, fine-tuning can be helpful.
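As a sketch, hosted fine-tuning through OpenAI's API involves uploading a file of example prompts and responses, then starting a tuning job; the file name and base model below are illustrative assumptions.

```python
# A sketch of starting a hosted fine-tuning job with OpenAI's API.
# Assumes underwriting_examples.jsonl is a prepared file of example
# conversations drawn from the underwriting manual (illustrative name).
from openai import OpenAI

client = OpenAI()

# Upload the training examples (JSONL, one example conversation per line).
training_file = client.files.create(
    file=open("underwriting_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Continue training an already trained base model on the new examples.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)

print(job.id)  # poll this job until the tuned model is ready to use
```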

The second option is to break your large document into chunks.

The chunks must be small enough to fit into the token buffer with the prompt and anticipated output.

Learning how to properly chunk data for different document types is a critical skill to master.

One common approach is to design a prompt that submits the same question with each chunk but requests a response of NONE if the chunk contains no useful information.

Then, use another prompt to summarize all the answers that did not result in NONE.
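As a sketch, that chunk, question and summarize pattern might look like the following; the chunk size, file name and prompt wording are illustrative assumptions rather than tuned values.

```python
# A sketch of the chunk-question-summarize pattern described above.
# Chunk size, file name and prompt wording are illustrative assumptions.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

def ask_llm(prompt: str) -> str:
    """Send one prompt to the LLM and return its text response."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def chunk_text(text: str, chunk_chars: int = 8000) -> list[str]:
    """Naively split a long document into buffer-sized pieces."""
    return [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]

manual = Path("underwriting_manual.txt").read_text()  # illustrative file
question = "What does the manual say about applicants with type 2 diabetes?"

# Ask the same question of every chunk, allowing a NONE answer.
answers = []
for chunk in chunk_text(manual):
    answer = ask_llm(
        f"Answer the question using only the text between the brackets. "
        f"If the text contains no useful information, respond with NONE. "
        f"Question: {question} [{chunk}]"
    )
    if answer.strip() != "NONE":
        answers.append(answer)

# Summarize all the useful answers into one response.
print(ask_llm("Summarize these answers into one response:\n" + "\n".join(answers)))
```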

Working With Your New AI Assistant

I believe that insurance professionals will work more with AI in the future.

Rather than eliminate jobs, these new technologies will prompt insurance professionals to improve their skill sets.

AI will likely become a sort of assistant to automate repetitive tasks, just as calculators, spreadsheets and databases have offered us new levels of efficiency.

The set of computer applications that we presently use will likely incorporate LLM technology to enhance efficiency.

Understanding how to engineer prompts, chunk data, and validate results will become critical skills for insurance professionals interacting with AI.


Jeff Heaton is vice president, data science, at Reinsurance Group of America and an adjunct instructor at the Sever Institute at Washington University in St. Louis.
