
AI fog hides the gem – explainable AI in finance

This note aims to focus the dialogue around AI on its central issue, explainable AI, which will be important in the regulatory discussions to come.

The arrival of generative AI has led to a rhetorical dust-up, as interested parties spar over the virtues and threats of artificial intelligence. Unions strike and pundits opine on what AI can and cannot do. The black-box nature of most AI models evokes a mixed feeling of magic and uncertainty. Nowhere is this debate more intense than in finance. AI is being applied in a fog of models and promises, each claiming to be the most important. But one critically important AI talent is often obscured in this fog: the potential of AI to explain its own outcomes. That talent is essential to the broad acceptance of AI among regulators. In the era of GPT, famous for its hallucinations, trusting the model means explaining the model.

We often say that the past does not repeat exactly, but it rhymes. More formally, we call this pattern recognition, and it is a special talent of properly trained AI systems. People are fairly good at spotting patterns in simple data, but an AI can search an enormous volume of numbers, and of chatter, across many dimensions for patterns only it can see, and then explain them to people clearly. It can also help us understand how much risk we are carrying, drawing on all of the historical evidence rather than hallucinations. eBooleant's collaboration partner IndicatorLab offers investors prediction and “risk-explainable” analysis from its unique AI, a capability that stands out amid all of the fog about AI.
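
To make "explaining the model" concrete, here is a minimal sketch in Python of one widely used explainability technique, permutation importance: fit a model on market-style data, then measure which inputs its risk predictions actually depend on. The feature names and data below are invented for illustration, and this is not IndicatorLab's method; it simply shows the flavor of a model explaining its own outcomes.

```python
# A minimal sketch of "explainable" risk modeling: fit a model on
# synthetic market features, then ask which features drove its estimates.
# All data and feature names here are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Hypothetical explanatory features for a portfolio's next-day risk.
features = ["volatility_30d", "rate_spread", "news_sentiment", "volume_zscore"]
X = rng.normal(size=(n, len(features)))

# Synthetic target: risk driven mostly by volatility and sentiment, plus noise.
y = 1.5 * X[:, 0] - 0.8 * X[:, 2] + 0.1 * rng.normal(size=n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much
# the model's accuracy degrades -- a model-agnostic explanation of what it uses.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)

for name, mean, std in sorted(
    zip(features, result.importances_mean, result.importances_std),
    key=lambda t: -t[1],
):
    print(f"{name:15s} importance = {mean:.3f} +/- {std:.3f}")
```

Because permutation importance only needs predictions, the same approach works on any model, which is what makes it useful for turning a black box into a ranked, auditable explanation of the sort regulators can examine.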

#hedgefunds #AI #finance #investing #riskmanagement