Artificial Intelligence in Finance: Quo Vadis?

Fintech MK
Oct 14, 2021
Fintech Thursdays — Branka Hadzi-Misheva

The global financial sector is undergoing a period of significant change and disruption. Advances in technology are enabling businesses to fundamentally rethink the way in which they generate value and interact with their environment. This disruption has taken the umbrella term Fintech and it denotes all technologically enabled financial innovation that results in new business models, applications, processes, products, and services.

At the centre of this disruption are developments in information and internet technology, which have fostered new web-based services affecting every facet of today’s economic and financial activity (Bank for International Settlements, 2020). This activity creates enormous quantities of data: Statista (2021) estimates that the volume of data created, captured, copied, and consumed worldwide will rise to more than 180 zettabytes by 2025.

What is a zettabyte? It is a measure of storage capacity equal to 10²¹, or one sextillion, bytes. To put things in perspective, consider a quote from Shruti Jain, a Project and Program Manager at Cisco (https://blogs.cisco.com/author/shrutijain):

“If each terabyte in a Zettabyte were a kilometre, it would be equivalent to 1,300 round trips to the moon and back”

… and by 2025, at 180 zettabytes, we would have gone on roughly 234,000 such trips.
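The back-of-the-envelope arithmetic can be checked in a few lines of Python, assuming an average Earth–Moon distance of roughly 384,400 km (so about 768,800 km per round trip):

```python
# Sanity-check the zettabyte analogy: one zettabyte is 10**21 bytes,
# i.e. 10**9 terabytes, and each terabyte counts as one kilometre.
TB_PER_ZETTABYTE = 10**21 / 10**12        # 1e9 terabytes per zettabyte
ROUND_TRIP_KM = 2 * 384_400               # average Earth-Moon round trip

trips_per_zb = TB_PER_ZETTABYTE / ROUND_TRIP_KM
print(round(trips_per_zb))                # ~1,301 round trips per zettabyte

trips_2025 = 180 * trips_per_zb           # Statista's 180+ ZB forecast
print(round(trips_2025))                  # ~234,131, i.e. roughly 234,000
```

The figures line up with the quote: about 1,300 round trips per zettabyte, and roughly 234,000 trips for the 180 zettabytes forecast for 2025.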

The question at this point is: what should we expect for the future? Will the finance industry benefit from these massive quantities of data that societies are generating? On the one hand, the answer is a resounding yes. Financial service providers can use big data for a variety of use cases, ranging from customization of services and products to fraud detection and risk management. However, there is also cause for concern. The massive quantities of data we are generating stand in opposition to the key principles that define the history of computing and statistics: reduction and simplification. As elegantly put by Sonderegger (2013):

“From Archimedes to Newton, mathematicians sought to reduce the workings of the physical world to elegant equations. In the twentieth century, computing pioneers like Alan Turing and Claude Shannon saw that both textual and numerical information could be economically processed if reduced to binary bits. This brings us to the modern era, where software visionaries have spent the past 40 years reducing business processes to data models and application logic.”

We have done similarly in the world of statistics: over time, we have attempted to model economic trends, consumer behaviour, and financial crises, oftentimes compressing very complex patterns and non-linear dependencies into normality assumptions. Yet big data is not about simplification or reduction: it is about expansion. Hence, we might be ill-equipped to take full advantage of the potential offered by big data technologies.

This challenge notwithstanding, there is some good news on the horizon as well. Data volumes have surged hand in hand with the developments of specific techniques for their analysis. Researchers in computer science and statistics have developed advanced techniques to obtain insights from large data sets.

This brings us to artificial intelligence (AI). AI is broadly defined as the use of computational tools to perform tasks that traditionally would require human intelligence (Joiner, 2018). Another term closely related to AI is machine learning (ML).

What is the difference? ML is about extracting knowledge from data. It is defined as a method of designing a sequence of actions to solve a problem, one that optimises automatically through experience and with limited or no human intervention (Sarker, 2021). Artificial intelligence, on the other hand, is broader in scope and is defined within the boundaries of what is feasible at the moment. For example, a few decades ago, chess playing was considered a skill that exclusively required human intelligence; nowadays, a chess-playing bot ships with almost every computer’s operating system. Hence, AI is more of a moving target.
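The idea of a model that “optimises automatically through experience” can be sketched in a few lines. The example below is purely illustrative: it fits a logistic regression to synthetic data standing in for historical loan records, using scikit-learn.

```python
# A minimal sketch of learning from experience: the model's parameters
# are fitted automatically from labelled examples, with no hand-written
# decision rules. The credit-default framing is an illustrative stand-in.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for historical records (features + outcome label)
X, y = make_classification(n_samples=1_000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)   # the "experience" step
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The contrast with rule-based programming is that nothing here encodes *how* to separate the classes; the fit procedure infers that from the data.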

What is certain is that both AI and ML have gained significant popularity within the past few decades. However, it is worth remembering that neither term is new. In fact, the AI field has experienced many fluctuations in the past. A few decades ago, there was significant hype surrounding AI, and prominent researchers from all over the world argued that human-level AI would very soon reach the plateau of productivity. However, unfulfilled expectations caused great disenchantment with the technology and led to a period of significantly reduced research interest and funding. Beginning in 2012, we started to see a shift back to AI, as deep learning marked significant progress in tasks that were impossible to execute with rule-based programming.

So, where are we now? To assess the potential of AI for the financial industry, we need to answer two main questions: (i) is AI merely hype, or does it have the potential to offer true value; and (ii) how compatible is the technology with the properties of the finance industry?

To answer the first question, we turn to the Gartner Hype Cycle, which provides a visual representation of the maturity and adoption of technologies and applications, and of how relevant they are in solving real business problems.

Figure 1. Gartner Hype Cycle 2021. Available at gartner.com

Looking at the Hype Cycle for 2021, the conclusion is clear — Gartner identifies AI as an inescapable technology. Hence, the answer to our first question is: AI has true potential to fundamentally change how the world works.

The next question becomes: how compatible are AI systems and the financial sector? Given the high volume of accurate historical records and the quantitative nature of problems in the finance world, very few industries are better suited for artificial intelligence.

But if both things are true (i.e., AI can bring significant value and it is well suited to financial problem sets), the question becomes: why have we not seen wide adoption of AI systems in finance? Where is the massive progress?

Well, first and foremost, deploying AI systems in any practical context is very difficult. But probably one of the most relevant barriers to wider adoption of AI in the financial sector relates to the concept of explainability. AI solutions are often referred to as “black boxes” because it is typically difficult to trace the steps the algorithm took to arrive at its decision. These black-box models are built on very complex logic, often with thousands of parameters interlinked through nonlinear dependencies, which means that even the people who developed the models can find it very difficult to understand how the variables jointly produce the final prediction. This challenge is particularly relevant for European financial intermediaries, as they are subject to the General Data Protection Regulation (GDPR), which provides a right to explanation, enabling users to ask for an explanation of automated decision-making processes affecting them. Hence, explainability is without a doubt the name of the game.

As a result of these rising concerns, the concept of eXplainable AI (XAI) emerged, introducing a suite of techniques that attempt to explain to users how a model arrived at a certain decision. Namely, when ML models do not meet the criteria required to declare them explainable or transparent, a separate method must be developed and implemented to explain the inner workings of the underlying black box (Arrieta et al., 2020). This is the purpose of post-hoc explainability methods, which communicate understandable information about how an already developed model arrived at a certain decision for a specific set of inputs.
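One common model-agnostic post-hoc method is permutation feature importance: shuffle one feature at a time and measure how much the fitted model's score drops. The sketch below uses scikit-learn on a synthetic dataset; the data and the random-forest "black box" are illustrative stand-ins, not a prescribed XAI pipeline.

```python
# A minimal post-hoc explainability sketch: permutation feature importance
# treats the fitted model as a black box and measures the score drop when
# each feature's values are randomly shuffled. Larger drops indicate
# features the model relies on more heavily.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(black_box, X, y, n_repeats=10,
                                random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: mean score drop = {imp:.3f}")
```

Methods like this explain an already trained model from the outside, which is exactly the post-hoc setting described above: the model itself is left untouched.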

The need for explainable solutions marks the third wave of AI, which will define research efforts in the field over the next decade. Only through explainable and trustworthy solutions can we drive forward the application of AI-based solutions in finance. The greater the trust in AI, the more financial service providers will deploy it.

References:

Arrieta, Alejandro Barredo, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, et al. (2020). ‘Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI’. arXiv:1910.10045. Available at: http://arxiv.org/abs/1910.10045

Use of big data sources and applications at central banks. 2020 survey conducted by the Irving Fisher Committee on Central Bank Statistics (IFC). (2020). Bank for International Settlements Report. Available at: https://www.bis.org/ifc/publ/ifc_report_13.pdf

Volume of data/information created, captured, copied, and consumed worldwide from 2010 to 2025. (2021). Statista. Available at: https://www.statista.com/statistics/871513/worldwide-data-created/

Joiner, I.A. (2018). Artificial Intelligence. Emerging Library Technologies. Available at: https://www.sciencedirect.com/topics/social-sciences/artificial-intelligence

Sarker, I.H. (2021). Machine Learning: Algorithms, Real-World Applications and Research Directions. SN Computer Science 2, 160. Available at: https://link.springer.com/article/10.1007/s42979-021-00592-x

Sonderegger, P. (2013). Big Data At Work: The World Is Making A Digital Copy Of Itself. Forbes. Available at: https://www.forbes.com/sites/oracle/2013/09/09/big-data-at-work-the-world-is-making-a-digital-copy-of-itself/?sh=4b94c34d3c78

Author:

Branka Hadji Misheva

Senior researcher at ZHAW Zurich University of Applied Sciences, working on AI applications in finance, XAI methods, network models, and fintech risk management. She holds a PhD in Economics and Management of Technology from the University of Pavia, Italy, with a specific focus on network models as they apply to the operation and performance of P2P systems. At ZHAW, she leads several research and innovation projects on artificial intelligence and machine learning for credit risk management. She is the author of 17 research papers in the fields of credit risk modelling, graph theory, the predictive performance of scoring models, lead behaviour in crypto markets, and explainable AI models for credit risk management.


Fintech MK

First Fintech Community in North Macedonia which aims at developing and enabling the Fintech Ecosystem regionally.