Algorithms that decide for us: are we ready for AI in banking?
Kyiv • UNN
Opinion column by financial expert Olena Sosedka

We live in a time when artificial intelligence has learned to invest, analyze risks, fight fraud, and handle banking compliance faster than my coffee machine can brew a morning espresso. Just don't ask it what to wear to a board meeting: there it still limps a little (though I believe that by GPT-6 it will get there too).
We see eloquent headlines about a "revolution in finance," but what is really going on? Let's talk honestly and without embellishment about the advantages and the drawbacks of bringing artificial intelligence into the banking sector.
Giants like JPMorgan Chase & Co. and The Goldman Sachs Group, Inc. are no longer just flirting with AI; theirs is a full-fledged, serious romance. JPMorgan, for example, launched the COiN platform, which analyzes legal documents so efficiently that it saves over 360,000 hours of human work annually. That is not mere savings; it is a genuine corporate superpower. Lawyers can finally do what they love most: develop strategy instead of hunting for clause 14.46 on page 72 of a contract signed last century.
JPMorgan Chase & Co., a leading US financial corporation that combines commercial and investment banking, asset management, lending, and underwriting, also runs the LOXM system. It is a kind of trading rocket that executes financial orders with minimal delay, under 5 milliseconds. Most people can't even blink that fast.
The Goldman Sachs Group, Inc., one of the world's largest investment banks, specializing in investment banking, securities trading, asset management, and financial consulting, is not far behind. It has the GS AI Assistant, which helps employees with thousands of routine tasks, and the autonomous coder Devin, which writes software code on its own.
Yes, the future is already here. And I haven't even mentioned BlackRock's Aladdin platform, which sifts through gigabytes of data every second, analyzes risks, and makes decisions that affect entire markets.
And what about compliance, you ask? It used to be something like a closed room with a pile of papers and serious specialists who pored over every detail. Now it is a real high-tech show: AI systems find suspicious transactions on their own, trace connections between operations, screen people against sanctions lists, and warn about risks immediately. And all of this while we calmly scroll the news feed on our phones.
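To make that less abstract, here is a toy sketch of what such screening logic can look like in code. It is a minimal illustration, not any bank's real system: the sanctions entries, thresholds, and field names are all invented for the example.

```python
# Toy compliance screen: a sanctions check plus a crude anomaly flag.
# All names, thresholds, and data are invented for illustration only.

SANCTIONS_LIST = {"ACME OFFSHORE LTD", "JOHN DOE"}  # real systems query official registries

def screen_transaction(tx: dict, history_avg: float) -> list[str]:
    """Return human-readable alerts for a single transaction."""
    alerts = []
    if tx["counterparty"].upper() in SANCTIONS_LIST:
        alerts.append("counterparty matches a sanctions-list entry")
    if tx["amount"] > 10 * history_avg:
        alerts.append("amount is more than 10x this client's usual volume")
    if tx["country"] in {"XX", "YY"}:  # placeholder high-risk jurisdictions
        alerts.append("destination is a flagged jurisdiction")
    return alerts

tx = {"counterparty": "Acme Offshore Ltd", "amount": 250_000, "country": "XX"}
for alert in screen_transaction(tx, history_avg=4_000):
    print("ALERT:", alert)
```

Real systems layer machine-learned anomaly models on top of rules like these, but the principle stays the same: every alert should trace back to a concrete reason.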
But! (And here I raise an eyebrow.) All this technological praise sounds beautiful until we stumble on a problem with a somewhat ominous name: the "black box." What is it? A situation where artificial intelligence makes a decision, yet no one can explain, and no one understands, why exactly that decision was made. Did the algorithm deny you a loan? You want to know the reason, and in response: silence. Money transfer blocked? No explanation either.
This is no longer a trifle or a glitch; it is a serious question of trust in the system. After all, when a decision is made not by a person but by a machine, with no way to understand its logic, it is like receiving a court verdict in absentia: no hearing, no lawyers, no chance to examine the documents or the evidence against you.
That is why regulators in the US, the EU, and the UK are already drafting rules meant to make AI not just smart but understandable. This is the so-called "explainability" of algorithms: the system must be able to answer why it made a particular decision, and not in programmers' jargon but in language a lawyer can understand.
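What could that look like in practice? Below is a deliberately simple sketch: a transparent linear scoring model that returns, alongside its yes-or-no answer, the factors that pushed the decision and by how much. The features and weights are invented for the example; real explainability tooling (reason codes, SHAP-style attributions) is far more sophisticated, but the idea is the same.

```python
# A transparent credit-scoring toy: every decision ships with its reasons.
# Features, weights, and the threshold are invented for illustration only.

WEIGHTS = {"income_stability": 2.0, "debt_ratio": -3.0, "missed_payments": -1.5}
THRESHOLD = 1.0  # score needed for approval in this toy model

def score_with_reasons(applicant: dict) -> tuple[bool, list[str]]:
    contributions = {name: WEIGHTS[name] * value for name, value in applicant.items()}
    approved = sum(contributions.values()) >= THRESHOLD
    # List factors from most damaging to most helpful.
    reasons = [
        f"{name} {'helped' if c > 0 else 'hurt'} the application ({c:+.2f})"
        for name, c in sorted(contributions.items(), key=lambda kv: kv[1])
    ]
    return approved, reasons

approved, reasons = score_with_reasons(
    {"income_stability": 0.9, "debt_ratio": 0.6, "missed_payments": 1.0}
)
print("approved" if approved else "denied")
for r in reasons:
    print(" -", r)
```

The point is not the arithmetic but the contract: a denial arrives together with "your debt ratio hurt the application," which is exactly the kind of answer regulators want banks to be able to give.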
On top of that come ethical requirements (do not discriminate, do not distort) and transparency requirements: where the data comes from, who collected it, and who taught the algorithm to think this way. After all, artificial intelligence is not a wild beast we have locked in a server room, praying it behaves. It is a tool that can cause harm if used without instructions, understanding, and clear boundaries.
And one more "but": as Business Insider writes, four out of five bankers say they cannot effectively defend themselves against AI-armed hackers. That means artificial intelligence is already playing on both sides, light and dark, and this is an aspect of AI development that humanity must take into account.
Meanwhile, Ukrainian banks are holding up very well. Monobank, UKRSIBBANK, and Oschadbank have long been playing on the AI field: they automate scoring, analyze customer behavior, and optimize processes.
My prediction? A bank that doesn't start implementing AI may turn into a museum: beautiful, respectable, archaic, but with a limited circle of clients. After all, people are already used to speed, flexibility, and personalization.
In conclusion, it is all very simple: AI is like a new colleague on the team. Smart, a bit boring, but incredibly productive. It won't replace the team, but it will make everyone work in a new way. And if you're still not thrilled, you simply haven't been to a meeting where the report comes from a bot rather than an accountant.
Olena Sosedka, fintech expert and AI optimist at heart