Antony Jenkins, former CEO of Barclays and executive chair of fintech firm 10x Future Technologies; Rebecca Aston, then CISI head of professional standards; and Stuart Coleman, head of data, analytics and AI at 10x Future Technologies.
In an article for AMEinfo.com, a Middle East business news site, Hadi Khatib writes about an upcoming “new era” of artificial intelligence (AI) called ‘artificial general intelligence’ (AGI).
AGI will differ from current AI in that it will be able to “master new skills quickly, perhaps by watching a single demonstration or just by reading, with no training at all, and maybe entirely at its own initiative”. Today’s AI, by contrast, becomes good at one specific skill, such as facial recognition, only after learning from millions of examples.
It “will make humans look slow and inadequate in comparison”, writes Khatib, and “would make all the technological wonders of today’s AI look as quaint as Stone Age axe heads”.
He says that big tech has invested heavily in its “quest for AGI”, giving the examples of Microsoft and Alphabet Inc, each backing a separate research and development company focused on advanced AI. He also refers to a report from US-based research firm Mind Commerce that forecasts investment in AGI will reach US$50bn by 2023.
But AGI is still a way off, and, before we start considering its implications, there are important applications and ethical questions for the current ‘Stone Age’ technology that are the topic of a recent CISI podcast, outlined below.
The ethics of AI in financial services
Podcast for non-members | Podcast for members – earn 45 minutes CPD
The podcast, published in December 2019, features Rebecca Aston, then head of professional standards at the Institute, chatting to Antony Jenkins, executive chair of fintech firm 10x Future Technologies, and Stuart Coleman, head of data, analytics and AI at 10x Future Technologies.
Coleman says that researchers in the field and the wider AI community agree AGI isn’t a reality yet. “The image you see in films of a super-intelligent being is quite a long way away,” he says.
Applications of AI
“Areas where you make data-driven decisions, for example, credit risk assessment, the elimination of fraud, the targeting of products and services,” are the types of areas that will be impacted, Jenkins explains. He adds that automation will make the customer experience smoother.
Jenkins believes we are still “very much in the foothills” in terms of seeing improvement to customer experiences with AI, but he gives a specific use case of how it has worked in credit decisioning in the small business world.
Typically, a customer application goes through a scoring model and is either approved or declined. Declined applications then go through a manual review. The AI automates that second step of the process: the machine observes how humans make those review decisions and learns autonomously how to improve that decision-making.
Jenkins explains that the AI reached a good level of ‘human’ decision-making, and the credit application process became cheaper.
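The two-stage flow Jenkins describes can be sketched in a few lines of Python. This is purely illustrative: the feature names, thresholds and the hand-written stand-in for a trained review model are assumptions for the sake of the example, not the actual system built by 10x Future Technologies.

```python
# Hypothetical sketch of two-stage credit decisioning: a rules-based
# scoring model handles clear-cut cases, and a second model (standing in
# for one trained on past human reviews) handles declined applications.
# All feature names and thresholds are illustrative assumptions.

def score_model(application: dict) -> bool:
    """Stage 1: a simple rules-based score approves clear-cut cases."""
    score = 0
    score += 40 if application["years_trading"] >= 3 else 10
    score += 40 if application["annual_revenue"] >= 100_000 else 15
    score += 20 if not application["prior_default"] else 0
    return score >= 80  # only high-scoring applications pass stage 1

def learned_review(application: dict) -> bool:
    """Stage 2: stands in for a model that has learned from observing
    human manual reviews; here it is a hand-written illustrative rule."""
    return (not application["prior_default"]
            and application["annual_revenue"] >= 50_000)

def decide(application: dict) -> str:
    if score_model(application):
        return "approved"
    # Declined by the scoring model: automated review replaces the
    # manual second step Jenkins describes.
    return "approved" if learned_review(application) else "declined"

strong = {"years_trading": 5, "annual_revenue": 200_000, "prior_default": False}
borderline = {"years_trading": 1, "annual_revenue": 80_000, "prior_default": False}
weak = {"years_trading": 1, "annual_revenue": 20_000, "prior_default": True}

print(decide(strong))      # approved at stage 1
print(decide(borderline))  # approved at the automated stage-2 review
print(decide(weak))        # declined
```

The cost saving Jenkins mentions comes from the middle case: applications the scoring model would have sent to a human reviewer are instead resolved automatically.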
Trust in the system
The podcast addresses how machine learning has progressed over the past five years, and how awareness of its ethical implications has grown alongside it. Many of the issues raised centre on trust, transparency and ownership of data.
Jenkins says we must be clear what data belongs to whom, who it is linked to, what the rights to process that data are, and what the rights to monetise that data are. This is critical in the “new world” we live in, and is essential to working towards transparency, he explains. “It’s possible to paint a very dystopian and utopian set of outcomes around technology as powerful as this. It’s really for us to harness the technology in the ways I’ve described to make sure it’s beneficial,” says Jenkins.
Other ethical issues and considerations surrounding AI are connected to consent and the capacity to consent. Coleman explains that there have been techniques developed to increase transparency surrounding algorithms. He says that work is being done to break down algorithms so you can explain the lineage of data and how the decision was made, and what consent was given.
Jenkins also addresses the responsibility and accountability of AI when things go wrong – who holds the power? “For me it’s clear. It sits with the organisation who has commissioned the technology … Once you allow an inanimate entity to take responsibility for humans’ actions, you’re going to the wrong place,” he says.
Have you seen the benefits of machine learning as a customer or a professional? Leave your comments below.
Seen a blog, news story or discussion online that you think might interest CISI members? Email bethan.rees@wardour.co.uk.