Fascinating take below on the classic "trolley problem", except in this scenario the decision maker is a machine, programmed by someone. It raises challenging questions about ethics and governance in the use of data and AI (artificial intelligence).

Whilst the stakes are rarely as serious as a life-and-death situation in a driverless car, AI is becoming ever more embedded in the fabric of financial services organisations and taking control of significant decision-making: lending, credit cards, limits, investments, onboarding / KYC and financial crime. In these use cases, who is going to supervise the rules that define the algorithm? As a consumer, how do I trust that the algorithm has no inbuilt bias that will discriminate against me?

This is a challenge all banks will need to confront to maintain customer trust and meet regulatory requirements, and one that will require them to build new teams and skill sets over the coming years.

Find out more about how AI is being applied in FS in our latest report with the World Economic Forum.