
Making sure your bot colleague is less biased than you!

Author(s): Frank De Jonghe

Date published: Feb 2020

SUERF Policy Note, Issue No 133
by Frank De Jonghe, Ghent University and EY


JEL-codes: C18, C44, C52.
Keywords: Machine learning, artificial intelligence, reputation, model transparency, risk management.

The ethical implications of the rapidly expanding applications of big data, machine learning and genuine AI have quickly captured the attention of industry practitioners, public observers such as journalists and politicians, and certainly conference organisers over the past year. It is a vast topic in its own right, deserving all the multidisciplinary attention it receives. In this short commentary, I limit myself to reviewing some salient features of bias in the context of models used in the financial industry and, more importantly for the practitioner, to suggesting some process and governance measures that boards and senior management of financial services companies can take to identify, monitor and mitigate this risk exposure.

