AI ‘won’t live up to the hype’, warns thinktank

October 16, 2018 Read time: 2 mins
Governments must gain the trust of their citizens when it comes to increasing the use of artificial intelligence (AI), warns a new report.


The Centre for Public Impact (CPI) thinktank, which was founded by consultancy Boston Consulting Group, said that public trust in AI is low. While AI has the potential in mobility to make public transport responsive to traveller needs in real time, for example, the influence of AI is viewed negatively by some.

Launching an action plan for governments at the Tallinn Digital Summit in Estonia, CPI said that many governments are not adequately prepared, and are not taking the right steps to engage and inform citizens of where and how AI is being used.

Such information is vital to give AI “trust and legitimacy”, CPI believes. Programme director Danny Buerkli says: “When it comes to AI in government we either hear hype or horror, but never the reality.”

Its paper ‘How to make AI work in government and for people’ suggests that governments:

  • Understand the real needs of your users - identify their actual problems and build systems around them (not around some pretend problem just to use AI)
  • Focus on specific and doable tasks
  • Build AI literacy in the organisation and the public
  • Keep maintaining and improving AI systems - and adapt them to changing circumstances
  • Design for and embrace extended scrutiny - be resolutely open towards the public, your employees and other governments and organisations about what you are doing


Boston Consulting Group said that a survey of 14,000 internet users in 30 countries revealed that nearly a third (32%) of citizens are ‘strongly concerned’ that the moral and ethical issues of AI have not been resolved.
