AI ‘won’t live up to the hype’, warns thinktank

October 16, 2018 Read time: 2 mins
Governments must gain the trust of their citizens when it comes to increasing the use of artificial intelligence (AI), warns a new report.


The Centre for Public Impact (CPI) thinktank, which was founded by consultancy Boston Consulting Group, said that public trust in AI is low. While AI has the potential to make public transport responsive to traveller needs in real time, for example, its influence is viewed negatively by some.

Launching an action plan for governments at the Tallinn Digital Summit in Estonia, CPI said that many governments are not adequately prepared, and are not taking the right steps to engage and inform citizens of where and how AI is being used.

Such information is vital to give AI “trust and legitimacy”, CPI believes. Programme director Danny Buerkli says: “When it comes to AI in government we either hear hype or horror; but never the reality.”

Its paper ‘How to make AI work in government and for people’ suggests that governments:

  • Understand the real needs of users - identify their actual problems and build systems around them (not around a pretend problem invented just to use AI)
  • Focus on specific and achievable tasks
  • Build AI literacy within the organisation and among the public
  • Keep maintaining and improving AI systems - and adapt them to changing circumstances
  • Design for and embrace extended scrutiny - be resolutely open with the public, your employees and other governments and organisations about what you are doing


Boston Consulting Group said that a survey of 14,000 internet users in 30 countries revealed that nearly a third (32%) of citizens are ‘strongly concerned’ that the moral and ethical issues of AI have not been resolved.
