On Thursday 11th September 2025 I attended the “AI in Government and Academia Summit 2025” hosted by Manchester Metropolitan University (MMU), the Department for Science, Innovation and Technology (DSIT), and the Government Digital Service (GDS).

The day was co-chaired by Dr Tommaso Spinello (DSIT) and Professor Keeley Crockett (MMU) and opened by Professor Darren Dancey. 

Opening the talks was Professor Tom Crick (Chief Scientific Advisor at the Department for Culture, Media and Sport; Professor of Digital Policy and Deputy Pro-Vice-Chancellor at Swansea University) with a keynote on “Shaping AI for the Public Good”. My key takeaways from this keynote were:

  • Depending on who you ask, Government can be seen as moving too fast with AI, or as moving too slowly.
  • There is a lot of focus on the different types of AI, but perhaps not enough on its ethics and societal impact (transparency, fairness, privacy).
  • Three key themes: “AI for the public good”, “Trust, transparency, participation”, and “Partnerships between government, academia and civil society”.

Professor Crick’s presentation left me wondering how ordinary citizens, especially those who do not understand AI or the decision-making process, would be able to question the decisions AI systems may make in the future.

Next up was Professor Antonio Cordella (London School of Economics) with a keynote on “AI in the Public Sector: The Values Conundrum”. This keynote looked at the hidden values in algorithms and data. My key takeaway was that every AI design embeds value choices disguised as technical ones, and that AI systems are not neutral tools. Professor Cordella gave examples of algorithmic choices (efficiency vs. equity decisions, speed vs. thoroughness trade-offs) and real-world examples (criminal justice decisions, social benefit distribution, healthcare allocation systems).

Also discussed was the data dilemma of historical data perpetuating past context: datasets reflect our past values and carry embedded biases. Professor Cordella then set out the four pillars of value-driven AI in public service:

  • Value-Explicit AI Design (Make algorithmic values transparent and debatable)
  • Historical Bias Auditing (Acknowledge and correct injustices in datasets)
  • Democratic Oversight (Align technical choices with public values, not organisational interests)
  • Continuous Ethical Monitoring (Ethics is ongoing, not a check-box exercise)

Dr Edward Steele (IT Fellow, Data Science, at the Met Office) followed, discussing the growth of AI in weather and climate science. I was quite impressed by how many different sectors and areas the Met Office provides weather data and forecasts to. Dr Steele showed that the Met Office delivers a return on investment of 19:1 and that the majority of Government services use the Met Office in some way. He discussed how AI is rewriting the rules on how we forecast the weather and model the climate, with:

  • Unprecedented volumes of data
  • Increasing availability of powerful computing capacity
  • Expert scientists equipped with data science and tools

Dr Steele then showed the rise of machine learning models between January 2019 and January 2023, when the “lost Christmas” of ML weather models such as Pangu-Weather, DeepMind, FuXi, FengWu, and NeuralGCM arrived. The Met Office has an “AI 4 Everyone” programme that encompasses AI projects such as “AI for Numerical Weather Prediction”, “AI for Climate”, “Twinning Capability for the Natural Environment (TWINE)”, and the use of M365 Copilot.

After a break for lunch and networking (something I really need to work on), it was time for the panel session.

The panel focused on “Forging the Future: Cross-Sector AI Collaboration for Public Good”. It was hosted by Madeline Hoskin (Assistant Director of Technology, North Yorkshire Council), and the panel members were: Phil Swan (Director of Digital, Greater Manchester Combined Authority), Sherelle Fairweather (Digital Strategy Lead, Manchester City Council), Dr Moira Nicolson (Lead Behavioural Scientist, Cabinet Office; Visiting Fellow, UCL), Professor Andy Miah (University of Salford), and Professor Julia Handl (University of Manchester).

From this panel I took away the scale of the challenge of AI for public good, the fear of a potential two-tier society (those with AI and those without), and the need for new systems to tackle systemic problems. I liked the acknowledgement that discussing risk can put people off, and that other ways of looking at it may be needed to keep those people involved. It was also interesting to hear that there may be distrust of public services using data, and that, again, bringing people on the journey whilst being transparent is one solution.

Discussion arose around ethics, and how using the word doesn’t always mean we know what those ethics are. This resonated with me, as my current apprenticeship module also covers ethics, and it is such a big and subjective subject. A duty of care towards, and the trust of, the people using the technology is needed, which tied back to the earlier keynotes. A few other key phrases for me were: “Society as creators not just consumers”, “research to ask the public what they want AI to do”, and “keep an informed human in the loop”.