Monday 13 November 2023
Belinda Schwehr’s report of the meeting
Click here for the slides presented by one of the speakers, Chair of London Futurists David Wood.
Farnham Maltings (Barley Room) – Bridge Square, Farnham GU9 7QR (satnav for the car park: GU9 7QN)
FREE PUBLIC talk and discussion
David Wood, Chair London Futurists
Daniel Dancey, IT Engineer and AI speaker
- 7.30pm: Welcome. Farnham Maltings refreshments available.
- 7.45pm: Main Speakers, after an introduction by Belinda Schwehr, Chair of Farnham Humanists
- 8.30pm: Contributions from the Floor – passionate amateurs’ insights are most welcome!
- 9pm: Chaired Questions to the Speakers
- 9.30pm to 10pm: Close and Thanks
Farnham Humanists have secured the insights of some AI experts to steer us all, as members of the public – no doubt interested but, as yet, uninitiated – towards thinking hard about how to think about AI and AGI, and hopefully to galvanise us out of the equally unattractive extremes of scaremongering, paralysis and complacency.
We’re not the only ones with concerns: having set aside a £100m fund for safe development of AI models in the UK, Prime Minister Rishi Sunak and other world leaders will discuss the possibilities and risks posed by AI at an event in November, covering a dozen challenges, including bias, privacy, misrepresentation, transparency, copyright and employment.
Do come and join in. All are welcome!
Collection for a tech-related international charity at the end.
More about our Speakers for the event:
David Wood, Chair of London Futurists, a non-profit forum with over 9,000 members.
David has a Maths MA from Cambridge and an honorary DSc from the University of Westminster. He is a full-time speaker and analyst, after a career in the smartphone industry. His books include “The Singularity Principles”, “The Abolition of Aging”, “Sustainable Superabundance”, and “Vital Foresight”.
Daniel Dancey, Dorset Humanists
Daniel currently works as a software engineer, with previous experience in cyber security. He is the Treasurer of Dorset Humanists.
He has previously spoken on Artificial Intelligence to Dorset Humanists; in this talk he hopes to offer an optimistic vision of our future with AI, without disregarding the real risks associated with it.
For anyone who’d like to make a longer contribution from the Floor –
- Please plan to speak for 5-7 minutes at most, focusing on your own experience of AI to date rather than on what has been written about, or by, AI.
- Please liaise in advance via Belindaschwehr@btinternet.com to avoid duplication if at all possible.
If you would like questions asked on the night through the Chair, written questions can be emailed to Belinda at the same address (Belindaschwehr@btinternet.com). Otherwise, questions and comments will be taken from the floor.
Some thoughts about AI in advance:
Article: A day in the life of AI
Discussions about AI often focus on the futuristic threat posed by superhuman intelligence. But AI is already woven into the fabric of our daily lives. The way we travel, the food we eat, how we spend our money, the news we read and our social interactions – the influence of AI is everywhere …
“Despite its name, there is nothing ‘artificial’ about this technology — it is made by humans, intended to behave like humans and affects humans. So if we want it to play a positive role in tomorrow’s world, it must be guided by human concerns.”
Dr Fei-Fei Li, Co-Director of Stanford University’s Human-Centered AI Institute
“I am very much of the belief that there’s far too much scaremongering about AGI and the unknowns, and not enough attention about the negative effects of AI on the ground now (e.g., sustainability, hidden labour, bias). I think that AI can be a tool for good if (and it’s conditional on this) it is used alongside human knowledge and human experience….”
Dr Kate Devlin, Reader in Artificial Intelligence & Society in the Department of Digital Humanities, King’s College London.
“The question is what kind of moral, intellectual and political value system the economic power behind today’s AI will be used to sustain: one where thinking matters? Or one where it doesn’t?”
Dr Shannon Vallor, the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute (EFI) at the University of Edinburgh
“I don’t think there is any concrete scientific argument that would suggest that machines can’t be conscious,” he said – though any machine consciousness would probably differ from human consciousness – adding: “If we’re not giving control of something lethal to AI, then it’s much harder to see how [it] could truly represent an existential risk.”
Prof Michael Wooldridge, who will deliver this year’s Royal Institution Christmas Lectures in late December, and who is sanguine about the risks.
Preparation and Logistics for the event:
To get everyone started, here are two links to YouTube presentations by the speakers, which you can usefully watch in advance for insight into what AI can do for us right now – and what it can’t:
And to whet your appetites further, a written overview from Anthony Lewis, Windsor Humanists: https://www.humanisticallyspeaking.org/post/don-t-panic-ai-will-save-us-all