What is your background?
I studied psychology and completed a PhD in social psychology at The University of Queensland in Australia. Since 2018 I’ve been working at BehaviourWorks Australia at Monash University, in Melbourne, Australia. This role is focused on applied behaviour science research and consulting with government departments and large organisations (corporate and non-profit). In 2019, I co-founded Ready Research with Peter Slattery and Michael Noetel. Ready is a volunteer group that conducts behaviour science research on EA-aligned topics.
What is your research area?
My focus at BehaviourWorks Australia with Monash University is on increasing the reach and impact of behaviour science evidence on policy and practice. This includes running a COVID-19 survey of behaviours and behavioural drivers for 18 months with the Victorian state government, creating a free evidence-based toolkit for scaling up behaviour change interventions, and identifying and prioritising behaviours relevant to net zero.
What are you planning to focus on in the future?
Outside of Monash, I’ve been trying to improve my knowledge and networks around the governance of advanced artificial intelligence (i.e., how we develop, deploy, and use AI). Here I’m especially interested in what we can learn from existing work on socio-technical transitions and policymaking under uncertainty, and in how behaviour science can help make concrete progress on improving transitions to a world with advanced AI systems.
Do you want help or collaborators, if so who?
If you’re interested in working on AI safety from a non-technical or governance angle, by applying behaviour science or other social science methods, I’d be keen to connect.
If you’re looking to conduct surveys or scale up behaviour change interventions, I’d be happy to help.
Do you want to share some of your work?
Here’s a poster I presented at an “AI showcase” at Monash University. Feel free to remix or adapt it if useful. It briefly makes the case that (1) advanced AI will bring significant opportunities and risks; (2) we should try to improve decisions and behaviours about AI; and (3) behaviour science offers methods to do so.
Here’s a draft of an EA Forum post I wrote setting out why I think behaviour science can help AI governance. It also includes a worked example. I’d be happy to receive any feedback or talk about it some more.
Here is a preprint manuscript (under review) describing factors that influence the successful scale-up of behaviour change interventions. You can also use the toolkit we created.
You can contact Alexander at email@example.com
Want to be profiled? Submit a profile here