Tim Baldwin
MBZUAI, Abu Dhabi

Keynote Title: (Un)fairness in Fairness Evaluation
Abstract: Natural language processing (NLP) has made truly impressive progress in recent years, and is being deployed in an ever-increasing range of user-facing settings. Accompanying this progress has been a growing realisation of inequities in the performance of naively trained NLP models for users of different demographics, with minority groups typically experiencing lower performance levels. In this talk, I will discuss the complexities of evaluating model "fairness", and how standard evaluation practice has led to unfair and misleading claims in the literature.
Bio: Tim Baldwin is Associate Provost (Academic and Student Affairs) and Head of the Department of Natural Language Processing at the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), in addition to being a Melbourne Laureate Professor in the School of Computing and Information Systems at The University of Melbourne.
Tim completed a BSc (CS/Maths) and BA (Linguistics/Japanese) at The University of Melbourne in 1995, and an MEng (CS) and PhD (CS) at the Tokyo Institute of Technology in 1998 and 2001, respectively. Prior to joining The University of Melbourne in 2004, he was a Senior Research Engineer at the Center for the Study of Language and Information, Stanford University (2001-2004). His research has been funded by organisations including the Australian Research Council, Google, Microsoft, Xerox, ByteDance, SEEK, NTT, and Fujitsu, and has been featured in MIT Technology Review, IEEE Spectrum, The Times, and ABC News. He is the author of over 450 peer-reviewed publications across diverse topics in natural language processing and AI, with around 20,000 citations and an h-index of 66 (Google Scholar), in addition to being an ARC Future Fellow and the recipient of a number of awards at top conferences.


Maria Liakata
Queen Mary University of London

Keynote Title: Personalised Longitudinal Natural Language Processing
Abstract: Most of the tasks and models with which we have made great progress in recent years, such as text classification and natural language inference, have no notion of time. However, many of these tasks are sensitive to change and temporality in real-world data, especially when the data pertains to individuals, their behaviour, and their evolution over time. I will present our programme of work on personalised longitudinal natural language processing, which develops natural language processing methods to: (1) represent individuals over time from their language and other heterogeneous, multi-modal content; (2) capture changes in individuals' behaviour over time; (3) generate and evaluate synthetic data from individuals' content over time; and (4) summarise the progress of an individual over time, incorporating information about changes. I will discuss progress and challenges thus far, as well as the implications of this programme of work for downstream tasks such as mental health monitoring.
Bio: Maria Liakata is Professor in Natural Language Processing (NLP) at the School of Electronic Engineering and Computer Science, Queen Mary University of London, and Honorary Professor at the Department of Computer Science, University of Warwick. She holds a UKRI/EPSRC Turing AI fellowship (2020-2025) on "Creating time sensitive sensors from user-generated language and heterogeneous content". The research in this fellowship involves developing new methods for NLP and multi-modal data to enable longitudinal, personalised language monitoring. She is also the PI of projects on language sensing for dementia monitoring and diagnosis, opinion summarisation, and rumour verification from social media. At the Alan Turing Institute she founded and co-leads the NLP and data science for mental health special interest groups. She has published over 150 papers on topics including sentiment analysis, semantics, summarisation, rumour verification, resources and evaluation, and biomedical NLP. She is an action editor for ACL Rolling Review and regularly holds senior roles in conference and workshop organisation.


Pascale Fung
Hong Kong University of Science & Technology

Keynote Title: Mitigating Risks while Forging Ahead with AI Progress
Abstract: The accelerated progress of AI technology has provoked fears and controversy around its safe use. Some insist that we should not stop making scientific progress simply because of media pushback. Others seek to prevent further development of what they consider dangerous AI. I will present the case study of hallucination in large foundation models and its possible mitigation. Hallucination, the undesirable generation of ungrounded content, is one of the most salient risks posed by these powerful recent developments. I propose that it is possible to continue making AI progress while mitigating its risks.
Bio: Pascale Fung is a Chair Professor at the Department of Electronic & Computer Engineering at The Hong Kong University of Science & Technology (HKUST), and a visiting professor at the Central Academy of Fine Arts in Beijing. She is an elected Fellow of the Association for the Advancement of Artificial Intelligence (AAAI) for her "significant contributions to the field of conversational AI and to the development of ethical AI principles and algorithms", an elected Fellow of the Association for Computational Linguistics (ACL) for her "significant contributions towards statistical NLP, comparable corpora, and building intelligent systems that can understand and empathize with humans", a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) for her "contributions to human-machine interactions", and an elected Fellow of the International Speech Communication Association (ISCA) for her "fundamental contributions to the interdisciplinary area of spoken language human-machine interactions". She is the Director of the HKUST Centre for AI Research (CAiRE), an interdisciplinary research centre spanning all four schools at HKUST, and co-founded the Human Language Technology Center (HLTC). She is affiliated faculty with the Robotics Institute and the Big Data Institute at HKUST, and the founding chair of the Women Faculty Association at HKUST. She is an expert on the Global Future Council, a think tank for the World Economic Forum, and represents HKUST on the Partnership on AI to Benefit People and Society. She serves on the Board of Governors of the IEEE Signal Processing Society, and is a member of the IEEE working group developing the standard Recommended Practice for Organizational Governance of Artificial Intelligence. Her research team has won several best and outstanding paper awards at ACL and at NeurIPS workshops.