Katholieke Universiteit te Leuven (KU Leuven)

Early Stage Researcher (PhD Position) NoBIAS

 
Workplace: Leuven, Flemish Region, Belgium
Category: Position
Occupation rate: 50%

Description

Early Stage Researcher (PhD Position) NoBIAS

Do you want to help reduce one of AI’s biggest problems: the perpetuation and production of social bias? Would you like to work in a vibrant international environment of AI, machine learning, and other researchers, and earn a KU Leuven PhD in the process? The PhD project "Discovering biased representations of people" is part of the EU Marie-Curie project "NoBIAS: Artificial Intelligence without Bias".

1. The project context: NoBIAS

"NoBIAS: Artificial Intelligence without Bias" is a project funded by the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie Grant Agreement No. 860630.

NoBIAS aims to develop novel methods for AI-based decision making without bias by taking ethical and legal considerations into account in the design of technical solutions. The core objectives of NoBIAS are to understand the legal, social and technical challenges of bias in AI decision making, to counter them by developing fairness-aware algorithms, to automatically explain AI results, and to document the overall process for data provenance and transparency.

NoBIAS will train a cohort of 15 ESRs (Early-Stage Researchers) to address problems with bias through multi-disciplinary training and research in computer science, machine learning, artificial intelligence, law and social science. ESRs will acquire practical expertise in a variety of sectors, including healthcare, telecommunications, finance, marketing, media, software, and legal consultancy, broadly fostering legal compliance and innovation. Technical, interdisciplinary and soft skills will give ESRs a head start towards future leadership in industry, academia, or government.

2. Your tasks

The PhD project "Discovering biased representations of people" takes a computer-science and interdisciplinary approach to one of the key sources of bias in and through AI: how are people represented in AI models, what problems does this cause, and how can we solve or at least mitigate errors and resulting problems?

You will focus on representational adequacy, i.e., the extent to which data represent what is (legally and/or ethically) objectionable about bias. The project will develop a modelling language to integrate background knowledge, requirements engineering methods to elicit adequate representations, and composition methods to integrate different representations.

The Machine Learning group is part of the DTAI research group at the Department of Computer Science at KU Leuven. It follows an artificial intelligence approach to the analysis of data. It investigates a wide variety of machine learning, data mining and data analysis problems. This includes socially aware data mining, which recognizes the interaction between people and (especially data-analysis) technology in workplaces and other usage contexts, as well as the interrelations between this technology and other stakeholders who may be non-users. It integrates computational, cognitive, social, legal and economic considerations.

You have a Master's degree in Computer Science, Artificial Intelligence, or a similar discipline. You have strong data science skills. Skills in knowledge representation and other fields of AI are a plus. You care not only about AI, but also about the ethical dimension of informatics, and you are motivated to learn and take a critical and interdisciplinary approach that values the social sciences while leveraging your computer scientist’s understanding and skills. Prior experience with topics with an ethical dimension is a plus (privacy/data protection, fairness/non-discrimination, dealing with bias and misinformation, ...).

General eligibility criteria: To be eligible, the applicant must satisfy the mobility requirements of Marie Skłodowska-Curie actions. At the time of recruitment, the potential candidate

  • "must not have resided or carried out their main activity (work, studies, etc.) in the country of the recruiting beneficiary for more than 12 months in the 3 years immediately before the recruitment date. Compulsory national service, short stays such as holidays, and time spent as part of a procedure for obtaining refugee status under the Geneva Convention are not taken into account".
  • "be in the first four years (full-time equivalent research experience) of their research careers and have not been awarded a doctoral degree".

You work efficiently and reliably, independently as well as in teams. You have very good communication skills and scientific curiosity. Your English is very good. You are flexible and prepared to travel and to integrate into the local teams, while keeping a focus on your PhD and on delivering scientific and other project-related output.

We welcome all applicants, but specifically encourage people from traditionally under-represented groups to apply.

Applications that do not meet the eligibility criteria will not be considered.

KU Leuven seeks to foster an environment where all talents can flourish, regardless of gender, age, cultural background, nationality or impairments. If you have any questions relating to accessibility or support, please contact us at diversiteit.HR [at] kuleuven[.]be.

In your application, please refer to myScience.be and reference JobID 2945.
