Extracting Unbiased Text from Large Text Corpora [MSc]

Context: 

Language models are everywhere. We use them for autocomplete, translation, search, and many other applications. These language models are trained on large text corpora, such as Wikipedia. Gonen et al. [4] showed that such language models also learn the biases within the text. For instance, a model may complete the analogy “man is to computer programmer as woman is to homemaker” [1,2,4]. The reason is that, in many cases, the training corpus does not represent the real/desired world. For instance, 87% of Wikipedia contributors are men and therefore consciously or unconsciously introduce bias [5].

Problem / Task:  

The task is to develop an algorithm that extracts unbiased text from a large (and potentially biased) text corpus, such as Wikipedia, so that a language model trained on the extracted text satisfies user-specified fairness constraints.

In particular, given N sentences, which ones should one choose to train a language model so that it satisfies a selected fairness constraint? The user specifies the fairness constraint by providing three lists: two lists of words that represent groups A and B, and one list of words that should be considered neither A nor B. For instance, for gender fairness, A = {woman, mother, actress}, B = {man, father, actor}, and Neutral = {doctor, president, nurse}.
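As an illustration, such a constraint can be represented by nothing more than three word lists; the following sketch (with hypothetical names) shows one possible encoding in Python:

    # Hypothetical encoding of a user-specified fairness constraint:
    # two protected groups plus the words that must remain neutral.
    fairness_constraint = {
        "group_a": ["woman", "mother", "actress"],
        "group_b": ["man", "father", "actor"],
        "neutral": ["doctor", "president", "nurse"],
    }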

No classifier should be able to infer gender from the word embeddings of the words on the neutral list. For instance, the embedding of the word “doctor” should be considered neither male nor female. Therefore, one has to design a search/sampling algorithm that identifies a subset of sentences that satisfies all specified constraints.
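One plausible way to operationalize this check, sketched below, is to fit a classifier on the embeddings of the A and B words and inspect its predictions on the neutral words; predictions near chance indicate neutrality. The sketch assumes a trained Gensim Word2Vec model and the word lists from above, and is only one possible probe, not a prescribed design:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def neutrality_probe(model, group_a, group_b, neutral):
        # Fit a linear classifier that separates group A from group B
        # in embedding space (gensim 4.x: vectors live in model.wv).
        X = np.array([model.wv[w] for w in group_a + group_b])
        y = np.array([0] * len(group_a) + [1] * len(group_b))
        clf = LogisticRegression(max_iter=1000).fit(X, y)
        # For each neutral word, a predicted probability near 0.5 means
        # the classifier cannot assign it to either group.
        probs = clf.predict_proba(np.array([model.wv[w] for w in neutral]))[:, 1]
        return float(np.abs(probs - 0.5).max())  # 0.0 = perfectly neutral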

The challenge is that each sentence on its own might not be biased. For example, “The doctor helped his patient.” is not biased in isolation, but if the majority of sentences describe male doctors, we might still want to remove it. Therefore, we aim to correct the distributions across sentences.
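A naive starting point, shown below purely as a sketch (the thesis is expected to go well beyond it), is to count for each neutral word how often it co-occurs with A words versus B words and to greedily drop sentences that reinforce the majority side:

    def balance_sentences(sentences, group_a, group_b, neutral):
        # Toy heuristic; sentences are lists of lowercase tokens.
        a, b, neu = set(group_a), set(group_b), set(neutral)
        counts = {w: [0, 0] for w in neu}  # w -> [co-occurrences with A, with B]
        for sent in sentences:
            toks = set(sent)
            for w in toks & neu:
                counts[w][0] += len(toks & a)
                counts[w][1] += len(toks & b)
        kept = []
        for sent in sentences:
            toks = set(sent)
            # Drop the sentence if, for any neutral word it mentions, it
            # co-occurs with the currently over-represented group.
            drop = any(
                (counts[w][0] > counts[w][1] and toks & a) or
                (counts[w][1] > counts[w][0] and toks & b)
                for w in toks & neu
            )
            if drop:
                for w in toks & neu:
                    counts[w][0] -= len(toks & a)
                    counts[w][1] -= len(toks & b)
            else:
                kept.append(sent)
        return kept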

Note that the task is not to design a new language model but rather to use an off-the-shelf model, such as Word2Vec provided in the Gensim library.
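For reference, training such an off-the-shelf model on a candidate subset takes only a few lines with Gensim (4.x API), where kept is assumed to be the list of tokenized sentences selected by the algorithm:

    from gensim.models import Word2Vec

    # Train Word2Vec on the selected sentences (each a list of tokens).
    model = Word2Vec(sentences=kept, vector_size=100, window=5,
                     min_count=1, workers=4)
    print(model.wv["doctor"])  # this embedding should pass the neutrality probe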

Prerequisites:

  • programming experience in Python (+ sklearn)
  • interest in data integration
  • experience in machine learning & database technologies

Related Work:

[1] Orestis Papakyriakopoulos, Simon Hegelich, Juan Carlos Medina Serrano, Fabienne Marco: Bias in word embeddings. FAT* 2020: 446-457.

[2] Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, Kai-Wei Chang: Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods. NAACL-HLT (2) 2018: 15-20.

[3] Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Úlfar Erlingsson, Alina Oprea: Extracting Training Data from Large Language Models. arXiv preprint arXiv:2012.07805, 2020.

[4] Hila Gonen, Yoav Goldberg: Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. arXiv preprint arXiv:1903.03862, 2019.

[5] Nicole Torres: Why do so few women edit Wikipedia? Harvard Business Review, 2016.

For a detailed introduction to the topic, please contact Felix Neutatz via email.

Advisor and Contact:

Felix Neutatz <f.neutatz@tu-berlin.de> (TU Berlin) 

Prof. Dr. Ziawasch Abedjan <abedjan@dbs.uni-hannover.de> (LUH)