I work on developing trustworthy NLP systems aligned with human values. My recent research focuses on fairness, particularly on characterizing and measuring biases in ways that are grounded in real-world harms and inclusive of diverse, fluid, and intersectional identities. Building on these insights, I am developing self-correcting methodologies that enable language models to identify ethical issues in their own output and revise it.
Papers •
MISGENDERED: Limits of Large Language Models in Understanding Pronouns
ACL 2023.
Tamanna Hossain, Sunipa Dev, Sameer Singh.
PDF • Video • BibTeX
Evaluating the generalisability of neural rumour verification models
Information Processing & Management 2023.
Elena Kochkina, Tamanna Hossain, Robert L. Logan IV, Miguel Arana-Catania, Rob Procter, Arkaitz Zubiaga, Sameer Singh, Yulan He, Maria Liakata.
PDF • BibTeX
COVIDLies: Detecting COVID-19 Misinformation on Social Media
NLP COVID-19 Workshop at EMNLP 2020. Best Paper Award
Tamanna Hossain, Robert L. Logan IV, Arjuna Ugarte, Yoshitomo Matsubara, Sean Young, Sameer Singh.
PDF • Video • BibTeX
Blog Posts •
- 2023/08/05 Paper Summary: Whose Opinions Do Language Models Reflect?
- 2022/01/08 Paper Summary: Harms of Gender Exclusivity and Challenges in Non-Binary Representation in Language Technologies
- 2021/11/28 Weird Stuff in High Dimensions
- 2021/11/09 Box Embeddings: Paper Overviews
- 2021/06/28 Orthogonal Procrustes
- 2019/12/25 Five Year Anniversary Trip to Maui
- 2018/11/01 A Brief Introduction to fMRI Analysis
- 2018/05/18 Viz : The Rohingya Exodus
- 2018/05/16 Viz : Drawing [e]
- 2018/02/22 Viz : US Same Sex Marriage Laws
- 2018/02/11 Viz : The Words of Larry Nassar Survivors
- 2017/12/22 Viz : Encoded Photo - An Anniversary Card
- 2017/12/03 Viz : Text Mining Stranger Things (Season 1 vs. Season 2)
- 2017/11/19 Viz : #MeToo Twitter