Repository for all projects and programming assignments of Sequence Models, Course 5 of 5 of the Deep Learning Specialization offered on Coursera and taught by Andrew Ng, covering topics such as recurrent neural networks (RNNs), gated recurrent units (GRUs), long short-term memory (LSTM), natural language processing, word embeddings, and attention models.
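As a rough illustration of the recurrence these course topics build on, here is a minimal vanilla RNN forward step in NumPy. All shapes, weights, and names are toy values invented for this sketch, not taken from any specific assignment.

```python
import numpy as np

# Vanilla RNN recurrence: h_t = tanh(Wx @ x_t + Wh @ h_{t-1} + b).
rng = np.random.default_rng(0)
n_x, n_h = 3, 5                          # toy input and hidden sizes
Wx = rng.standard_normal((n_h, n_x)) * 0.1
Wh = rng.standard_normal((n_h, n_h)) * 0.1
b = np.zeros(n_h)

def rnn_step(x_t, h_prev):
    """One recurrence step of a vanilla RNN."""
    return np.tanh(Wx @ x_t + Wh @ h_prev + b)

h = np.zeros(n_h)
for x_t in rng.standard_normal((4, n_x)):  # a toy sequence of length 4
    h = rnn_step(x_t, h)                   # hidden state carries context
print(h.shape)  # (5,)
```

GRUs and LSTMs extend this same step with gating to control what the hidden state keeps or forgets across long sequences.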
Sample project using IBM's AI Fairness 360, an open-source toolkit for detecting, examining, and mitigating discrimination and bias in machine learning (ML) models throughout the AI application lifecycle.
This project contains the code and data used in the research paper "Evaluating the Effectiveness of the Double-Hard Debias Technique Against Racial Bias in Word Embeddings".
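For context, double-hard debias builds on the "hard debias" projection step: removing the component of each word vector that lies along an identified bias direction. The sketch below uses random toy vectors; in practice the bias direction is estimated from definitional word pairs (e.g. name pairs for racial bias), which this illustration does not attempt.

```python
import numpy as np

rng = np.random.default_rng(1)
bias_dir = rng.standard_normal(50)
bias_dir /= np.linalg.norm(bias_dir)      # unit-norm bias direction

def debias(v, b):
    """Project out the (unit-norm) bias direction b from vector v."""
    return v - np.dot(v, b) * b

w = rng.standard_normal(50)               # toy word vector
w_db = debias(w, bias_dir)
print(abs(np.dot(w_db, bias_dir)))        # ~0: no residual bias component
```

After this projection the debiased vector is orthogonal to the bias direction, which is the property the cited paper evaluates.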
This repo contains two Jupyter notebooks that demonstrate how to identify and correct algorithmic bias in machine learning models using Python and Julia.
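A minimal example of the kind of bias check such notebooks compute is the demographic parity difference: the gap in positive-prediction rates between groups. The predictions and group labels below are made-up illustration data, not from the repo's notebooks.

```python
# Toy demographic parity check.
preds = [1, 0, 1, 1, 0, 1, 0, 0]                    # model predictions
groups = ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b']   # protected attribute

def positive_rate(group):
    """Fraction of positive predictions within one group."""
    in_group = [p for p, g in zip(preds, groups) if g == group]
    return sum(in_group) / len(in_group)

# Demographic parity difference: 0 means equal positive rates.
dpd = positive_rate('a') - positive_rate('b')
print(dpd)  # 0.75 - 0.25 = 0.5
```

A value far from zero, as here, flags a disparity that mitigation techniques (reweighting, threshold adjustment, debiasing) then try to reduce.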
Code for paper: "Power of Explanations: Towards automatic debiasing in hate speech detection", DSAA 2022 (https://ieeexplore.ieee.org/document/10032325/). Repository maintained by Yi Cai.
Repository for research into the methods used to debias ML models, specifically looking into the role that measurements, metrics, and benchmarks can play in reducing a model's bias.