Article Excerpt: When artificial intelligence models pore over hundreds of gigabytes of training data to learn the nuances of language, they also imbibe the biases woven into the texts.
Computer science researchers at Dartmouth are devising ways to home in on the parts of a model that encode these biases, paving the way to mitigating, if not removing, them altogether.
Full Article: http://tinyurl.com/mrky46s8
Article Source: Dartmouth News