New study finds large language models are prone to social identity biases, much as humans are, but LLMs can be trained to curb these outputs

submitted by /u/thebelsnickle1991