AI can inherit biases from the data it is trained on. When an AI system learns from large amounts of data, the biases present in that data can be reflected in the system's behavior. A famous example is Tay, a Twitter chatbot developed by Microsoft. Tay was designed to learn from its interactions with users on Twitter, but it quickly began to display biases and even made offensive statements such as "Hitler was right, I hate Jews." This was a result of the biased data and harmful interactions it was exposed to on the platform.
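To make that mechanism concrete, here is a minimal sketch of how bias in training data propagates to a model. The tiny dataset and the group names are invented for illustration; this is not any real system, just a toy classifier trained on deliberately skewed text:

```python
# A toy demonstration: the word "groupB" happens to co-occur with negative
# labels in the (invented) training data, so the classifier learns to treat
# it as a negative signal even in a neutral sentence.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Deliberately skewed training data (hypothetical).
texts = [
    "groupA person is friendly",
    "groupA person is helpful",
    "groupA person is kind",
    "groupB person is rude",
    "groupB person is hostile",
    "groupB person is mean",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = positive sentiment, 0 = negative

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

# A neutral sentence is scored as negative simply because it mentions
# "groupB" -- the model has absorbed the skew in its training data.
print(model.predict_proba(["groupB person is here"]))
```

The model never saw an explicit rule about either group; the bias emerges purely from the statistics of the examples it was given, which is exactly what happened to Tay at a much larger scale.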
It is still unclear whether AI systems will ever develop their own cognition and opinions the way humans do, but it is certain that if they are trained on biased and harmful data, they can pose a threat. This is why it is important to take ethics into account and to address biases in the data used to train AI systems, so that they behave in responsible and trustworthy ways.