The new computational model successfully predicts human emotions. Systems like this might be prohibited in the EU if the proposed AI regulations are ratified. Systems that predict human emotions are among the riskiest in the world, and people sometimes ask what kinds of risks are hiding in systems that can predict or track what humans are feeling. AI that uncovers feelings can use the computer's web camera, so if somebody uses that kind of tool during a web meeting, it can reveal whether somebody at the table is stressed. That AI-based software can also try to detect whether somebody is lying, and this is one of the ways it breaks privacy. When we think about this kind of software, there should be a blinking warning that such an algorithm is in use. "MIT neuroscientists have created a computational model that successfully predicts human emotions in social scenarios, using the prisoner’s dilemma game as a base....
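
The prisoner's dilemma mentioned in the quote can be made concrete with a short sketch. The payoff values below are the standard textbook ones; they are illustrative and not necessarily the values used in the MIT model.

```python
# Standard prisoner's dilemma payoffs (higher is better for a player).
# Keys are (my_move, other_move); "C" = cooperate, "D" = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation rewards both
    ("C", "D"): (0, 5),  # the cooperator is exploited
    ("D", "C"): (5, 0),  # the defector exploits
    ("D", "D"): (1, 1),  # mutual defection hurts both
}

def play(a, b):
    """Return the payoff pair for players choosing moves a and b."""
    return PAYOFFS[(a, b)]

# Defecting is individually tempting, yet mutual cooperation gives a
# better joint outcome -- the tension that makes the game useful for
# modeling emotional reactions such as guilt or regret.
print(play("C", "C"))  # (3, 3)
print(play("D", "C"))  # (5, 0)
```

Because each outcome pairs a choice with a clear gain or loss, a model can tie the outcomes to predicted emotions, for example disappointment when a cooperator is exploited.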