Even so, people who played a game with the robot -- a commercially available humanoid robot known as Pepper -- performed worse when the robot discouraged them and better when the robot encouraged them.
"One participant said, 'I don't like what the robot is saying, but that's the way it was programmed so I can't blame it,'" said lead study author Aaron M. Roth.
Some of the 40 study participants were technically sophisticated and fully understood that a machine was the source of their discomfort.
The researchers found that, overall, performance suffered in the face of the robot's criticism, regardless of a participant's technical sophistication.
The study is a departure from typical human-robot interaction studies, which tend to focus on how humans and robots can best work together.
"This is one of the first studies of human-robot interaction in an environment where they are not cooperating," said co-author Fei Fang, assistant professor in the Institute for Software Research.
"We can expect home assistants to be cooperative," she said, "but in situations such as online shopping, they may not have the same goals as we do."
For the study, the researchers used a game called "Guards and Treasures," a standard testbed for studying defender-attacker interaction in research on security games.
Each participant played the game 35 times with the robot, while either soaking in encouraging words from the robot or getting their ears singed with dismissive remarks.
Although the human players' rationality improved as the number of games played increased, those who were criticized by the robot didn't score as well as those who were praised.
"It's well established that an individual's performance is affected by what other people say, but the study shows that humans also respond to what machines say," said Afsaneh Doryab, a systems scientist at CMU's Human-Computer Interaction Institute (HCII).