Robot says “NO” to human


Conventional human-robot interaction is limited to “master-slave” commanding (i.e., goal specification) and monitoring (e.g., of status information). More precisely, the interaction model is essentially one-way: the human “speaks” and the robot “listens” (perhaps asking for clarification). As a result, system performance is strictly bound to the operator’s skill and the quality of the user interface. To improve system capability, increase flexibility, and create synergy, human-robot communication needs to be richer and occur in both directions.

Human-Robot Interaction (HRI) research aims to design an interaction model in which humans and robots communicate as peers. Specifically, this means building a dialogue system that allows a robot to ask the human questions when necessary (the issue is urgent) and appropriate (the human is at a work breakpoint), so that the robot can obtain human assistance with cognition and perception tasks (see the sketch after the list below). Two key benefits of such a system are that it:
(1) allows humans and robots to communicate and coordinate their actions and
(2) provides interaction support so that humans and robots can quickly respond and help each other resolve issues as they arise.
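
As a rough illustration of the “necessary and appropriate” condition above, the decision of when a robot should interrupt the human could be sketched as a simple policy like the one below. All names, fields, and thresholds here are hypothetical and are not taken from any particular HRI system.

```python
# Hypothetical sketch of an "ask the human for help" policy: the robot only
# interrupts when the issue is urgent enough AND the human is at a work
# breakpoint. Names and thresholds are illustrative, not from a real system.

from dataclasses import dataclass

@dataclass
class HelpRequest:
    description: str
    urgency: float        # 0.0 (can wait) .. 1.0 (blocking the whole task)

@dataclass
class HumanState:
    at_breakpoint: bool   # e.g. just finished a subtask, not mid-action

def should_ask_human(request: HelpRequest, human: HumanState,
                     urgency_threshold: float = 0.7) -> bool:
    """Ask only when the request is necessary (urgent) and appropriate
    (the human is at a breakpoint and can be interrupted)."""
    if request.urgency >= 0.95:          # emergencies always interrupt
        return True
    return request.urgency >= urgency_threshold and human.at_breakpoint

# Usage
req = HelpRequest("cannot classify the object in the gripper", urgency=0.8)
print(should_ask_human(req, HumanState(at_breakpoint=False)))  # False: wait
print(should_ask_human(req, HumanState(at_breakpoint=True)))   # True: ask now
```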

Watch this video to see how a robot cutely rejects its master’s command:

Robots, just like humans, have to learn when to say “no.” If a request is impossible, would cause harm, or would distract them from the task at hand, then it’s in the best interest of robot and human alike for a diplomatic “no, thanks” to be part of the conversation.
Simple stuff, but certainly essential for acting as a check on human error.
Researchers Gordon Briggs and Matthias Scheutz of Tufts University developed a complex algorithm that allows a robot to evaluate what a human has asked it to do, decide whether or not it should do it, and respond appropriately. The research was presented at a meeting of the Association for the Advancement of Artificial Intelligence.
For robots wielding tools that are potentially dangerous to humans on a car production line, it’s pretty clear that the robot should always precisely follow its programming. But we also want to build clever robots that can decide what to do for themselves, and that leads to a tricky issue: how exactly do you program a robot to think through its orders and overrule them if it decides they’re wrong or dangerous to either a human or itself?

This is what researchers at Tufts University’s Human-Robot Interaction Lab are tackling, and they’ve come up with at least one strategy for intelligently rejecting human orders.
The robot asks itself a series of questions about whether the task is doable: Do I know how to do it? Am I physically able to do it now? Am I normally physically able to do it? Am I able to do it right now? Am I obligated, based on my social role, to do it? Does it violate any normative principle to do it?
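
The published system runs these checks inside the robot’s dialogue and reasoning architecture; purely as an illustration of the idea, the checklist could be encoded as a sequence of guard clauses, roughly like this (all class, field, and message names are hypothetical):

```python
# Hypothetical sketch of the "should I do this?" checklist described above.
# The real system reasons over natural-language dialogue; here each question
# is reduced to a simple set lookup. All names are illustrative.

from dataclasses import dataclass, field

@dataclass
class Robot:
    skills: set = field(default_factory=set)        # actions it knows how to perform
    feasible_now: set = field(default_factory=set)  # actions possible in the current state
    superiors: set = field(default_factory=set)     # people it is obligated to obey
    unsafe: set = field(default_factory=set)        # actions that violate a normative principle

    def respond(self, speaker: str, action: str) -> str:
        if action not in self.skills:
            return "I don't know how to do that."              # Do I know how to do it?
        if action not in self.feasible_now:
            return "I can't do that right now."                # Am I able to do it right now?
        if speaker not in self.superiors:
            return "Sorry, I am not obligated to do that for you."  # Social role?
        if action in self.unsafe:
            return "Sorry, that would violate a safety principle."  # Normative check
        return f"OK, doing '{action}'."

# Usage
bot = Robot(skills={"walk forward", "sit down"},
            feasible_now={"walk forward", "sit down"},
            superiors={"operator"},
            unsafe=set())
print(bot.respond("operator", "walk forward"))   # OK, doing 'walk forward'.
print(bot.respond("stranger", "sit down"))       # Sorry, I am not obligated ...
```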

The strategy works similarly to the process human brains carry out when we’re given spoken orders. It’s all about a list of trust and ethics questions that we think through when asked to do something. The questions start with “do I know how to do that?” and move through others like “do I have to do that based on my job?” before ending with “does it violate any normative principle if I do that?” That last question is the key, of course, since not hurting people or damaging things is exactly the kind of norm at stake.

The Tufts team has simplified this sort of inner human monologue into a set of logical arguments that a robot’s software can understand, and the results seem reassuring. For example, the team’s experimental android said “no” when instructed to walk forward through a wall it could easily smash, because the person giving this potentially dangerous order wasn’t trusted.
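
The wall scenario could be modelled, again very loosely, as a rule that a risky command is refused unless the speaker is trusted and explicitly overrides the robot’s objection. Everything below (function name, parameters, messages) is made up for illustration and is not the team’s actual software:

```python
# Loose model of the "walk into a wall" refusal described above: a risky
# command is refused unless the speaker is trusted AND explicitly overrides
# the robot's objection. Names and structure are hypothetical.

def decide(speaker: str, action: str, risky_actions: set,
           trusted_speakers: set, override_given: bool) -> str:
    if action not in risky_actions:
        return f"OK, doing '{action}'."
    if speaker not in trusted_speakers:
        return "No: that looks dangerous, and I don't trust this instruction."
    if not override_given:
        return "That looks dangerous. Are you sure you want me to do it?"
    return f"Overriding my objection and doing '{action}'."

# Usage: an untrusted speaker cannot get the robot to walk into the wall,
# while a trusted one can, but only after explicitly overriding the warning.
risky = {"walk forward"}          # walking forward would hit the wall
print(decide("stranger", "walk forward", risky, {"operator"}, False))
print(decide("operator", "walk forward", risky, {"operator"}, False))
print(decide("operator", "walk forward", risky, {"operator"}, True))
```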

Engineers used artificial intelligence to teach robots to disobey commands. The robot analyses its environment to assess whether it can perform a task. If it finds the command too dangerous, it politely refuses to carry it out. The concept is designed to make human-robot interactions more realistic.

They have programmed a pair of diminutive humanoid robots, called Shafer and Dempster, to disobey instructions from humans if following them would put the robots’ own safety at risk. In short, roboticists have started to teach their own creations to say no to human orders.
The result is a robot that appears to be not only sensible but, one day, even wise. The idea of machine ethics cannot be separated from artificial intelligence; even the driverless cars of the future will have to be engineered to make life-or-death choices on our behalf. That conversation will necessarily be more complex than just delivering marching orders.