Robot says “NO” to human


Well, conventional human-robot interaction is limited to “master-slave” commanding (i.e., goal specification) and monitoring (e.g., of status information). More precisely, the interaction model is essentially one-way: the human “speaks” and the robot “listens” (perhaps asking for clarification). As a result, system performance is strictly bound to the operator’s skill and the quality of the user interface. To improve system capability, increase flexibility, and create synergy, human-robot communication needs to be richer and occur in both directions.

Human-Robot Interaction (HRI) research aims to design an interaction model in which humans and robots communicate as peers. Specifically, it involves building a dialogue system that allows a robot to ask the human questions when necessary (urgent) and appropriate (the human is at a work breakpoint), so that the robot can obtain human assistance with cognition and perception tasks. Two key benefits of this system are that it:
(1) allows humans and robots to communicate and coordinate their actions, and
(2) provides interaction support so that humans and robots can quickly respond and help the other (human or robot) resolve issues as they arise.

Watch this video to see how a robot cutely rejects its master’s command:

Robots, just like humans, have to learn when to say “no.” If a request is impossible, would cause harm, or would distract them from the task at hand, then it’s in the best interest of both the ‘bot and its human for a diplomatic “no, thanks” to be part of the conversation.
Simple stuff, but certainly essential for acting as a check on human error.
Researchers Gordon Briggs and Matthias Scheutz of Tufts University developed an algorithm that allows a robot to evaluate what a human has asked it to do, decide whether or not it should do it, and respond appropriately. The research was presented at a meeting of the Association for the Advancement of Artificial Intelligence.
For robots wielding potentially dangerous tools on a car production line, it’s pretty clear that the robot should always precisely follow its programming. But the goal now is to build clever robots and give them the power to decide what to do all by themselves. This leads to a tricky issue: how exactly do you program a robot to think through its orders and overrule them if it decides they’re wrong or dangerous to either a human or itself?

This is what researchers at Tufts University’s Human-Robot Interaction Lab are tackling, and they’ve come up with at least one strategy for intelligently rejecting human orders.
The robot asks itself a series of questions related to whether the task is doable: Do I know how to do it? Am I physically able to do it now? Am I normally physically able to do it? Am I able to do it right now? Am I obligated based on my social role to do it? Does it violate any normative principle to do it?

The strategy works similarly to the process human brains carry out when we’re given spoken orders. It’s all about a list of trust and ethics questions that we think through when asked to do something. The questions start with “do I know how to do that?” and move through others like “do I have to do that based on my job?” before ending with “does it violate any normative principle if I do that?” This last question is the key, of course, since the norm is not to hurt people or damage things.
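To make the idea concrete, here is a rough, hypothetical sketch of how such a chain of checks might gate command acceptance. The class, method and field names below are invented for illustration; this is not the Tufts implementation.

```python
# A minimal sketch (not the Tufts code) of gating a spoken command behind
# the "felicity condition" questions quoted above.
from dataclasses import dataclass, field

@dataclass
class Command:
    action: str            # e.g. "walk_forward"
    speaker: str           # who issued the order
    params: dict = field(default_factory=dict)

class FelicityChecker:
    def __init__(self, known_actions, trusted_speakers):
        self.known_actions = known_actions          # actions the robot knows how to perform
        self.trusted_speakers = trusted_speakers    # speakers allowed to override norms

    def evaluate(self, cmd, world):
        """Return (accept, reason). `world` carries perception results,
        e.g. whether the requested motion would hit an obstacle."""
        if cmd.action not in self.known_actions:
            return False, "I do not know how to do that."
        if not world.get("physically_able", True):
            return False, "I am not able to do that right now."
        if world.get("violates_norm", False):
            # A normative violation (e.g. walking into a wall) can only be
            # waived by a trusted speaker who takes responsibility for it.
            if cmd.speaker not in self.trusted_speakers:
                return False, "Sorry, that would be unsafe, and I cannot accept it from you."
        return True, "OK."

# Usage: an untrusted speaker orders the robot toward an obstacle.
checker = FelicityChecker(known_actions={"walk_forward"}, trusted_speakers={"operator_1"})
accept, reason = checker.evaluate(
    Command(action="walk_forward", speaker="visitor"),
    world={"physically_able": True, "violates_norm": True},
)
print(accept, reason)   # False, followed by a polite refusal
```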

The Tufts team has simplified this sort of inner human monologue into a set of logical arguments that a robot’s software can understand, and the results seem reassuring. For example, the team’s experimental humanoid robot said “no” when instructed to walk forward through a wall it could easily smash, because the person telling it to try this potentially dangerous trick wasn’t trusted.

Engineers used artificial intelligence to teach robots to disobey commands. The robot analyses its environment to assess whether it can perform a task. If it finds the command too dangerous, it politely refuses to carry it out. The concept is designed to make human-robot interactions more realistic.

They have programmed a pair of diminutive humanoid robots called Shafer and Dempster to disobey instructions from humans if obeying would put their own safety at risk. So, in short, roboticists have started to teach their own creations to say no to human orders.
The result is a robot that appears to be not only sensible but, one day, perhaps even wise. The idea of machine ethics cannot be separated from artificial intelligence: even the driverless cars of the future will have to be engineered to make life-or-death choices on our behalf. That conversation will necessarily be more complex than just delivering marching orders.


Control the WORLD with gestures - Project SOLI

Project Soli

Modern technology is simply an advancement of older technology, and its impact on modern life is immeasurable. Modern technology increases human capabilities. With the advancement of technology, the whole world is becoming a gadget that we interact with, with software everywhere, which raises the question: how can we interact with the entire world?

Google has answered this question and given us a technology so advanced, precise and small that it works even on the smallest of displays.
Forget touchscreens and buttons: Google’s Project Soli lets you control gadgets using hand gestures made in the AIR. The system identifies subtle finger movements using radar built into tiny microchips, tracking our finger movements to create virtual dials, touchpads, and more.

Your hand is the best tool you have for interacting with devices, but not everything is a device. Project Soli wants to make your hands and fingers the only user interface you’ll ever need. Project Soli is really a radar that is small enough to fit into a wearable like a smartwatch. The small radar picks up on your movements in real time and uses the movements you make to alter its signal. Moving the hand away from the radar or side-to-side in relation to it changes the signal and its amplitude. Making a fist or crossing fingers also changes the signal.

Rather than go hands-free, Project Soli makes your hands the UI, which may already be cooler than voice control ever was. Project Soli uses radar to enable new types of touchless interactions, ones where the human hand becomes a natural, intuitive interface for our devices. The Soli sensor can track sub-millimeter motions at high speed and accuracy. It fits onto a chip, can be produced at scale, and can be used inside even small wearable devices.

So, where did the idea come from?
The idea behind Soli is similar to Leap Motion and other gesture-based controllers: a sensor tracks the movements of your hands, which control the input into a device. But unlike other motion controllers, which depend on cameras, Soli is equipped with radar, which helps it track sub-millimeter motions at high speed and accuracy.

SO WHAT ACTUALLY IS PROJECT SOLI?
Project Soli was revealed during Google I/O 2015 on May 29, 2015. It is an input technology developed by Google ATAP that utilizes radar to detect the movements and gestures of your hands and fingers. Shaped as a chip the size of a quarter, Soli can be embedded in most wearables and electronic devices. The initial applications for the radar system are smartphones and smartwatches.

How Does It Work?
Soli is a 60GHz RF transmitter that uses broad-beam radar to measure everything from the spectrogram to the Doppler image to the raw IQ signal. Using this information, the tiny circuit board is able to determine hand size, motion and velocity. It then uses machine learning to translate these movements into pre-programmed commands.
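Soli’s real signal-processing pipeline is proprietary, but as a loose sketch of the “radar features plus machine learning” step described above, one could imagine summarizing each radar frame into a few scalar features and feeding them to an off-the-shelf classifier. Everything below (feature values, gesture labels) is invented for illustration.

```python
# Hypothetical sketch of the "radar features -> machine-learned gesture" step.
# Not Soli's real pipeline: the features and labels here are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Pretend each radar frame is summarized by a few scalars derived from the
# range-Doppler map and IQ signal (energy, mean Doppler shift, spread).
def fake_frame(gesture):
    base = {"micro_slide": [0.2, 1.5, 0.1],
            "dial_turn":   [0.8, -0.5, 0.6],
            "tap":         [1.5, 0.0, 0.3]}[gesture]
    return np.array(base) + rng.normal(scale=0.05, size=3)

gestures = ["micro_slide", "dial_turn", "tap"]
X = np.array([fake_frame(g) for g in gestures for _ in range(50)])
y = np.array([g for g in gestures for _ in range(50)])

# Train a simple classifier on the synthetic frames.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# A new frame arrives from the sensor; classify it into a gesture label.
print(clf.predict([fake_frame("dial_turn")])[0])   # -> "dial_turn"
```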


Features

  1. Translates hand gestures and finger movements into commands for your smart devices (see the sketch after this list).
  2. The sensor can track sub-millimeter motions at high speed and accuracy.
  3. Small gestures such as turning a knob or sliding a page work fine.
  4. Works through materials such as fabric.

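As a tiny, hypothetical follow-up to item 1, recognized gesture labels could be dispatched to device commands with a simple lookup table. The labels and handlers here are made up for illustration and are not part of Soli’s API.

```python
# Hypothetical mapping from recognized gesture labels to device commands.
def set_volume(delta):
    print(f"volume {'+' if delta > 0 else ''}{delta}")

def next_page():
    print("next page")

GESTURE_COMMANDS = {
    "dial_turn_cw":  lambda: set_volume(+1),   # rubbing thumb against finger, clockwise
    "dial_turn_ccw": lambda: set_volume(-1),
    "micro_slide":   next_page,                # sliding thumb along the index finger
}

def on_gesture(label):
    handler = GESTURE_COMMANDS.get(label)
    if handler:
        handler()

on_gesture("dial_turn_cw")   # -> volume +1
```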
In the end, it can be said that this type of technology can help us interact with the real world more conveniently and easily. We can now say that an era is coming in which humans and technology, in collaboration with each other, can do things that were earlier very difficult for humans to achieve alone.

Future of Artificial Intelligence - IBM Watson

IBM Watson is a “reasoning” computer system capable of answering questions posed in natural language, developed in IBM’s DeepQA project by a research team led by principal investigator David Ferrucci. Watson was named after IBM’s first CEO, the industrialist Thomas J. Watson.

In February 2013, IBM announced that the Watson software system’s first commercial application would be for utilization management decisions in lung cancer treatment at Memorial Sloan Kettering Cancer Center, in conjunction with health insurance company WellPoint.

So what exactly is WATSON?

IBM Watson is at the forefront of a new era of computing: cognitive computing. It is a radically new type of computing, very different from the old programmable systems. Conventional computing solutions, based on mathematical principles that emanate from the 1940s, are programmed with rules and logic intended to derive mathematically precise answers, often following a rigid decision-tree approach. But with today’s wealth of big data and the need for more complex evidence-based decisions, such a rigid approach often breaks down or fails to keep up with available information. Cognitive computing enables people to create a profoundly new kind of value by finding the answers and insights locked away in volumes of data. Whether we consider a doctor diagnosing a patient, a wealth manager advising a client on their retirement portfolio, or even a chef creating a new recipe, they all need a new approach to put into context the volume of information they deal with on a daily basis in order to derive value from it. This process serves to enhance human expertise.

Watson is a system that solves problems just as a human does. Just as humans become experts by going through a process of observation, evaluation and decision making, cognitive systems like Watson use a similar process to reason about the information they read. Watson can do this at massive speed and scale.


How does Watson do it?

Unlike conventional approaches to computing, which can only handle neatly organised, structured data such as what we store in a database, Watson can understand unstructured data, which makes up about 80% of the data today. Watson works with natural language, which is governed by rules of grammar, context and culture. When it comes to text, Watson doesn’t just look for keyword matches or synonyms like a search engine; it actually reads and interprets text like a person. It does this by breaking down the sentence grammatically, relationally and structurally, discerning meaning from the semantics of the written material.
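Watson’s own parsers are proprietary, but a sense of what “breaking a sentence down grammatically and relationally” looks like can be had with an open-source NLP library such as spaCy. This is only an illustration of the general idea, not Watson’s machinery, and it assumes the en_core_web_sm model has been downloaded.

```python
# Illustration only: uses spaCy to show a grammatical/relational breakdown
# of a sentence. Assumes: pip install spacy
#                         python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Treatments for lung cancer can have side effects in older patients.")

for token in doc:
    # Surface form, part of speech, grammatical relation, and the word it attaches to.
    print(f"{token.text:12} {token.pos_:6} {token.dep_:10} <- {token.head.text}")

# Named entities give a first pass at extracting meaning from the semantics.
print([(ent.text, ent.label_) for ent in doc.ents])
```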

Watson understands context. It tries to understand the real intent of the user’s language and uses that understanding to extract logical responses and draw inferences toward potential answers through a broad array of linguistic models and algorithms.

When Watson goes to work in a particular field, it learns the language, the jargon and the mode of thought of that domain. Take the term cancer, for instance. There are many different types of cancer, and each type has different symptoms and treatments. However, those symptoms can also be associated with diseases other than cancer. Treatments can have side effects and affect people differently depending on many factors. Watson evaluates standard-of-care practices and thousands of pages of literature that capture the best science in the field, and from all of that, Watson identifies the therapies that are the best choices for doctors in the treatment of a patient.

A must-watch video that shows how artificial intelligence will change our lives: IBM WATSON

With the guidance of human experts, Watson collects the knowledge required to have literacy in a particular domain, what’s called a corpus of knowledge. Collecting the knowledge starts with loading the relevant body of literature into Watson. Building the corpus also requires some human intervention to cull through the information and discard anything that is out of date, poorly recorded or immaterial to the problem domain. Then the data is pre-processed by Watson, which builds indices and other metadata that make working with that content more efficient. This is known as ingestion.
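Watson’s ingestion builds far richer indices and metadata than this, but a minimal sketch of pre-processing a corpus into an index that makes later lookups efficient might look like the following. The document names and text are invented.

```python
# Minimal sketch of the "ingestion" idea: pre-process a small corpus into an
# inverted index so later questions can quickly find candidate passages.
from collections import defaultdict
import re

corpus = {
    "doc1": "Lung cancer treatments include surgery and chemotherapy.",
    "doc2": "Chemotherapy side effects vary with patient age and history.",
}

inverted_index = defaultdict(set)
for doc_id, text in corpus.items():
    for term in re.findall(r"[a-z]+", text.lower()):
        inverted_index[term].add(doc_id)

def candidate_docs(question):
    """Documents sharing at least one term with the question."""
    terms = re.findall(r"[a-z]+", question.lower())
    return set().union(*(inverted_index.get(t, set()) for t in terms))

print(candidate_docs("What are the side effects of chemotherapy?"))  # -> {'doc1', 'doc2'}
```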

At this point Watson may also create a knowledge graph to assist in answering more precise questions. Now that Watson has ingested the corpus, it needs to be trained by human experts to learn how to interpret the information. To learn the best possible responses and acquire the ability to find patterns, Watson partners with experts who train it using an approach called machine learning. An expert uploads training data into Watson in the form of question-answer pairs that serve as ground truth. Watson is then ready to respond to highly complex situations and quickly provide a range of potential responses and recommendations that are backed by evidence.
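Watson’s learning machinery is, of course, far more sophisticated, but a minimal sketch of “training on question-answer pairs that serve as ground truth” could use a simple text classifier that routes new questions to known answer categories. The pairs below are invented examples, not real training data.

```python
# Rough sketch, not Watson's learning machinery: a small text classifier
# learns to map new questions to known answer categories from Q/A pairs.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_pairs = [
    ("What are common side effects of chemotherapy?",        "chemo_side_effects"),
    ("Does chemotherapy cause nausea?",                       "chemo_side_effects"),
    ("Which treatments exist for early-stage lung cancer?",   "lung_cancer_treatments"),
    ("Is surgery an option for stage I lung cancer?",         "lung_cancer_treatments"),
]
questions, answers = zip(*training_pairs)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(questions, answers)

# A new, unseen question is routed to the most likely answer category,
# with a confidence score an expert can inspect.
q = "What side effects should a patient on chemotherapy expect?"
print(model.predict([q])[0], model.predict_proba([q]).max())
```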

Today Watson is revolutionizing the way we make decisions, becoming an expert and sharing expertise in fields as diverse as law, medicine and even cooking. Furthermore, Watson is discovering and offering answers and patterns we hadn’t known existed, faster than any person or group of people ever could, in ways that make a material difference every day. Most important of all, Watson learns, adapts and keeps getting smarter, just as we do.