While there continues to be confusion about the terms artificial intelligence (AI) and robotics, they are two separate fields of technology and engineering. When combined, however, you get an artificially intelligent robot, where AI acts as the brain and robotics provides the body, enabling robots to walk, see, speak, smell and more.
Let’s look at the separate fields of artificial intelligence and robotics to illustrate their differences.
What is artificial intelligence (AI)?
Artificial intelligence is a branch of computer science that creates machines capable of problem-solving and learning in ways similar to humans. Using some of the most innovative AI techniques, such as machine learning and reinforcement learning, algorithms can learn and modify their actions based on input from their environment without human intervention. Artificial intelligence technology is deployed at some level in almost every industry, from the financial world to manufacturing, healthcare to consumer goods and more. Google’s search algorithm and Facebook’s recommendation engine are examples of artificial intelligence that many of us use every day. For more practical examples and more in-depth explanations, check out my website section dedicated to AI.
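To make “learning from the environment” concrete, here is a minimal reinforcement-learning sketch in Python. The reward probabilities and learning rate are invented for illustration; no real library or robot is assumed:

```python
import random

# Minimal sketch of reinforcement learning: an agent tries actions,
# observes rewards from its environment, and shifts toward whatever
# works, with no human labelling the "right" answer. The reward
# probabilities below are invented for illustration.

reward_prob = {"A": 0.2, "B": 0.8}   # hidden from the agent
value = {"A": 0.0, "B": 0.0}         # the agent's learned estimates

for step in range(1000):
    # Mostly exploit the best-known action, occasionally explore.
    if random.random() < 0.1:
        action = random.choice(list(value))
    else:
        action = max(value, key=value.get)
    reward = 1.0 if random.random() < reward_prob[action] else 0.0
    value[action] += 0.05 * (reward - value[action])  # update estimate

print(value)  # value["B"] ends up clearly higher: learned purely from feedback
```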
What is robotics?
The branch of engineering/technology focused on constructing and operating robots is called robotics. Robots are programmable machines that can carry out tasks autonomously or semi-autonomously. Robots use sensors to interact with the physical world and are capable of movement, but must be programmed to perform a task. Again, for more on robotics, check out my website section on robotics.
Where do robotics and AI mingle?
One of the reasons the line is blurry and people are confused about the differences between robotics and artificial intelligence is that there are artificially intelligent robots—robots controlled by artificial intelligence. In combination, AI is the brain and robotics is the body. Let’s use an example to illustrate. A simple robot can be programmed to pick up an object, place it in another location and repeat this task until it’s told to stop. With the addition of a camera and an AI algorithm, the robot can “see” an object, detect what it is and determine from that where it should be placed. This is an example of an artificially intelligent robot.
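Here is a hedged sketch of that contrast in Python. Every name in it (the motion stubs, the classifier, the bin coordinates) is a hypothetical placeholder, not a real robot API:

```python
# Sketch contrasting an ordinary programmed robot with an AI-enhanced
# one. Everything here (motion stubs, classifier, bin layout) is a
# hypothetical placeholder invented for illustration.

BINS = {"screw": (0.2, 0.5), "bolt": (0.4, 0.5), "unknown": (0.6, 0.5)}

def move_to(xy): print(f"moving to {xy}")   # stand-in for motor control
def grip():      print("gripping")
def release():   print("releasing")

def plain_pick_and_place(pickup_xy, dropoff_xy, cycles=2):
    """A conventional robot: the same hard-coded motion, repeated."""
    for _ in range(cycles):
        move_to(pickup_xy); grip()
        move_to(dropoff_xy); release()

def ai_pick_and_place(pickup_xy, see, cycles=2):
    """With a camera and classifier ('see'), the robot chooses the bin."""
    for _ in range(cycles):
        move_to(pickup_xy)
        label = see()                        # e.g. "screw" or "bolt"
        grip()
        move_to(BINS.get(label, BINS["unknown"]))
        release()

# Example run with a fake "camera" that always detects a screw.
ai_pick_and_place((0.0, 0.0), see=lambda: "screw")
```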
Artificially intelligent robots are a fairly recent development. As research and development continue, we can expect artificially intelligent robots to start to resemble the humanoid characters we see in movies.
Self-aware robots
One of the barriers to robots being able to mimic humans is that robots don’t have proprioception—a sense of awareness of muscles and body parts—a sort of “sixth sense” for humans that is vital to how we coordinate movement. Roboticists have been able to give robots sight through cameras, smell and taste through chemical sensors, and hearing through microphones, but they have struggled to help robots acquire this “sixth sense” of perceiving their own bodies.
Now, using sensory materials and machine-learning algorithms, progress is being made. In one case, randomly placed sensors detect touch and pressure and send data to a machine-learning algorithm that interprets the signals.
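As a rough illustration of that idea (not the actual research setup; the sensor model and all numbers below are invented), a learned mapping can turn raw readings from randomly placed pressure sensors into an estimated touch location:

```python
import numpy as np

# Hedged sketch of sensorized "skin": readings from randomly placed
# pressure sensors are mapped to a touch location by a learned model.
# The sensor response model and all numbers are invented.

rng = np.random.default_rng(0)
sensors = rng.uniform(0, 1, size=(32, 2))  # random sensor spots on a 1x1 skin

def read_sensors(touch_xy):
    """Each sensor responds more strongly the closer it is to the touch."""
    dists = np.linalg.norm(sensors - touch_xy, axis=1)
    return np.exp(-10 * dists) + rng.normal(0, 0.01, size=len(sensors))

# "Training": record the sensor pattern for many known touch locations.
train_xy = rng.uniform(0, 1, size=(500, 2))
train_X = np.array([read_sensors(xy) for xy in train_xy])

def interpret(signal, k=5):
    """k-nearest-neighbour regression: average the k closest known touches."""
    nearest = np.argsort(np.linalg.norm(train_X - signal, axis=1))[:k]
    return train_xy[nearest].mean(axis=0)

true_touch = np.array([0.3, 0.7])
print("estimated touch location:", interpret(read_sensors(true_touch)))
```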
In another example, roboticists are trying to develop a robotic arm that is as dexterous as a human arm and that can grab a variety of objects. Until recent developments, the process involved either individually training a robot to perform every task or giving a machine-learning algorithm an enormous dataset of experience to learn from.
Robert Kwiatkowski and Hod Lipson of Columbia University are working on “task-agnostic self-modelling machines.” Like an infant in its first year of life, the robot begins with no knowledge of its own body or the physics of motion. As it repeats thousands of movements, it takes note of the results and builds a model of them. A machine-learning algorithm then helps the robot strategize about future movements based on its prior motion. In doing so, the robot learns how to interpret its own actions.
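In miniature, the loop looks something like the following Python sketch. The two-link toy arm stands in for the real hardware, and the “self-model” is just a lookup over remembered experience; none of this is the Columbia team’s actual code:

```python
import numpy as np

# Toy version of the self-modelling loop: babble, record, model, plan.
# The two-link arm and nearest-neighbour "model" are stand-ins invented
# for illustration, not the Columbia team's actual setup.

rng = np.random.default_rng(1)
LINK1, LINK2 = 1.0, 0.8          # link lengths the robot does NOT know

def true_hand_position(a):
    """The physics the robot must discover by experimenting."""
    return np.array([LINK1 * np.cos(a[0]) + LINK2 * np.cos(a[0] + a[1]),
                     LINK1 * np.sin(a[0]) + LINK2 * np.sin(a[0] + a[1])])

# 1. Motor babbling: thousands of random movements, outcomes recorded.
angles = rng.uniform(-np.pi, np.pi, size=(5000, 2))
outcomes = np.array([true_hand_position(a) for a in angles])

# 2. A simple self-model: "remember what happened" (nearest neighbour).
def predict_hand(query):
    i = np.argmin(np.linalg.norm(angles - query, axis=1))
    return outcomes[i]

# 3. Strategizing with the self-model: choose the remembered movement
#    whose outcome lies closest to a goal, without any new trials.
goal = np.array([1.2, 0.6])
best = angles[np.argmin(np.linalg.norm(outcomes - goal, axis=1))]
print("chosen joint angles:", best, "-> hand lands at", true_hand_position(best))
```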
A team of researchers at the USC Viterbi School of Engineering believes it is the first to develop an AI-controlled robotic limb that can recover from falling without being explicitly programmed to do so. This is revolutionary work that shows robots learning by doing.
Artificial intelligence enables modern robotics. Machine learning and AI help robots to see, walk, speak, smell and move in increasingly human-like ways.