AI to power robotic bees

In the near future, we might have to be a little more careful about swatting pesky bees while we’re trying to enjoy some time outdoors. British researchers at the Universities of Sussex and Sheffield are developing a computer model of a bee’s brain that they hope can help scientists better understand the brains of more-complex animals, such as humans, and perhaps power artificial intelligence systems for bee-like robots.
Called “Green Brain,” the project is trying to advance the science of AI beyond systems that simply follow a predetermined set of rules, and into an area where AI systems can act autonomously and respond to sensory signals. The researchers are focusing on the parts of a bee’s brain responsible for vision and sense of smell, and expect the system to be able to find the “source of particular odors or gases in the same way that a bee can identify particular flowers,” among other things.
Although the mission is a difficult one, the project’s relatively narrow focus should make it easier to pull off than AI efforts that target more-complex human brains. Scientists have tried modeling human decision-making for decades, but humans’ irrationality and seemingly random choices make it difficult to do outside of specific situations or controlled experiments.
The Green Brain team suggests its AI system could be used to power robotic bees that help pollinate plants in the face of declining bee populations worldwide, and that it could also be useful in search-and-rescue missions. To carry out any of these tasks, researchers have to design systems that are capable of adapting to the world around them: too strong a reliance on fixed rules and instructions would limit a robotic bee’s effectiveness, especially as a research tool for understanding why bees react to sensory stimuli the way they do.
The researchers working on Green Brain think their work on AI might physically manifest itself in a project like RoboBees, which is currently underway at Harvard University. Aside from pollination and search-and-rescue, the RoboBees team suggests its robotic bees could be used for weather mapping, traffic monitoring and even military surveillance. That project also focuses heavily on bees’ colony behavior to coordinate group decision-making and action.
Of course, robotic bees are as much hardware as they are artificial intelligence — how they consume and process data will affect the decisions they ultimately make — which is why various research projects might want to combine forces to some degree. Whereas Green Brain has partnered with GPU manufacturer Nvidia to ensure fast modeling and fast calculations within the bees’ brains, RoboBees is working on the whole package: it’s building sensors, wings and everything else necessary to make a robotic bee fly and sense the world like an actual insect.

Robots and Artificial Intelligence

Artificial intelligence (AI) is arguably the most exciting field in robotics. It's certainly the most controversial: everybody agrees that a robot can work on an assembly line, but there's no consensus on whether a robot can ever be intelligent.

Like the term "robot" itself, artificial intelligence is hard to define. Ultimate AI would be a recreation of the human thought process -- a man-made machine with our intellectual abilities. This would include the ability to learn just about anything, the ability to reason, the ability to use language and the ability to formulate original ideas. Roboticists are nowhere near achieving this level of artificial intelligence, but they have made a lot of progress with more limited AI. Today's AI machines can replicate some specific elements of intellectual ability.
Computers can already solve problems in limited realms. The basic idea of AI problem-solving is very simple, though its execution is complicated. First, the AI robot or computer gathers facts about a situation through sensors or human input. The computer compares this information to stored data and decides what the information signifies. The computer runs through various possible actions and predicts which action will be most successful based on the collected information. Of course, the computer can only solve problems it's programmed to solve -- it doesn't have any generalized analytical ability. Chess computers are one example of this sort of machine.
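To make that loop concrete, here is a minimal sketch in Python. The situations, actions and scores are all invented for illustration, standing in for whatever a real system's sensors and stored data would contain:

```python
# Stored data: what each sensor reading signifies, and how successful
# each possible action is predicted to be in that situation.
# (All names and numbers here are hypothetical.)
KNOWN_SITUATIONS = {
    "obstacle_ahead": {"turn_left": 0.8, "turn_right": 0.7, "go_forward": 0.0},
    "clear_path":     {"turn_left": 0.1, "turn_right": 0.1, "go_forward": 0.9},
}

def decide(sensor_reading: str) -> str:
    """Compare the reading to stored data, run through the possible
    actions and return the one predicted to be most successful."""
    scores = KNOWN_SITUATIONS.get(sensor_reading)
    if scores is None:
        # The system can only solve problems it was programmed to solve.
        raise ValueError(f"unknown situation: {sensor_reading}")
    return max(scores, key=scores.get)

print(decide("obstacle_ahead"))  # -> turn_left
```

Note that all of the "intelligence" lives in the stored table; faced with a reading that isn't in it, the program can only give up.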
Some modern robots also have the ability to learn in a limited capacity. Learning robots recognize whether a certain action (moving their legs in a certain way, for instance) achieved a desired result (navigating an obstacle). The robot stores this information and attempts the successful action the next time it encounters the same situation. Again, modern computers can only do this in very limited situations. They can't absorb any sort of information the way a human can. Some robots can learn by mimicking human actions. In Japan, roboticists have taught a robot to dance by demonstrating the moves themselves.
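That trial-and-error learning can be sketched in a few lines as well. Here the robot tries actions at random, remembers whichever one achieves the desired result, and reuses it the next time it meets the same situation (again, the situations and actions are hypothetical):

```python
import random

ACTIONS = ["short_step", "long_step", "side_step"]
learned = {}  # situation -> action that is remembered to have worked

def attempt(situation, succeeded):
    if situation in learned:
        return learned[situation]      # reuse the remembered action
    action = random.choice(ACTIONS)    # otherwise, trial and error
    if succeeded(situation, action):
        learned[situation] = action    # store what worked
    return action

# Pretend a long step is what clears this particular obstacle.
clears_obstacle = lambda situation, action: action == "long_step"
for _ in range(20):
    attempt("low_obstacle", clears_obstacle)
print(learned)  # almost certainly: {'low_obstacle': 'long_step'}
```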
Some robots can interact socially. Kismet, a robot at M.I.T.'s Artificial Intelligence Lab, recognizes human body language and voice inflection and responds appropriately. Kismet's creators are interested in how humans and babies interact, based only on tone of voice and visual cues. This low-level interaction could be the foundation of a human-like learning system.
Kismet and other humanoid robots at the M.I.T. AI Lab operate using an unconventional control structure. Instead of directing every action from a central computer, the robots control lower-level actions with lower-level computers. The lab's director, Rodney Brooks, believes this is a more accurate model of human intelligence: we do most things automatically, rather than deciding to do them at the highest level of consciousness.
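A rough sketch of that layered idea, loosely in the spirit of Brooks' subsumption architecture, might look like the following. Each behavior reacts directly to sensor data rather than waiting on a central planner, and a higher-priority behavior overrides the ones below it only when it has something to say (the behaviors and thresholds are invented for illustration):

```python
def avoid_obstacles(sensors):
    """High-priority reflex: steer away when something is close."""
    if sensors["distance_ahead"] < 0.3:
        return "turn_left"
    return None  # no opinion; defer to lower layers

def wander(sensors):
    """Low-level default behavior: keep moving."""
    return "go_forward"

LAYERS = [avoid_obstacles, wander]  # highest priority first

def control_step(sensors):
    for behavior in LAYERS:
        command = behavior(sensors)
        if command is not None:
            return command  # a higher layer subsumes those below it

print(control_step({"distance_ahead": 0.2}))  # -> turn_left
print(control_step({"distance_ahead": 2.0}))  # -> go_forward
```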
The real challenge of AI is to understand how natural intelligence works. Developing AI isn't like building an artificial heart -- scientists don't have a simple, concrete model to work from. We do know that the brain contains billions and billions of neurons, and that we think and learn by establishing electrical connections between different neurons. But we don't know exactly how all of these connections add up to higher reasoning, or even low-level operations. The complex circuitry seems incomprehensible.
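One way to get a feel for the "connections" idea is the artificial neuron: weighted inputs are summed and compared against a threshold, a drastic simplification of what biological neurons do. The sketch below illustrates the modeling approach, not a claim about how real neurons compute:

```python
def neuron(inputs, weights, threshold=1.0):
    """Fire (return 1) if the weighted sum of inputs crosses the threshold."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Two input signals whose connection strengths are both 0.6:
print(neuron([1, 1], [0.6, 0.6]))  # -> 1 (both inputs active: fires)
print(neuron([1, 0], [0.6, 0.6]))  # -> 0 (one input alone is not enough)
```

Networks of such units, with connection strengths adjusted by experience, underlie much modern AI research -- yet how billions of real neurons add up to reasoning remains an open question.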
Because of this, AI research is largely theoretical. Scientists hypothesize about how and why we learn and think, and they experiment with their ideas using robots. Brooks and his team focus on humanoid robots because they feel that being able to experience the world like a human is essential to developing human-like intelligence. It also makes it easier for people to interact with the robots, which potentially makes it easier for the robots to learn.
Just as physical robotic design is a handy tool for understanding animal and human anatomy, AI research is useful for understanding how natural intelligence works. For some roboticists, this insight is the ultimate goal of designing robots. Others envision a world where we live side by side with intelligent machines and use a variety of lesser robots for manual labor, health care and communication. A number of robotics experts predict that robotic evolution will ultimately turn us into cyborgs -- humans integrated with machines. Conceivably, people in the future could load their minds into a sturdy robot and live for thousands of years!
In any case, robots will certainly play a larger role in our daily lives in the future. In the coming decades, robots will gradually move out of the industrial and scientific worlds and into daily life, in the same way that computers spread to the home in the 1980s.
The best way to understand robots is to look at specific designs. The links on the next page will show you a variety of robot projects around the world.