- robot
/roh"beuht, -bot/, n.
1. a machine that resembles a human and does mechanical, routine tasks on command.
2. a person who acts and responds in a mechanical, routine manner, usually subject to another's will; automaton.
3. any machine or mechanical device that operates automatically with humanlike skill.
adj.
4. operating automatically: a robot train operating between airline terminals.
[< Czech, coined by Karel Čapek in the play R.U.R. (1920) from the base robot-, as in robota compulsory labor, robotník peasant owing such labor]
* * *
Any automatically operated machine that replaces human effort, though it may not look much like a human being or function in a humanlike manner. The term comes from the play R.U.R. by Karel Čapek (1920). Major developments in microelectronics and computer technology since the 1960s have led to significant advances in robotics. Advanced, high-performance robots are used today in automobile manufacturing and aircraft assembly, and electronics firms use robotic devices together with other computerized instruments to sort or test finished products.
* * *
Introduction
Any automatically operated machine that replaces human effort, though it may not resemble human beings in appearance or perform functions in a humanlike manner. By extension, robotics is the engineering discipline dealing with the design, construction, and operation of robots.

The concept of artificial humans predates recorded history (see automaton), but the modern term robot derives from the Czech word robota (“forced labour” or “serf”), used in Karel Čapek's play R.U.R. (1920). The play's robots were manufactured humans, heartlessly exploited by factory owners until they revolted and ultimately destroyed humanity. Whether they were biological, like the monster in Mary Shelley's Frankenstein (1818), or mechanical was not specified, but the mechanical alternative inspired generations of inventors to build electrical humanoids.

The word robotics first appeared in Isaac Asimov's science-fiction story "Runaround" (1942). Along with Asimov's later robot stories, it set a new standard of plausibility about the likely difficulty of developing intelligent robots and the technical and social problems that might result. "Runaround" also contained Asimov's famous Three Laws of Robotics:
● 1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
● 2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
● 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

This article traces the development of robots and robotics. For further information on industrial applications, see the article automation.

Industrial robots
Though not humanoid in form, machines with flexible behaviour and a few humanlike physical attributes have been developed for industry. The first stationary industrial robot was the programmable Unimate, an electronically controlled hydraulic heavy-lifting arm that could repeat arbitrary sequences of motions. It was invented in 1954 by the American engineer George Devol and was developed by Unimation Inc., a company founded in 1956 by American engineer Joseph Engelberger. In 1959 a prototype of the Unimate was introduced in a General Motors Corporation die-casting factory in Trenton, New Jersey. In 1961 Condec Corp. (after purchasing Unimation the preceding year) delivered the world's first production-line robot to the GM factory; it had the unsavoury task (for humans) of removing and stacking hot metal parts from a die-casting machine. Unimate arms continue to be developed and sold by licensees around the world, with the automobile industry remaining the largest buyer.

More advanced computer-controlled electric arms guided by sensors were developed in the late 1960s and 1970s at the Massachusetts Institute of Technology (MIT) and at Stanford University, where they were used with cameras in robotic hand-eye research. Stanford's Victor Scheinman, working with Unimation for GM, designed the first such arm used in industry. Called PUMA (Programmable Universal Machine for Assembly), these arms have been used since 1978 to assemble automobile subcomponents such as dash panels and lights. PUMA was widely imitated, and its descendants, large and small, are still used for light assembly in electronics and other industries. Since the 1990s small electric arms have become important in molecular biology laboratories, precisely handling test-tube arrays and pipetting intricate sequences of reagents.

Mobile industrial robots also first appeared in 1954. In that year a driverless electric cart, made by Barrett Electronics Corporation, began pulling loads around a South Carolina grocery warehouse. Such machines, dubbed AGVs (Automatic Guided Vehicles), commonly navigate by following signal-emitting wires entrenched in concrete floors. In the 1980s AGVs acquired microprocessor controllers that allowed more complex behaviours than those afforded by simple electronic controls. In the 1990s a new navigation method became popular for use in warehouses: AGVs equipped with a scanning laser triangulate their position by measuring reflections from fixed retro-reflectors (at least three of which must be visible from any location).
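The geometry behind such a laser fix is elementary resection from known landmarks. The Python sketch below is only an illustration of the idea, not any manufacturer's algorithm: it assumes, hypothetically, that the scanner reports the range to each of three surveyed reflectors and recovers the vehicle's position from the resulting trilateration equations; the warehouse coordinates and readings are invented for the example.

```python
# Minimal sketch (not any vendor's algorithm) of how an AGV could fix its
# position from a scanning-laser survey of three fixed retro-reflectors.
# Real systems typically work from measured bearing angles; for brevity this
# sketch assumes the scanner also reports a range to each reflector and
# solves the equivalent trilateration problem.

def locate(reflectors, ranges):
    """Return (x, y) of the vehicle, given three known reflector positions
    [(x1, y1), (x2, y2), (x3, y3)] and the measured range to each one."""
    (x1, y1), (x2, y2), (x3, y3) = reflectors
    r1, r2, r3 = ranges
    # Subtracting the circle equation of reflector 1 from those of
    # reflectors 2 and 3 leaves two linear equations in x and y.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = (x2**2 - x1**2) + (y2**2 - y1**2) - (r2**2 - r1**2)
    b2 = (x3**2 - x1**2) + (y3**2 - y1**2) - (r3**2 - r1**2)
    det = a11 * a22 - a12 * a21   # zero only if the reflectors are collinear
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# Example with made-up warehouse coordinates (metres):
beacons = [(0.0, 0.0), (20.0, 0.0), (0.0, 15.0)]
print(locate(beacons, ranges=(10.0, 13.416, 12.042)))   # ≈ (8.0, 6.0)
```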
Although industrial robots first appeared in the United States, the business did not thrive there. Unimation was acquired by Westinghouse Electric Corporation in 1983 and shut down a few years later. Cincinnati Milacron, Inc., the other major American hydraulic-arm manufacturer, sold its robotics division in 1990 to the Swedish-Swiss firm Asea Brown Boveri Ltd. (ABB). Adept Technology, Inc., spun off from Stanford and Unimation to make electric arms, is the only remaining American firm. Foreign licensees of Unimation, notably in Japan and Sweden, continue to operate, and in the 1980s other companies in Japan and Europe began to vigorously enter the field. The prospect of an aging population and consequent worker shortage induced Japanese manufacturers to experiment with advanced automation even before it gave a clear return, opening a market for robot makers. By the late 1980s Japan—led by the robotics divisions of Fanuc Ltd., Matsushita Electric Industrial Company, Ltd., Mitsubishi Group, and Honda Motor Company, Ltd.—was the world leader in the manufacture and use of industrial robots. High labour costs in Europe similarly encouraged the adoption of robot substitutes, with industrial robot installations in the European Union exceeding Japanese installations for the first time in 2001.

Robot toys
Lack of reliable functionality has limited the market for industrial and service robots (built to work in office and home environments). Toy robots, on the other hand, can entertain without performing tasks very reliably, and mechanical varieties have existed for thousands of years. (See automaton.) In the 1980s microprocessor-controlled toys appeared that could speak or move in response to sounds or light. More advanced ones in the 1990s recognized voices and words. In 1999 the Sony Corporation introduced a doglike robot named AIBO, with two dozen motors to activate its legs, head, and tail, two microphones, and a colour camera, all coordinated by a powerful microprocessor. More lifelike than anything before, AIBOs chased coloured balls and learned to recognize their owners and to explore and adapt. Although the first AIBOs cost $2,500, the initial run of 5,000 sold out immediately over the Internet.

Robotics research
Dexterous industrial manipulators and industrial vision have roots in advanced robotics work conducted in artificial intelligence (AI) laboratories since the late 1960s. Yet, even more than with AI itself, these accomplishments fall far short of the motivating vision of machines with broad human abilities.
Techniques for recognizing and manipulating objects, reliably navigating spaces, and planning actions have worked in some narrow, constrained contexts, but they have failed in more general circumstances.

The first robotics vision programs, pursued into the early 1970s, used statistical formulas to detect linear boundaries in robot camera images and clever geometric reasoning to link these lines into boundaries of probable objects, providing an internal model of their world. Further geometric formulas related object positions to the joint angles needed to allow a robot arm to grasp them, or to the steering and drive motions needed to get a mobile robot around (or to) the object. This approach was tedious to program and frequently failed when unplanned image complexities misled the first steps. An attempt in the late 1970s to overcome these limitations by adding an expert-system component for visual analysis mainly made the programs more unwieldy—substituting complex new confusions for simpler failures.

In the mid-1980s Rodney Brooks of the MIT AI lab used this impasse to launch a highly visible new movement that rejected the effort to have machines create internal models of their surroundings. Instead, Brooks and his followers wrote computer programs with simple subprograms that connected sensor inputs to motor outputs, each subprogram encoding a behaviour such as avoiding a sensed obstacle or heading toward a detected goal. There is evidence that many insects function largely this way, as do parts of larger nervous systems. The approach resulted in some very engaging insectlike robots, but (as with real insects) their behaviour was erratic whenever their sensors were momentarily misled, and the approach proved unsuitable for larger robots. Also, this approach provided no direct mechanism for specifying long, complex sequences of actions—the raison d'être of industrial robot manipulators and surely of future home robots (note, however, that in 2004 iRobot Corporation sold more than one million robot vacuum cleaners capable of simple, insectlike behaviours, a first for a service robot).
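In code, this behaviour-based style reduces to a fixed priority ordering over small rules that couple sensor readings directly to motor commands, with no internal world model. The Python sketch below is a schematic illustration of that arbitration idea, not Brooks's actual subsumption implementation; the sensor fields and commands are invented.

```python
# Schematic sketch of behaviour-based (subsumption-style) control: each
# behaviour maps raw sensor readings directly to a motor command, and a
# fixed priority order decides which one drives the robot each cycle.
# Sensor fields and command tuples are invented for illustration.

def avoid_obstacle(s):
    if s["front_sonar_m"] < 0.3:                 # something close ahead
        return ("turn", 90 if s["left_sonar_m"] > s["right_sonar_m"] else -90)
    return None                                  # no opinion: defer to lower layers

def seek_goal(s):
    if s["goal_bearing_deg"] is not None:        # goal beacon detected
        return ("turn", s["goal_bearing_deg"])   # head toward it
    return None

def wander(s):
    return ("forward", 0.2)                      # default: creep forward

BEHAVIOURS = [avoid_obstacle, seek_goal, wander] # highest priority first

def control_step(sensors):
    for behaviour in BEHAVIOURS:                 # first behaviour with an opinion wins
        command = behaviour(sensors)
        if command is not None:
            return command

print(control_step({"front_sonar_m": 0.2, "left_sonar_m": 1.0,
                    "right_sonar_m": 0.4, "goal_bearing_deg": 15}))
# -> ('turn', 90): obstacle avoidance suppresses goal seeking
```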
Meanwhile, other researchers continue to pursue various techniques to enable robots to perceive their surroundings and track their own movements. One prominent example involves semiautonomous mobile robots for exploration of the Martian surface. Because of the long transmission times for signals, these “rovers” must be able to negotiate short distances between interventions from Earth.

A particularly interesting testing ground for fully autonomous mobile robot research is football (soccer). In 1993 an international community of researchers organized a long-term program to develop robots capable of playing this sport, with progress tested in annual machine tournaments. The first RoboCup games were held in 1997 in Nagoya, Japan, with teams entered in three competition categories: computer simulation, small robots, and midsize robots. Merely finding and pushing the ball was a major accomplishment, but the event encouraged participants to share research, and play improved dramatically in subsequent years. In 1998 Sony began providing researchers with programmable AIBOs for a new competition category; this gave teams a standard, reliable, prebuilt hardware platform for software experimentation.

While robot football has helped to coordinate and focus research in some specialized skills, research involving broader abilities is fragmented. Sensors—sonar and laser rangefinders, cameras, and special light sources—are used with algorithms that model images or spaces by using various geometric shapes and that attempt to deduce what a robot's position is, where and what other things are nearby, and how different tasks can be accomplished. Faster microprocessors developed in the 1990s have enabled new, broadly effective techniques. For example, by statistically weighing large quantities of sensor measurements, computers can mitigate individually confusing readings caused by reflections, blockages, bad illumination, or other complications. Another technique employs “automatic” learning to classify sensor inputs—for instance, into objects or situations—or to translate sensor states directly into desired behaviour. Connectionist neural networks containing thousands of adjustable-strength connections are the most famous learners, but smaller, more specialized frameworks usually learn faster and better. In some, a program that does the right thing as nearly as can be prearranged also has “adjustment knobs” to fine-tune the behaviour. Another kind of learning remembers a large number of input instances and their correct responses and interpolates between them to deal with new inputs. Such techniques are already in broad use for computer software that converts speech into text.
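The memory-based scheme just described can be made concrete in a few lines: store labelled examples and answer a new input by interpolating among the stored responses of its nearest neighbours, weighted by similarity. The Python sketch below is a generic nearest-neighbour interpolator over toy data, offered only as an illustration of the technique rather than any particular speech or robotics system.

```python
# Minimal sketch of instance-based learning: remember input/response pairs
# and interpolate between the closest remembered instances to answer a new
# input. The stored data here are toy values, purely for illustration.
import math

class NearestNeighbourLearner:
    def __init__(self, k=3):
        self.k = k
        self.examples = []                       # list of (input_vector, response)

    def remember(self, x, y):
        self.examples.append((x, y))

    def predict(self, x):
        # Find the k stored inputs closest to x (Euclidean distance).
        nearest = sorted((math.dist(x, xi), yi) for xi, yi in self.examples)[: self.k]
        # Inverse-distance weighting: nearer instances count for more.
        weights = [1.0 / (d + 1e-9) for d, _ in nearest]
        return sum(w * y for w, (_, y) in zip(weights, nearest)) / sum(weights)

learner = NearestNeighbourLearner(k=2)
for x, y in [((0.0,), 0.0), ((1.0,), 2.0), ((2.0,), 4.0), ((3.0,), 6.0)]:
    learner.remember(x, y)                       # memorize the sampled mapping y = 2x
print(learner.predict((1.4,)))                   # ≈ 2.8, interpolated from neighbours
```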
The future
Numerous companies are working on consumer robots that can navigate their surroundings, recognize common objects, and perform simple chores without expert custom installation. Perhaps about the year 2020 the process will have produced the first broadly competent “universal robots” with lizardlike minds that can be programmed for almost any routine chore. With anticipated increases in computing power, by 2030 second-generation robots with trainable mouselike minds may become possible. Besides application programs, these robots may host a suite of software “conditioning modules” that generate positive- and negative-reinforcement signals in predefined circumstances.

By 2040 computing power should make third-generation robots with monkeylike minds possible. Such robots would learn from mental rehearsals in simulations that would model physical, cultural, and psychological factors. Physical properties would include shape, weight, strength, texture, and appearance of things and knowledge of how to handle them. Cultural aspects would include a thing's name, value, proper location, and purpose. Psychological factors, applied to humans and other robots, would include goals, beliefs, feelings, and preferences. The simulation would track external events and would tune its models to keep them faithful to reality. This should let a robot learn by imitation and afford it a kind of consciousness. By the middle of the 21st century, fourth-generation robots may exist with humanlike mental power able to abstract and generalize. Researchers hope that such machines will result from melding powerful reasoning programs to third-generation machines. Properly educated, fourth-generation robots are likely to become intellectually formidable.

Hans Peter Moravec

Additional Reading
Hans Moravec, Robot: Mere Machine to Transcendent Mind (1999), expands on the topics of this article. Peter Menzel and Faith D'Aluisio, Robo Sapiens: Evolution of a New Species (2000), provides dramatic images and commentary on robotics research. Rolf Dieter Schraft and Gernot Schmierer, Service Robots (2000), gives a broad pictorial survey of nonindustrial robots. Silvia Coradeschi et al. (eds.), RoboCup 2001: Robot Soccer World Cup V (2002), contains technical presentations by the participants.

* * *
Universalium. 2010.