Historically, robotics in industry meant automation, a field that asks how machines can perform tasks more effectively than humans. These days, new innovations highlight a very different design space: what people and robots can do better together. Instead of idolizing machines or disparaging their shortcomings, these human-machine partnerships acknowledge and build upon human capability. From autonomous cars reducing traffic accidents, to grandparents visiting their grandchildren by means of telepresence robots, these technologies will soon be part of our everyday lives and environments. What they have in common is the intent to support and empower their human partners with robotic capability and, ultimately, to complement human objectives.

Human cultural response to robots has policy implications. Policy affects what we will and will not let robots do. It affects where we insist on human primacy and what sorts of decisions we will delegate to machines. One current example is the ongoing campaign by Human Rights Watch for an international treaty to ban military robots with autonomous lethal firing power, to ensure that a human being remain “in the loop” in any lethal decision. No such robots currently exist, nor does any military have plans to deploy them, nor is it clear whether robotic performance would be inferior to, or even much different from, human performance in lethal-force situations. Yet the cultural aversion to robots with the power to pull the trigger on their own is such that the campaign has gained significant traction.

Cultural questions will become key on a domestic, civilian level too. Will people be comfortable getting on an airplane with no pilot, even if domestic passenger drones have a much better safety record than human-piloted commercial aviation? Will a patient be disconcerted or pleasantly surprised by a medical device that makes small talk, terrified or reassured by one that makes highly accurate incisions? Sociability, cultural background and technological stereotypes all influence the answers to these questions.

My background is in social robotics, a field that designs robots whose behavior systems are inspired by how humans communicate with each other. Social roboticists might incorporate posture, timing of motion, prosody of speech, or reactions to people and environments into a robot’s behavioral repertoire to help communicate the robot’s state or intentions. The benefit of such systems is that they enable bystanders and interaction partners to understand and interact with robots without prior training. This opens up new applications for embodied machines in our everyday lives—for example, guiding us to the right product at Home Depot.

My purpose in this paper is not to provide detailed policy recommendations but to describe a series of important choices we face in designing robots that people will actually want to use and engage with. Design considerations today can foreshadow policy choices in the future. Much of the current research into human-robotic teams explores plausible practical applications, given improved technological know-how and a better understanding of social behavior. For now, these are pre-policy technical design challenges for collaborative robots that will, or could, have public policy implications down the road. But handling them well at the design phase may reduce policy pressures over time.

From driverless cars to semi-autonomous medical devices to things we have not even imagined yet, good decisions guiding the development of human-robotic partnerships can help avoid unnecessary policy friction over promising new technologies and help maximize human benefit. In this paper, I provide an overview of some of these pre-policy design considerations; to the extent that we can think about smart social design now, we may be better able to navigate public policy questions in the future.

About the Author


Heather Knight is a PhD candidate at Carnegie Mellon and founder of Marilyn Monrobot, which produces robot comedy performances and an annual Robot Film Festival. Her current research involves human-robot interaction, non-verbal machine communications and non-anthropomorphic social robots.


This paper is part of a series focused on the future of civilian robotics, which seeks to answer the varied legal questions around the integration of robotics into human life.


Human Cultural Response to Robots

If you are reading this paper, you are probably highly accustomed to being human. It might feel like nothing special. But after 12 years in robotics, with researchers celebrating when we manage to get robots to enact the simplest of humanlike behaviors, it has become clear to me how complex human actions are, and how impressive human capabilities are, from our eyesight to our emotive communication. Unlike robots, people are uniquely talented at adapting to novel or dynamic situations, such as acknowledging a new person entering the room while maintaining a conversation with someone else. We can identify what is important in a complex scene in contexts that machines find difficult, like seeing a path through a forest. And we can easily parse human or social significance, noticing, for example, that someone is smiling but clearly blocking our entry, or knowing without asking that a store is closed. We are also creative and sometimes do unpredictable things.

By contrast, robots perform best in highly constrained tasks—for example, looking for possible matches to the address you are typing into your navigation system within a few miles of your GPS coordinates. Their ability to search large amounts of data within those constraints, their design potential for unique sensing or physical capabilities (like taking a photograph or lifting a heavy object), and their ability to loop us into remote information and communications are all examples of things we could not do without help. Thus, machines enable people, but people also guide and provide the motivation for machines. Partnering the capabilities of people with those of machines enables innovation, improved application performance and exploration beyond what either partner could do individually.

To build such behavior systems successfully, the field of social robotics adapts methodology from psychology and, in my recent work, entertainment. Human coworkers are not just useful colleagues; collaboration requires rapport and, ideally, pleasure in each other’s company. Similarly, while machines with social capabilities may provide better efficiency and utility, charismatic machines could go beyond that, creating shared value and enjoyment. My hope is that adapting techniques from actor training and collaborating with performers can provide additional methods to bootstrap this process. Thus, among the various case studies of collaborative robots detailed below, I will include insights from creating a robot comedian.

Will people be comfortable getting on an airplane with no pilot?

Robots do not require eyes, arms or legs for us to treat them like social agents. It turns out that we rapidly and instinctively assess machine capabilities and personas, perhaps because machines have physical embodiments and frequently readable objectives. Sociability is our natural interface, to each other and to living creatures in general. As part of that innate behavior, we quickly seek to distinguish objects from agents. In fact, as social creatures, it is often our default behavior to anthropomorphize moving robots.

Animation is filled with anthropomorphic and non-anthropomorphic characters, from the Pixar lamp to the magic carpet in Aladdin. Neuroscientists have discovered that one key to our attribution of agency is goal-directed motion. 1 Heider and Simmel tested this theory with animations of simple shapes, and subjects easily attributed character-hood and thought to moving triangles. 2

To help understand what distinguishes object behavior from agent behavior, imagine a falling leaf that weaves back and forth in the air, following the laws of physics. Although it is in motion, that motion is not voluntary, so we call the leaf an object. If a butterfly appears in the scene, however, and the leaf suddenly moves into close proximity to the butterfly, maintaining that proximity even as the butterfly continues to move, we would immediately say the leaf had “seen” the butterfly and was “following” it. In fact, neuroscientists have found that not attributing intentionality to similar examples of goal-directed behavior can be an indication of a social disorder. 3 Agency attribution is part of being human.

One implication of ascribing agency to machines is that we can bond with them regardless of the machine’s innate experience, as the following examples demonstrate. In 2008, Cory Kidd completed a study with a robot intended to aid in fitness and weight-loss goals by providing a social presence with which study participants tracked their routines. 4 The robot made eye contact (its eyes were its only moving parts), vocalized its greetings and instructions, and had a touch-screen interface for data entry. Seeking to stay in its partner’s good graces, it might try to re-engage participants by telling them how nice it was to see them again if they had not visited the robot in a few days. Its programming included a dynamic internal variable rating its relationship with its human partner.
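The paper does not describe how Kidd’s robot was implemented, but as an illustration only, a relationship variable like the one described could be sketched as a score that rises with each check-in, decays during absences, and selects a warmer re-engagement greeting after a few days away. The class, thresholds, and greetings below are hypothetical, not Kidd’s actual design.

```python
from datetime import datetime

class RelationshipModel:
    """Hypothetical sketch of a dynamic 'relationship' variable for a coaching robot."""

    def __init__(self):
        self.score = 0.5                  # 0.0 = disengaged, 1.0 = strong rapport (assumed scale)
        self.last_visit = datetime.now()  # time of the participant's most recent check-in

    def record_visit(self):
        """Each check-in nudges the relationship score upward and resets the absence clock."""
        self.score = min(1.0, self.score + 0.1)
        self.last_visit = datetime.now()

    def greeting(self):
        """Decay the score for days away, then pick a greeting; after a long absence, re-engage."""
        days_away = (datetime.now() - self.last_visit).days
        self.score = max(0.0, self.score - 0.05 * days_away)
        if days_away >= 3:
            return "It's so nice to see you again! I missed our check-ins."
        if self.score > 0.7:
            return "Welcome back! Ready to log today's meals?"
        return "Hello. Let's record your progress."

# Example use: greet the participant, then log their visit.
robot = RelationshipModel()
print(robot.greeting())
robot.record_visit()
```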

As social creatures, it is often our default behavior to anthropomorphize moving robots.

When Kidd ran a study comparing how well participants tracked their habits, he compared three groups: those using pen and paper, a touch screen alone, or a touch screen with a robot. All participants in the first group (pen and paper) gave up before the six weeks were over, and only a few in the second (touch screen only) chose to extend the experiment to eight weeks when offered, though they had all completed it. Almost all those in the last group (robot with touch screen) completed the experiment and chose to extend it the extra two weeks. In fact, with the exception of one participant who never turned his robot on, most in the third group named their robots, and all used social descriptors like “he” or “she” during their interviews. One participant even avoided returning the study conductor’s calls at the end of the study because she did not want to return her robot. With a degree of playfulness, the participants had treated these robots as characters and perhaps even bonded with them. The robots were certainly more successful at engaging them in completing their food and fitness journals than the nonsocial technologies.

Sharing traumatic experiences may also encourage bonding, as we have seen in soldiers who work with bomb-disposal robots. 5 In the field, these robots work alongside their human partners, putting themselves in harm’s way to keep their partners out of danger. After working together for an extended period, a soldier might feel that the robot has saved his life again and again. This is not just theoretical. iRobot, the manufacturer of the PackBot bomb-disposal robot, has actually received boxes of shrapnel, the robots’ remains after an explosion, with a note saying, “Can you fix it?” When the company offers to send the unit a new robot, the soldiers say, “No, we want that one.” That specific robot was the one they had shared experiences with, bonded with, and the one they did not want to “die.”

Of course, people do not always bond with machines. A bad social design can be difficult to interpret, or off-putting instead of engaging. One handy rubric robot designers reference for the latter is the Uncanny Valley. 6 The concept is that making machines more humanlike is good up to a point, after which they become discomforting (creepy), until the design approaches full human likeness, which is the best design of all. The theoretical graph of the Uncanny Valley includes two curves: one for agents that are immobile (a photograph of a dead person, for example, falls in the valley), and another, with higher peaks and deeper valleys, for agents that move (a zombie is the moving counterpart of that example).


In my interpretation, part of the discomfort in people’s response to robots with very humanlike designs is that their behaviors are not yet fully humanlike, and we are extremely familiar with what humanlike behavior should look like. Thus, the more humanlike a robot is, the higher the bar its behaviors must meet before we find its actions appropriate. The robotic toy Pleo makes use of this idea. It is supposed to be a baby dinosaur, an animal with which we are conveniently unfamiliar. This is a clever choice, because unlike robotic pets modeled on dogs or cats, we have nothing to compare it against in evaluating its behaviors. In many cases, it can be similarly convenient to use more cartoonized or even non-anthropomorphic designs. There is no need for all robots to look, even somewhat, like people.


Reuters - (L) Japan's largest toymaker Bandai Co Ltd President Takeo Takasu holds the company's new talking toy robot based on the popular cartoon character named "Doraemon", a robot cat from the future; (R) Actor Richard Eden dressed as Robocop

Cultural Variations in Response to Robots

Our expectations of robots and our responses to their designs vary internationally; the Uncanny Valley curve has a different arc depending on where you are. Certainly, our storytelling diverges greatly. In Japan, robots are cute and cuddly, and people are apt to think of robotic pets. In the United States, by contrast, robots are scary, and we tend to think of them as threatening.

Cultural response matters to people’s willingness to adopt robotic systems. This is especially important in areas of service, particularly caregiving services of one sort or another, where human comfort is both the goal and, to some degree, necessary for human cooperation in achieving that goal.

News reports about robot technologies in the United States frequently reference doomsday scenarios reminiscent of Terminator or RoboCop, even when the innovation is innocuous. My PhD advisor, Reid Simmons, jokes that roboticists should address such human fears by asking ourselves, “How prominent does the big red button need to be on the robots we sell?” Although there are notable examples to the contrary (WALL-E, Johnny 5, C-3PO), it is true that Hollywood likes to dramatize scary robots, at least some of the time (Skynet, HAL, Daleks, Cylons).

One explanation for the differing cultural responses could be religious in origin. The roots of the Western Terminator complex may come from the predominance of monotheistic faiths in the West. If it is God’s role to create humans, then humans who create humanlike machines could be seen as usurping the role of God, an act presumed to invite bad consequences. Regardless of current religious practice, such storytelling can permeate cultural expectations. We see this construct in Mary Shelley’s Frankenstein, first published in 1818. A fictional scientist, Dr. Frankenstein, sews corpses together and then brings his super-creature to life in a lightning storm. Upon seeing it animated, he is horrified at the result, and, abandoned by its creator, the creature turns to ill behavior. This sense of inevitability is cultural, not logical.

In Japan, robots are cute and cuddly. In the United States, robots are scary.

In Japan, by way of contrast, the early religious history is based on Shintoism. In Shinto animism, objects, animals, and people all share common “spirits,” which naturally want to be in harmony. 7 Thus, there is no hierarchy of species, and the default expectation is that the outcome of new technologies will complement human society. In the ever-popular Japanese cartoon series Astro Boy, we find a formation story very similar to Frankenstein’s, but the story’s cultural environment breeds an opposite conclusion. Astro Boy is a robot created by a fictional Ministry of Science to replace the director’s deceased son. Initially rejected by that parent figure, he joins a circus, where he is rediscovered years later and becomes a superhero, saving society from human flaws.

