Negative aspects of applying social robots in education

At the same time, serious innovations inevitably bring serious disadvantages. First of all, social robots today are unable to manage a class, motivate students, or tailor an individual approach to specific students. They cannot fully express emotions, cannot play a guitar and sing songs, and cannot cope with students’ jokes and antics (Sabanovic 2007). In fact, a robot cannot perform the full range of functions of a human teacher, but only a small portion of them. Besides, a modern robot understands only the most basic voice commands. Social robots amaze with their communication skills and fantastic imitation of emotional contact, but they are beings operating at the level of instinct, incapable of introspection: certain keywords simply trigger rigid, stereotyped responses in them (Becker 2006).
Another significant disadvantage concerns national specificity. Students’ attitudes toward a robot in Japanese (or Korean, Chinese) schools and in Western Europe will differ, and, accordingly, the students will have different motivations for learning alongside the robot (Sabanovic 2007). In addition, the shortage of skilled workers and, accordingly, unemployment rates are relevant to different countries to varying degrees.
Another important drawback is that robots reproduce (and so far, for obvious reasons, can only reproduce) the existing paradigm of organizing the learning process, created more than 300 years ago by Jan Comenius: class-and-lesson instruction of “standardized” students grouped by age in one place. In this system, the teacher’s function as keeper and reproducer of knowledge has already been largely taken over by books, television, CDs, the Internet, various communities, etc. (Tanaka 2007). Moreover, this approach was created in the era of the Industrial Revolution with the aim of training workers for industry. Today and tomorrow, pedagogy should move to the next stage in the development of industry, culture and humanity, and so should contemporary robotic teachers.
Nowadays, proponents of the idea of artificial intelligence are trying to make a program “more than a program.” Some of them hope that consciousness will emerge in a system that has an information model of itself (Ramey 2006). But this alone is not enough: a computer with a webcam aimed at a mirror already has an information model of itself, yet it makes little difference.
Others, following Norbert Wiener, the founder of cybernetics, believe that intelligence can be obtained by a program capable of self-learning and self-programming (Sabanovic 2007). However, for this, it would have to develop for a long time, living in the real world. In addition, such a program would need emotions to understand people and the meaning of their actions, to grasp values, to strive for success and avoid failure. In short, a self-learning machine would need to become human to learn to think and communicate in the human world.
Self-learning in this sense, i.e. the ability to consciously change habitual ways of behaving and thinking, is based on consciousness, the capacity for comprehension and understanding (Ramey 2006). At the same time, in trying to build a model of artificial intelligence, we do not always know how a normal brain works. So, until experts can describe the neural circuitry by which our brains calculate the results of our actions and form responsibility, there is no reason to speak of a technological breakthrough (Saerbeck 2010).
The social robot can certainly be considered the highest achievement in the field of robot interaction with humans, but its fame is steadily tied to a feeling of fear, and perhaps herein lies the greatest danger posed by social robots.
Let us recall the famous example of the human brain’s reaction to observing robots, called the “uncanny valley” and first described by Masahiro Mori. In his 1970 article, describing a graph of a human’s affinity for a machine, he drew the general conclusion that people usually find it more pleasant to deal with a mechanism that has anthropomorphic features. However, this law holds only up to a certain threshold: when the machine becomes too much like a human, the observer’s psyche launches a mechanism of anxiety similar to that triggered by observing a dead or diseased human being (Duffy 2003). At this point there is a dip in the graph (the “uncanny valley”), after which the curve climbs upward again as the simulation becomes more and more like a real person, that is, a perfect android.
In the “uncanny valley” we face not pure fear, but a mixture of recognition and fear, with intertwined sympathy and disgust. This is an example of cognitive dissonance our minds simply cannot resolve. Developers of social robots have long struggled to leap over the “uncanny valley,” but so far laboratories have produced only naturalistically modeled dolls (Duffy 2003).
On the other hand, a person, especially a child, is simply defenseless against emotional attachment to certain objects, so the question is whether it is decent to exploit this. Sherry Turkle studied the strong attachment that humans form to robots such as Paro, and experimentally confirmed that children playing with robotic dolls seriously regard them as rational beings endowed with emotions (Tanaka 2007). Researchers (Fong 2003; Ramey 2006) are well aware that widespread social robots are dangerous for a healthy perception of reality, although their spread is inevitable.
Patrick Lin, a member of the ethics board at the U.S. Naval Academy, believes that an ethical framework is necessary not only for robots on the battlefield. He states that social robots quite possibly pose a greater danger to the average person than combat robots; though such robots carry no weapons, very soon we will regularly encounter them face to face (Saerbeck 2010). Virtually all researchers agree that communicating with robots, particularly in areas involving children and their future, requires at least some ethical guidelines (Ramey 2006). It is not that we should build a set of moral principles into robots, but their creators are now truly forced to work in an ethical and legal vacuum. When the debate is bogged down in abstract reasoning and is not supported by sufficient empirical data, a set of clear moral guidelines could serve as a sort of insurance policy. At the same time, modern legislation has lagged behind the situation. We are approaching a time when we will not be able to find applications for new robots simply because of an atmosphere of total legislative confusion.
Conclusion

Some analysts predict that by 2015 the social robot market will reach $15 billion, but experts still urge caution, because the decision to entrust the shrine of human communication to impersonal machines is a frightening prospect. Great concern should be reserved for the fate of those for whom this new industry is being developed: children in schools that cannot afford to hire a full staff of teachers. Surely, an army of robots looking after students will be much cheaper than a thousand teachers. But it is difficult to predict how a new generation growing up among robot friends and teachers will navigate the world of human relations.
One possible solution is to forget for a while about the autonomy robots have achieved and to use social robots simply as toys (Huggable, etc.). In a recent article about dangerous trends in the modern manufacturing of robotic toys, Noel Sharkey, professor of Artificial Intelligence and Robotics at the University of Sheffield, noted that, in contrast to the situation where a fully or nearly autonomous robot takes over the functions of a tutor or nurse, such remote-controlled machines do not cause ethical concerns (Ramey 2006). However, once upgraded, social robotic teachers may well serve as a first cautionary experience, a vaccination against the fears and distorted representations of previous generations.

Bibliography

Becker, B 2006, ‘Social robots – emotional agents: Some remarks on naturalizing man-machine interaction’, International Review of Information Ethics, vol. 6, no. 12, pp. 38-44.
Duffy, BR 2003, ‘Anthropomorphism and the social robot’, Robotics and Autonomous Systems, vol. 42, pp. 177–190.
Fong, T, Nourbakhsh, I & Dautenhahn, K 2003, ‘A survey of socially interactive robots’, Robotics and Autonomous Systems, vol. 42, pp. 143–166.
Ramey, CH 2006, ‘Conscience as a design benchmark for social robots’, Proceedings of ROMAN06: The 15th IEEE International Symposium on Robot and Human Interactive Communication: Getting to Know Socially Intelligent Robots, Toward Psychological Benchmarks in Human-Robot Interaction, Hatfield, UK, pp. 486-491.
Sabanovic, S, Michalowski, MP & Caporael, LR 2007, ‘Making Friends: Building Social Robots Through Interdisciplinary Collaboration’, Multidisciplinary Collaboration for Socially Assistive Robotics: Papers from the 2007 AAAI Spring Symposium, Technical Report SS-07-07, pp. 71-77.
Saerbeck, M, Schut, T, Bartneck, C & Janse, M 2010, ‘Expressive robots in education – Varying the degree of social supportive behavior of a robotic tutor’, Proceedings of the 28th ACM Conference on Human Factors in Computing Systems (CHI2010), Atlanta, pp. 1613-1622.
Tanaka, F, Cicourel, A & Movellan, JR 2007, ‘Socialization between toddlers and robots at an early childhood education center’, Proceedings of the National Academy of Sciences of the United States of America, vol. 104, no. 46, pp. 17899–17900.


