To discuss roboethics or robot ethics, it is important first to identify the ethical challenges that robots raise. First, the term roboethics is ambiguous: it can refer to the professional ethics of roboticists, or to the moral code programmed into robots themselves (Lin, Abney, & Bekey, 2011). Second, robot ethics concerns whether robots could possess a self-conscious capacity for ethical reasoning (Wiener, 1988). In the former sense, the main subject is human practices in utilizing robots; in the latter, robots are taken to be capable of reflecting on their own actions, which is highly contestable. In either case, it is impossible to study robots without any consideration of human beings. On this point, this work will argue that the proper subject of roboethics or robot ethics is human beings, even though robotics engineers have developed the technology itself. At the same time, it seems inevitable that a gap remains between ethics and robotics, and it is difficult to find common ground accepted by all researchers. Therefore, to narrow that gap from the side of ethics, this work will explain and criticize the general views of robots, explore ethical approaches and their main issues, and identify significant challenges for understanding human lives with robots in the future.
As a starting point for the discussion of roboethics, the most fundamental task is to ask what robots are and what socio-economic and legal issues concern them. These questions should be explored in ontological terms. The first question, then, is: what are robots? It deserves to be considered apart from robotics as an academic field, so that we can examine the specific features of living with robots in practice. For example, Wallach and Allen (2008) identify four general views of robots: first, that robots are nothing but machines; second, that they have an ethical dimension; third, that they can be seen as moral agents; and finally, that they can evolve into a new species. Cutting across these views, Daniela Cerqui suggests three ethical positions that individuals in robotics might take: not interested in ethics, interested in short-term ethical questions, and interested in long-term ethical concerns (Robot Ethics | ARTECA, n.d.). Seen this way, any ontological position on robots is closely tied to the problem of how to utilize them, which raises ethical issues. Robots could function at a level similar to human beings, at least in care services, if they are used as moral agents that can communicate with the human patients they care for, even when following programs ordered by human doctors. General attention, therefore, will be paid to three views of robots in turn.
The first view is that robots are only machines. On this view there is no room for ethical issues about robots themselves, for they cannot be granted consciousness, free will, or any meaningful level of autonomy (Wallach & Allen, 2010). This is also how most ethicists see them. As is well known, however, the fact that technology and industry have become, and will continue to become, more intelligent and automated makes it difficult to sustain the view that robots are only machines. This raises worries about losing sight of robots as machines. Roboticists, by contrast, tend to be neutral, because for them robots simply are machines. According to Lin, Abney, and Bekey (2011), robots in society will become as ubiquitous as computers are today. The nature of robots as machines, that is, as replacements for human labor, stands in continuity with the history of robotics, from the golden servants in Homer's Iliad (around 1190 BCE) through Leonardo da Vinci's mechanical knight to the robotics industry today (Robot ethics book released by MIT Press | Center for Internet and Society, n.d.). In name alone, therefore, it can be agreed that a robot is a kind of machine created by human beings. As seen above, however, for ethicists robots today may already exceed the ontological position of mere machines with respect to their social applications, even though roboticists would argue that they are still like computers. This is connected with our next concern: the positions robots occupy in society. The second and third views we will deal with are contested precisely because robots have already been created and utilized in human society, and ethicists and roboticists may conflict over them. For example, is it right or desirable for robots to be designed to make semi-autonomous or autonomous decisions on the basis of their accumulated and collective data, or without any form of human control? Ethicists are generally opposed to such robots, while for roboticists, robots within certain bounds of autonomy can be seen as effective or good for humans.
In fact, as Lin shows, robots in society already perform a wide range of tasks, from the three Ds (dull, dirty, and dangerous jobs) through entertainment, personal, and companion roles, to military weapons such as UAVs. It is therefore natural that the first ontological view of robots as mere machines attracts little ethical interest, whereas the short-term and long-term ethical questions about utilizing robots in society do. In Lin's words, producing and applying robots in society leads to a wide range of issues (Lin, Abney, & Bekey, 2011).
The second affirmation, and the most contested between robotics and ethics, is that robots will be able to be moral agents. In ethics, the notion that any species besides human beings might be a moral agent has only recently become acceptable, particularly through interdisciplinary approaches with sociobiology. Robots, however, are not social animals like bees, ants, or dogs, but machines that have specific physical features and can interact with humans, for example in health care. Only in cases where their actions have a particular influence on others is it plausible that robots could be seen as genuine agents, acting through autonomous or automatic programs and systems for human beings. On a narrow construal, the first feature of robots as moral agents is that they are accepted as entities that can perform actions, again for good or evil (Crnkovic & Curuklu, 2012). This need not be taken as evidence for moral capacities such as conscience, free will, or individual responsibility. In other words, robots only perform actions rooted in the programs that humans give them. Concerning the near future, to use Bekey's terms, there are five areas of great innovation, and in particular the social expansion of robotics will confront us with a new morality in robotics. As Abney points out, our ethical concern is with creating robots that follow the rules or evince a good character (Abney, Lin, & Bekey, 2008). At that point, robots may soon become moral agents with immeasurable and unmanageable consequences, by virtue of their ability to learn moral rules through their programs.
We might therefore have to decide whether robots are moral agents or instead a new species equal to humans. The latter, rather than the former, has been a staple of science fiction, where the strictness of our reasoning is usually neglected. Let us return to the critical version of the first view, that robots are just machines created by human beings. If that is accepted, then humans have no power to create new beings, and robots should never be seen as moral agents; conversely, to call robots moral agents is to treat them not as machines but as a new species or being (Crnkovic & Curuklu, 2012). At that point, robots would have to create their own things and bear their own names by their own wills, regardless of human affairs. As mentioned above, the main goal of robot ethics would then be justified not by the intervention of human beings but by robots themselves. We must therefore distinguish the ontological views of robots held in ethics from those held in robotics, since the two fields take different interests in robots. What ethicists worry about is usually the ontological feature connected with independent decisions and actions, which roboticists believe can be controlled by technology (Wallach & Allen, 2010). At first glance this looks like a serious conflict, but this work suggests there is in fact an overlapping point to which both sides can consent.
In a situation where robots' abilities could become more human-like, we need to ask a more serious question, spanning from creating robots to using them: what kind of effect can they have? At first glance the question seems non-moral, but it should be regarded as essentially moral, in that any consideration of robots' effects reflects back on human morality. Here Letwin's notion of morality is notable: we are capable of intelligent, purposive performances accountable to other-regarding virtues, regardless of the self-regarding interests from which our acts begin (Ten, 2013). To be utilized at all, robots must be the reflective outcome of the human morality underlying their ontological views and practical applications. Whether autonomously or automatically, robots carrying out their performances are connected with human lives, so they are evaluated only with regard to their effects on humans, even if in robotics they are made for self-regarding interests, for example to earn money for those who manage them.
In that sense, much like human agents, robots can possess specific dimensions of other-regarding virtue from the first design of their applications to human lives, in that they are created to promote human virtues that reflect human morality. As for the central issues of roboethics, it is at least difficult to deny that robots can be seen as quasi-agents and that, in near-future robotics, they can carry out certain levels of intelligent, purposive performance instilled by human morality, just as human moral agents do. This feature will in turn help with further tasks, for example devising a desirable ethical system for building robots responsibly, and making experts in robotics and ethics into responsible, ethical roboticists within one of the fields of applied ethics.
References
Abney, K., Lin, P., & Bekey, G. (2008). Autonomous military robotics: Risk, ethics, and design.
California Polytechnic State University, San Luis Obispo.
Arkin, R. (2009). Governing Lethal Behavior in Autonomous Robots. CRC Press.
Crnkovic, G. D., & Curuklu, B. (2012). Robots: ethical by design. Ethics and Information
Technology, 14(1), 61-71.
Lin, P., Abney, K., & Bekey, G. A. (2011). Robot Ethics: The Ethical and Social Implications of
Robotics. MIT Press.
Robot ethics book released by MIT Press | Center for Internet and Society. (n.d.). Retrieved
October 26, 2017, from http://cyberlaw.stanford.edu/blog/2011/12/robot-ethics-book-released-mit-press
Ten, C. L. (2013). Routledge History of Philosophy Volume VII: The Nineteenth Century.
Routledge.
Wallach, W., & Allen, C. (2008). Moral machines: Teaching robots right from wrong. Oxford
University Press.
Wallach, W., & Allen, C. (2010). Moral Machines: Teaching Robots Right from Wrong. Oxford
University Press.
Wiener, N. (1988). The Human Use of Human Beings: Cybernetics and Society. Perseus Books
Group.