twitterFetcher has been updated with support for fetching the tweets of the users somebody follows.
Usage: java twitterFetcher <feed url> <file> <username> <password>
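For reference, here is a minimal sketch of what such a fetcher could look like; it assumes the feed endpoint accepts HTTP Basic authentication, and the class and argument names simply mirror the usage line above rather than the actual source.

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.Base64;

// Hypothetical sketch: fetch an authenticated feed and save it to a file.
public class twitterFetcher {
    public static void main(String[] args) throws Exception {
        if (args.length != 4) {
            System.err.println("Usage: java twitterFetcher <feed url> <file> <username> <password>");
            System.exit(1);
        }
        String feedUrl = args[0], file = args[1], user = args[2], pass = args[3];

        // Assumes the feed is protected by HTTP Basic authentication.
        HttpURLConnection conn = (HttpURLConnection) new URL(feedUrl).openConnection();
        String token = Base64.getEncoder()
                .encodeToString((user + ":" + pass).getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + token);

        // Stream the feed body straight into the output file.
        try (InputStream in = conn.getInputStream()) {
            Files.copy(in, Paths.get(file), StandardCopyOption.REPLACE_EXISTING);
        }
    }
}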
After almost a year, my Honours Project has been delivered and presented!
More information can be found in the poster summary published at LISA ’09, “Integrated Configuration of Virtual Infrastructures”, as well as in this presentation from the recent LCFG Users Day ’09, “LCFG meets Virtualisation”.
Thousands of years have passed from the mechanical servants of Hephaestus, the Greek god of fire, to the coining of the term “robot” by the Czech novelist Karel Capek, and from the novels of Isaac Asimov, who placed robots among humans, to HONDA's 21st-century humanoid robot, ASIMO. Although people's temperament changes through the ages, and despite society's technological and, debatably, spiritual progress, there are questions that seem to have been answered and yet, as practice shows, remain unanswered.
Nowadays, robot-like behaviour exists within products we use every day. Computers, mobile phones, cars and airplanes make our lives easier and more enjoyable, but at the same time, the same underlying technology is used by governments to control their people and to produce political or ideological propaganda, and, if deemed necessary, it is turned against their own people, or others.
Intelligent, and non-intelligent, machines are developed and programmed by humans, and human nature is both good and evil. In the next few decades, perhaps the novels of Isaac Asimov will become reality: humanoid robots living among us, with us dealing with them as we deal with other people. How will those humanoid machines act and “think”? Will they be able to develop their own “personality”, or will they be given a fixed “personality” that depends on the beliefs of their creator, developer or engineer? In the book of Genesis it is said that God created man in his own image and likeness. Keeping a clear distance from religion, will the years to follow be the ones in which this idea becomes reality, with machines developed and acting in the image and likeness of their creators?
Isaac Asimov defined the Three Laws of Robotics. However, these laws have been criticised as laws for slaves, whereas we would like the robots among us to behave in a more human-like way, and therefore to be ethical in the same way people are (James Gips, 1991). Building on the same underlying idea, that we should interact with robots in a human-like way, Richard Epstein wrote a book in 1997 named “The Case of the Killer Robot”. In it, Epstein develops a fictional scenario and tells the story of how a programming error during the development of a robot led to the death of the robot's operator (Richard Epstein, 1997). A number of other scenarios by Epstein followed this book, presenting the “ethical and social implications of artificial intelligence and virtual reality within domains such as Business and Commerce, Human Relationships, Privacy and Personal Security, Philosophy and Thought, Medicine, Government and Law, Education, Culture and Arts, Psychology, Spirituality, and Ethics and Values” (Richard Epstein, 1997).

The importance of these later writings lies in the fact that they do not investigate the “killer” behaviour of a robot, a very common subject in novels and movies, but rather the way robots could affect society and their relation to humans across the sectors that exist and evolve within it. Today, we use intelligent agents every day, from our mobile phones to our washing machines, and from airplanes to ordinary cars. Artificial intelligence systems are used to look into the past and re-create it in order to understand it. This is an ideal use of artificial intelligence for exploring our past; the problem arises when artificial intelligence is used to replace human creativity (Richard Epstein, 2000) and human relationships (T. Bickmore & R. Picard, 2005).
Despite the negative impact robots may have, well-programmed intelligent agents can play a positive role in society. Researchers give as an example the use of intelligent agents in the training and therapy of autistic persons, who need to follow an intensively repetitive process where “agents could be used to potentially augment such interactions”. Moreover, intelligent agents could help with personal networking, contacting people on behalf of their owner much like a secretary (T. Bickmore & R. Picard, 2005).
To approach the question of whether robots need to know about ethics, we have to consider a very similar question: do humans need to know about ethics? Apparently, yes. The issue of ethics has been discussed since ancient times by the Greek philosophers, and especially by Aristotle in his “Nicomachean Ethics”.
Aristotle distinguishes between intellectual and moral virtues. He suggests that intellectual virtues can be taught, while moral virtues are acquired by living a correct life, something that comes with experience. The question is how a robot can come to understand that. It all depends on its implementation, on the algorithms the robot will use to determine whether what it does is correct and ethical or not (James Gips, 1991).
This consideration holds for machines as well. Every person develops his personality and his ethical and moral values according to his surrounding environment, be it the close one of the family or the extended one of society. A robot could, in the same way, decide “on its own” and learn from its surroundings what is ethical and what is not. But for it to do so, the engineers and developers who programmed the robot, or the intelligent agent, must have implemented an algorithm that lets the system determine what is ethical and what is not. The problem that arises is that the engineer may develop an algorithm for his own purposes, according to his personal ethics and values, or of course according to his employer's.

Robots, however, should act for the common good of society. Before performing an action, the system should evaluate the current situation, the consequences of the action it is about to perform, how much good that action will deliver, and which action within a range of options would deliver the most good. These decisions need to be based on the implemented algorithms and on the patterns provided to the system for evaluating situations and decisions.

Intelligent decision-making systems built for specific purposes do not need to consider ethics and moral values to the same extent. Nevertheless, they need to evaluate properly the specific situation in which they operate, analyse the data, simulate the operation and decide upon the probable results; think of medical systems and weapons-management systems, for instance. The applications of these systems and those of ethical robots may differ considerably in some respects, yet both are intelligent systems developed to evaluate conditions based on circumstances, on data and on the outcome of a specific action, and in both cases the development is driven entirely by the intended application. One issue researchers have raised in this context is the use of “rational agents to provide care, understand and empathy while machines can not have emotions on their own” (Picard & Klein, 2002). While this is not directly connected to the ethics of intelligent agents, it is directly connected with their development and implementation: with what the system considers right and wrong.
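As a minimal sketch of this consequence-based selection, and nothing more, the loop below scores a set of candidate actions by the net good their predicted outcomes would deliver and picks the best one; the Action type, the additive utility model and the example numbers are all assumptions made purely for illustration.

import java.util.Comparator;
import java.util.List;

// Hypothetical sketch of the selection loop described above: score each
// candidate action by the good its predicted consequences deliver and
// pick the highest-scoring one.
public class EthicalActionSelector {

    // A candidate action with engineer-supplied consequence estimates.
    public record Action(String name, double expectedBenefit, double expectedHarm) {}

    // Net good of an action under a simple additive utility model.
    static double utility(Action a) {
        return a.expectedBenefit() - a.expectedHarm();
    }

    // Choose the action whose predicted consequences deliver the most good.
    static Action choose(List<Action> options) {
        return options.stream()
                .max(Comparator.comparingDouble(EthicalActionSelector::utility))
                .orElseThrow(() -> new IllegalArgumentException("no options to evaluate"));
    }

    public static void main(String[] args) {
        List<Action> options = List.of(
                new Action("warn the operator", 0.8, 0.1),
                new Action("shut down the machine", 0.6, 0.3),
                new Action("do nothing", 0.0, 0.5));
        System.out.println("Chosen action: " + choose(options).name());
    }
}

The point of the sketch is exactly the concern raised above: the weights and the very notion of “good” are chosen by whoever writes the utility function, which is where the engineer's own ethics enter the system.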
Without a doubt, robots and intelligent agents must be aware of ethics and moral values. In order to get ethical robots, we need ethical engineers and ethical scientists who will design, develop and implement ethical algorithms. Education and common understanding are the keys to such decisions and development, and unfortunately both are difficult to achieve at a high level in an age of intolerance and lobbies. Perhaps the question we should ask is not whether robots and intelligent agents need to know about ethics, but how scientists and engineers will actually be able to design and implement ethical intelligent machines that have some kind of understanding of their actions, actions that should reside within globally accepted standards. Even then, would such systems be acceptable to those who make decisions on our behalf, and would they find a place within society? It will certainly take time to find out, and judging from the current situation, a pure idea can easily be converted and used for money, personal advantage and control over people.
Technological progress is integrally connected with scientific progress. Science, and therefore technology, primarily seeks to model the world, to understand it, to help mankind and to make our lives easier and more enjoyable. We should not allow the fear of misuse to prevent technological progress; instead, we should target the educational shortcomings and the motives that lead to misuse, and ask how these can be reduced. The use of technology is subject to human behaviour and human ethics, which derive from within society, from education and from the environment of every single individual. As long as people and society are ethical, their creations will be too.
Nuclear weapons are pure physics, mathematics and chemistry, yet nobody turns against the sciences themselves, fanatical religious blindness aside. People forget that technology in fact combines these sciences, and when speaking of artificial intelligence, cognitive science, sociology and psychology are involved as well. Technology must be seen as the means to move forward, the “tool” able to combine everything in order to understand everything, following the idea of the Greek philosopher Heraclitus, who once said that “Wisdom is to understand everything with the help of everything”.
James Gips, “Towards the Ethical Robot”, in Android Epistemology, MIT Press, 1991.
Luciano Floridi, “Information Ethics, its Nature and Scope”, SIGCAS Computers and Society, Vol. 36, No. 3, September 2006.
R. W. Picard and J. Klein, “Computers that Recognize and Respond to User Emotion: Theoretical and Practical Implications”, Interacting with Computers, Vol. 14, 2002, pp. 141–169.
Richard G. Epstein, The Case of the Killer Robot, New York: John Wiley and Sons, 1997.
Richard G. Epstein, “Stories and Plays About the Ethical and Social Implications of Artificial Intelligence”, Intelligence, Fall 2000, MIT Press.
Ronald C. Arkin, “Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture, Part I: Motivation and Philosophy”, HRI ’08, Amsterdam, The Netherlands, 2008.
Timothy W. Bickmore and Rosalind W. Picard, “Establishing and Maintaining Long-Term Human-Computer Relationships”, ACM Transactions on Computer-Human Interaction, Vol. 12, No. 2, June 2005, pp. 293–327.
Yueh-Hsuan Weng, Chien-Hsun Chen and Chuen-Tsai Sun, “The Legal Crisis of Next Generation Robots: On Safety Intelligence”, ICAIL ’07, June 4–8, 2007, Palo Alto, CA, USA.