by Sophia Contreras
Central Connecticut held its tenth annual lecture series in honor of the late Brian O’Connell, a professor who passed away in 2008.
According to family member Sarah Cox, O’Connell’s biggest regret before his passing was not reaching out to professionals in the computer science industry to share their experiences with his students. As a result, his family and friends took it upon themselves to carry out his wish, establishing the O’Connell Scholarship and lecture series.
Philosophy students Kelly Higgins and Jeton Zhuta were awarded the O’Connell Scholarship, which recognizes students majoring in computer science, engineering and technology, philosophy, law and music.
University of Hartford computer science professor Dr. Michael Anderson, along with University of Connecticut philosophy professor Dr. Susan Anderson, presented a lecture on machine ethics in honor of O’Connell. Addressing the difference between building ethics into a machine itself and having a human decide how to use a machine ethically, Dr. Susan Anderson contrasted a Roomba, which operates with no human controller, with a manual vacuum to demonstrate how machines and robots can behave ethically on their own.
She also emphasized the importance of developing non-threatening, ethical machines that help humans. Dr. Michael Anderson explained the specifics of programming a machine with ethics. Both Andersons’ biggest concern was ensuring that working machine and robot ethics become part of future elder-care assistance robots.
Dr. Wendell Wallach, senior advisor to the Hastings Center, presented the second lecture, which concerned artificial intelligence and the responsibility humans and scientists have to keep AI from “slipping beyond our control.” Dr. Wallach explored various disquieting issues surrounding scientific advances such as cloning, GMO foods and biosecurity.
Dr. Wallach cited Sophia, the AI robot created by David Hanson that recently gained citizenship in Saudi Arabia, as one of the recent advancements in AI technology. He also raised the fear of creating an AI robot smarter than humans and questioned whether humans would still be able to control such machines once they surpass us.
Dr. Wallach also referenced the potential development of weaponized AI that would allow machines to pick their own targets without meaningful human control, potentially harming innocent people. He shared his experience opposing weaponized AI at United Nations conventions in Geneva.
Students and faculty were able to discuss the ethics of machines and AI further at the end of the event through a Q&A and open discussion with the presenters and their colleagues.