SF State launches new certificate in ethical artificial intelligence
Cutting-edge program combines computer science, philosophy and business
If a self-driving car gets into an accident, who’s to blame? The person behind the wheel? The manufacturer? Or the programmer who gave the car the ability to drive itself in the first place?
Artificial intelligence (AI) has the potential to transform our lives and work, but it also raises some thorny ethical questions. That’s why a team of professors from three different colleges at San Francisco State University has created a new graduate certificate program in ethical AI for students who want to gain a broader perspective on autonomous decision-making.
The program is one of just a handful nationally to focus on AI ethics, and it is unique in its collaborative approach involving the College of Business, the Department of Philosophy and the Department of Computer Science. “The idea is to balance business ethics, philosophy, and AI algorithms and software systems,” explained Professor of Computer Science Dragutin Petkovic, who led the creation of the program.
To complete the certificate, students will take one class from each of those three areas and write a research paper on an issue in AI ethics. Open both to San Francisco State master’s students and to those who want to pursue a stand-alone graduate certificate, the program allows students to tailor the course load to their area of expertise. Whether they’re tech novices or industry professionals, students will walk away from the program with a broad, nuanced view of how AI can be used responsibly and ethically.
As the use and misuse of AI receive more scrutiny, Petkovic says, “We believe that a lot of companies will have to start training people on ethics.”
Students pursuing the certificate will learn what is and isn’t considered artificial intelligence and will study Petkovic’s specialty: explaining how algorithms make decisions and how to make those decisions better. That’s an increasingly difficult task, as widely used techniques like neural networks are both complicated and opaque, and the stakes are high. When AI is used to pick who receives a loan or who gets hired for a job, a single decision can severely impact people’s lives.
Courses for the certificate will begin this fall with a philosophy class focusing on the idea of responsibility, which will also give some historical context for modern AI and discuss its impacts on labor. Taught by Associate Professor of Philosophy Carlos Montemayor, who researches consciousness and attention, the class will grapple with a variety of questions. “Where is responsibility assigned? If the algorithm is biased towards certain genders or races, who’s responsible for that? If the algorithm is ‘self-taught,’ a machine-learning algorithm, how are we going to deal with that?” said Montemayor, who helped create the program.
In another course, students will learn how businesses can act ethically and will consider their responsibility to ensure that technologies such as facial recognition don’t interfere with the rights of others.
Associate Dean of the College of Business Denise Kleinrichert, a business ethics expert and the third architect of the program, says questions of rights and consent will be crucial considerations for students learning about AI. “There are wonderful things about technology,” she said. “But we also have to be watchful, just like with anything, that we’re not causing harm in some way.”