Danny Weitzner Discusses Cybersecurity and Privacy on C-SPAN
2019-05-01 · 5 minute read

What is happening in the world of cybersecurity and privacy? Daniel Weitzner discussed these topics during a recent interview on C-SPAN's The Communicators. You can watch an excerpt of Weitzner's interview in the C-SPAN video below.
During his interview, Weitzner discussed a variety of topics, including the role of the liberal arts. On this subject, Weitzner noted that “liberal arts is always at the center of the way we think about the world because it’s about how people function. Computer science is about how to build great systems that do great things for people. But to understand what those systems should do, how they should interact with people, you need a combination.”
Weitzner also talked about his work teaching students about Internet public policy. Here, he noted that he wants his students to “leave with the understanding that the Internet and all this technology that we use is a work in progress and that we can shape it to meet human needs…I want them to know that they can build things that meet human needs. And number two, I want them to have enough understanding of the law and public policy environment and the larger sociological and ethical context to be a real part of the discussion about what these systems should do — what does it mean to have a system that’s accountable, responsive, explainable…”
As Weitzner points out, we already rely on algorithms in our day-to-day lives. However, the stakes are rising as we consider deploying tools with automated decision-making capabilities. As such, we need to ensure that AI algorithms make decisions that are reliable, fair, and treat people with dignity. AI can also be used to surface existing biases, for instance racial biases in the legal system, which can carry over into AI systems.

“What AI is doing is actually getting us to look at those systems afresh and say: are people being treated fairly? And we now actually have more data to be able to evaluate those questions of fairness. And, ultimately, I think if we do the right thing, if we make the right kind of moves, both in the way that we write our laws and the way that we design this technology, we can actually get out ahead of the game on this, we can give people more confidence that they are treated fairly, we can give people effective avenues for challenging decisions when they ought to be challenged and confidence when they don’t need to be,” says Weitzner.

AI systems are designed by people and used by people, corporations, and organizations. As such, Weitzner notes that “no one should have any doubt that at the end of the day it’s the people or the institutions who have to be responsible for the decisions. Our challenge in the way we deploy AI is to make sure that responsibility means something, that the decisions that are recommended by these really cool new systems have enough information around them that we can feel confident about what they are telling us to do.”
A final topic of conversation was the prospect of passing privacy legislation in the U.S. Here, Weitzner noted that “we should feel a sense of urgency about it, number one because we need to protect our citizens and number two because right now privacy is being defined for the next decade in Brussels, not in Washington D.C. And I’ve always thought the Europeans have made an enormous contribution to global privacy thinking and to taking privacy seriously.” However, “the decisions matter a lot and they ought to be made in our democratic process. It’s not to say that we’re going to come up with radically different answers than Europe comes up with, because I don’t think we will, but I think we’ll come up with answers that are appropriate to the U.S. legal system, that fit our economic requirements, that fit our personality as a country, and we need Congress to do that.”