Sara M. Watson: Artificial Intelligence & Smart Journalism

"The biggest problem AI has is that even the engineers can't really explain certain outcomes or certain decisions that go through an artificially intelligent system."

Sara M. Watson: Artificial Intelligence & Smart Journalism (Video)
YouTube

How does Artificial Intelligence actually think? Even its creators don't fully understand. But if AI is going to become ever more important, we should at least have an idea of how algorithms arrive at their results. And that can be quite different from the way human beings think, says Sara M. Watson, tech critic and writer at the Digital Asia Hub in Hong Kong. How can literature and journalism help open up a new perspective on AI?
 

The conversation was recorded on the sidelines of re:publica 2017 in Berlin.
 

New every week from the Stifterverband:
The future makers and their visions for education and training, research and technology

Author: Corina Niebuhr
Production: Webclip Medien Berlin
for the Stifterverband's YouTube channel

 

Transcript of the video

The biggest problem AI has is that even the engineers can't really explain certain outcomes or certain decisions that go through an artificially intelligent system.

In many cases, I think, engineers have been saying that they just can't articulate how the output came about. They can try to reverse engineer it or model a kind of logic tree, but in many cases you're talking about layers and layers and layers of processing that happen in order to get to something like image recognition or tagging or classification by these complex machines. So I still think this crux, or societal question, of what AI is and what it could be really surfaces around trying to understand it. If we have talked about AI mostly as a way of modelling human thought, we've almost limited ourselves to our imagination of what a machine is capable of. And so if we try to understand the way a machine is playing chess or Go or winning Jeopardy, we're not really fully understanding the way it's processing, because we're trying to understand it the way we think.

People who are writing about this don't know how else to write about it. But also the stories that either the scientists are telling, or the PR firms around the science, or the technology companies that are pushing these stories, are kind of limited in the imaginative framing that they use. So I'm actually really interested in thinking of other ways, other means of storytelling. Not just focussing on the grand narrative of where AI is going, and this man-versus-machine scenario in which the machines take over and man loses, but rather focussing on other ways of storytelling. So I'm really interested in talking about character development or first-person narratives to try to understand how an AI thinks. What would it look like to write a story that tries to explain, in the voice of an AI, what it is like to be an AI? I think that's something that even engineers struggle with in trying to understand and explain how their own technology works, because we're talking about systems that are basically very complex, multi-level decision-making trees that are in many cases beyond explanation, even to the engineers themselves. So I think there are a lot of different ways we can use storytelling to try either to get at something that's really hard to explain, or to imagine another intelligence: not just an intelligence that's just like man, but rather an intelligence that is different from man.

I always go back to the question of what these technology firms are optimizing for. In most cases, for a platform like Facebook, the main question or the main driver they have is to get you to spend more time on Facebook. And if that's the metric that's leading towards certain behaviours, it's not about best serving the user; it's about giving you more content that will keep you on the site longer. So I start to ask this question about any technology, and I think it's the ultimate question about technology: What are we optimizing for? If we can answer that question, then we can get to a very substantial conversation with the technology firms, the policy makers, and the users about what their interests are. So I've actually been pushing that as my underlying, big-picture question for basically any technology that we're talking about. In the case of the newsfeed, whether or not it's skewed in one direction or another, I think users have to have the opportunity to express their intentions or desires about how the feed is optimized. As a user I don't get to say: well, I keep seeing all these videos of cats, but I would actually like my feed to be slightly more politicized because an election is happening, and I want to see a mix of this and that. A really concrete solution to that is to give users control over what the feed filters towards, and the ability to change that any given week or any given day, by having third parties as proxies that can determine, say, that I just want AP, the Associated Press, to be responsible for whatever news shows up in my newsfeed. But Facebook just hasn't opened that as a possibility, and I can foresee a version of Facebook that allows multiple different ways of curating the feed.
But for now nobody's asking for that, and Facebook really isn't interested in exposing that or opening up that Pandora's box of possibilities, right? They're just interested in optimizing towards one thing, and that's what Facebook has determined as its ultimate reason for being.

I look to places like the Berkman Klein Center for Internet & Society for that, because they are really interested in putting policy makers and technologists and users and activists and journalists all together in the same room. And what I've learned from hanging out there for the last couple of years is that those conversations can be uncomfortable, and they can certainly get heated at times. But getting together round the same table, being in the same space, and talking to each other face to face actually changes the temper of the conversation a lot: there's at least an interest in having a shared conversation instead of a kind of competitive interface. And I think conferences like this one are often where that kind of conversation can happen, when you can put a tech CEO on the stage to be interviewed by somebody who is coming from the policy side of things.

There's a certain type of technology critic who tends towards just deconstructing what's going on, or pointing at something and saying: this is a neoliberal problem where Silicon Valley is just deciding what the standard is, or calling something capitalistic. Those critiques are important and interesting, but they often shut down conversation. I'm really interested in having a conversation that brings more people to the table to share vocabulary and share the framing of the problem. A lot of that means you have to be careful about the language you're using, careful about the framing, and also about the questions that you're asking and proposing. But really the goal is to get people to work towards something productive and constructive. So I'm also really interested in putting forth not only the deconstruction or the description of the problem, but really spending time on the potential solution: a policy intervention, a design change, or anything along those lines that actually starts to explore what's possible, what could be, or maybe what should be, rather than just describing the problem as a problem.