Ansgar Koene - Global AI Ethics & Regulatory Leader EY London

Ep 65: How AI is “manipulating” our behaviour & how we should regulate it


Welcome to Episode 65!

Our guest for STIMY Episode 65 is Ansgar Koene.

Ansgar Koene is the Global AI Ethics & Regulatory Leader at EY, Senior Research Fellow at Horizon Digital Economy Research Institute at the University of Nottingham, a Trustee at 5Rights and chair of the IEEE Working Group P7003 Standard for Algorithm Bias Considerations.

Today with Ansgar, we’ll be talking about all things artificial intelligence.

PS:

Want to learn about new guests & more fun and inspirational figures/initiatives happening around the world? 

Then use the form below to sign up for STIMY’s weekly newsletter!

You don’t want to miss out!!


    AI is an inescapable part of life. Whether it's the songs that Spotify recommends to us or the similar videos that pop up on our YouTube feed after watching one cat video, AI is monitoring & collecting data about us, which is used to enhance our experiences on these social platforms. But there are darker elements to this, where it ends up manipulating our behaviour without us even realising it.

    So how can and should AI be regulated? What are the issues surrounding algorithm bias? Has recent legislation like the GDPR helped to better define the boundaries surrounding the use of AI? And what does Ansgar think about some of the current developments, e.g. Facebook choosing to stop the use of its facial recognition software?

    If you want to learn more about the current state of AI, and the ethical and regulatory concerns surrounding its use, then this is the episode for you.

    Every model is wrong, but some models are useful.
    Ansgar Koene
    Global AI Ethics & Regulatory Leader, EY

    Highlights

    • 8:50 Every model is wrong, but some models are useful
    • 16:49 Ethical concerns around use of Twitter
    • 18:12 Issue of consent & privacy
    • 20:56 Types of recommender systems used by online platforms
    • 26:07 Using youth juries in the Unbias Project
    • 27:25 A series of “nudges” that manipulate our behaviour 
    • 29:59 An oversight committee
    • 31:52 Who should bear editorial responsibility?
    • 36:25 Inherent algorithm bias
    • 40:30 Opening & streamlining access to platforms also restricts your freedom of expression
    • 44:19 How effective current regulation is
    • 49:23 Ansgar’s thoughts on Facebook stopping use of its facial recognition technology
    • 51:31 How effective is the #deleteFacebook movement?
    • 55:22 Why young people feel “disempowered” when using social media
    • 1:00:50 YouTube Kids versus Instagram Kids

    If you’re looking for more inspirational stories, check out:

    • Esther Wojcicki: Author, Educator & Mother – on her T.R.I.C.K. methodology to raising successful people like her daughters (e.g. Susan Wojcicki, the CEO of YouTube)
    • David Grief: Senior Clerk to Essex Court Chambers on what it’s like to nurture the careers of international judges in the UK Supreme Court & other international European courts
    • Ooi Boon Hoe: CEO, Jurong Port – on what it’s like to run one of Singapore’s two major commercial shipping hubs
    • Karl Mak: Founder, Hepmil Media (SGAG, MGAG, PGAG) – on building the largest meme business in Southeast Asia

    If you enjoyed this episode with Ansgar Koene, you can: 

    Leave a Review

    If you enjoy listening to the podcast, we’d love for you to leave a review on iTunes / Apple Podcasts. The link works even if you aren’t on an iPhone. 😉

    Patreon

    If you’d like to support STIMY as a patron, you can visit STIMY’s patron page here

    External Links

    Some of the things we talked about in this STIMY Episode can be found below:

    • Ansgar Koene: LinkedIn, Twitter
    • Subscribe to the STIMY Podcast for alerts on future episodes at Spotify, Apple Podcasts, Stitcher & RadioPublic  
    • Leave a review on what you thought of this episode HERE or the comment section of this post below
    • Want to be a part of our exclusive private Facebook group & chat with our previous STIMY episode guests? CLICK HERE.

    STIMY 65: Ansgar Koene - Global AI Ethics & Regulatory Leader, EY

    Ansgar Koene: It's great to be able to use somebody's conversation on Twitter.

    At that time, Twitter's API was still more open, so it could be used for linguistic research. But if you were the person who had actually posted those things on Twitter, and you found out that somebody was doing some kind of analysis of you, and maybe they're not doing an analysis of how language is used, but they're doing a psychological analysis of how human behavior is shaped.

    And then you think: somebody is building a personal profile of me from things that I posted on Twitter. Do I really feel comfortable about that? Even though I know I'm communicating in a public space, I know that Twitter is visible from all kinds of sides. The analogy that I've taken is it's like you're talking to somebody in a pub or in a cafe or something like that.

    You know that people around you can hear it, but that's different from having somebody who's sitting next to you recording what you're saying. You wouldn't be comfortable with that, even though you know you're in a public space speaking.

    So there's a question about how we actually communicate with the people who generated that data, and about the ethics of using data for a purpose that is different from what the person originally thought the data was going to be used for.

    Ling Yah: Hey everyone! Welcome to episode 65 of the So This Is My Why podcast.

    I'm your host and producer, Ling Yah, and today's guest is Ansgar Koene, the Global AI Ethics and Regulatory Leader at EY and Senior Research Fellow at the Horizon Digital Economy Research Institute at the University of Nottingham.

    AI is transforming the world, but it also brings with it a whole host of problems, particularly when it comes to regulation: how is our data collected and used by AI? When we log on to platforms like Facebook and Spotify, they're tracking our use and providing suggestions of similar content, which is incredible, but it has also been shown to be a way in which these systems nudge us into thinking and behaving a certain way, or in other words, manipulating our behavior.

    How do these recommender systems actually work? What are the inherent biases found in AI? And what are some of Ansgar's thoughts on some of the most recent issues, like Facebook's decision to stop using its facial recognition system?

    We deal with all that and more in this episode.

    And a quick note before we start.

    If you've enjoyed this So This Is My Why podcast, I'd love it if you could give a quick review on Apple Podcasts or any of the other podcast listening platforms you're using. Reviews and ratings really do help the podcast. And I read every single one of them.

    Now are you ready?

    Let's go.

    Ling Yah: A lot of the work that you do today relates to how we are shaped by the kind of information and environment that we're in.

    So I wonder about the environment that you grew up in and how that shaped you and steered you to your current path.

    Ansgar Koene: Surely. And thank you for inviting me to be on this podcast. My youth really started off quite internationally, which is also one of the reasons why I don't really identify much with one particular country.

    When I was half a year old, my parents moved from the Netherlands to the US, and I was there at a research laboratory. My dad was working in high energy physics. That came a little bit later. This was Brookhaven National Laboratory in the US, and we were actually living on campus there.

    So it was an international community, and I'm told, I don't remember, I was half a year old, that my first friend as a baby was a Japanese girl. And so we were just communicating like babies do. But that may have had an influence on me, considering that my wife is also Japanese.

    So my dad was early in his career as a researcher. As an academic you go through a period of being a postdoc researcher, which tends to be shorter-term kinds of projects, so you go to a couple of different labs. So it was Brookhaven for a bit, then to Canada, to Winnipeg, one of the coldest spots in Canada in the winter, and then back to the Netherlands.

    I actually went to a European school, which is a school system set up by the European Union as part of the aim of getting people to think more across borders.

    While we were living in the Netherlands, I was in the German section. Usually it's like seven to nine kids in a class. I was fairly good at math and science kinds of things.

    It was a great, supportive kind of environment, but because it was an international school, not a national one, it was a bit separated from the rest of what's going on in the country. And there was also a bit of travel every day, going by train for 45 minutes.

    So there was some time to read as well, and we were reading a lot of science fiction, my parents included. Plus there was my dad's work in science at that time: he'd already joined the Dutch branch of CERN, so Nikhef, the Dutch high energy physics institute that contributes to the work at CERN, and we also spent one year in Geneva because he was placed there then.

    When I was looking to graduate from high school, my thinking was basically: obviously I'm going to do something in physics or engineering. And so I moved into doing electrical engineering as a degree, not so much from having thought deeply about what are all the actual opportunities that exist, but more from: well, I'm fairly good in this kind of direction,

    and it's cool doing engineering things. I was thinking a bit more in the area of robotics at that time. Not so much applied robotics but more developmental robotics: robots that learn from engaging with the environment, and in a way can be a tool for us to study how people and animals are shaped by their environment and the way that they then also come to understand their environment.

    My master's project was on fuzzy neural networks, which combine fuzzy logic and artificial neural networks. At that time, artificial neural networks were much smaller than the deep learning systems that we currently have. Basically the compute facility wasn't there yet.

    We weren't really using GPUs yet. We weren't using the massively parallel processing kinds of systems in this space. So it was a very small kind of neural network, but the challenges were actually quite similar to what we have now, which is: how do we understand what the system has actually learned? Especially if you're looking at it, like I was, from the point of view of how we can use these types of things to learn how humans also learn and engage, basically how the whole interaction between external stimulation and internal model generation works.

    So that's why the fuzzy logic part came in, which is a way of trying to create rules in the machines that are easier to read.

    So instead of having a rule that says 'if this parameter reaches 7.6, then do that', where you don't know how to interpret 7.6, it would be 'if this parameter is high' or 'if it's in the medium range' or something like that, which is more the way in which humans look at these kinds of things.

    It provides distributions that cover the whole range, and the idea was to have the neural network parameters actually learn fuzzy rules, so that you would then be able to read out what it had learned. The theory is great, and it worked reasonably well. The problem is you would present it with a test set where, you know, as a human, you can describe it with five rules.

    It's an experiment we're doing to try and make things transparent. The system learns to reproduce the behavior fairly well, but it learns like 50 rules. Can I interpret these 50 different rules for something that I know should be captured by just five? How can we reduce the complexity of the system?
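
    To make the contrast concrete, here is a minimal, purely illustrative sketch of the kind of fuzzy rule Ansgar describes; the membership ranges and function names are invented for this example, not taken from his actual system.

```python
# Minimal sketch: a crisp threshold versus fuzzy membership functions.
# The ranges below are hypothetical, chosen only to illustrate the idea.

def triangular(x, a, b, c):
    """Membership rising from a to a peak at b, falling to c (shoulders allowed)."""
    if x < a or x > c:
        return 0.0
    if x <= b:
        return 1.0 if a == b else (x - a) / (b - a)
    return 1.0 if b == c else (c - x) / (c - b)

def fuzzify(x):
    # Map a raw value to degrees of "low", "medium" and "high".
    return {
        "low": triangular(x, 0.0, 0.0, 4.0),
        "medium": triangular(x, 2.5, 6.0, 9.0),
        "high": triangular(x, 5.0, 10.0, 10.0),
    }

value = 7.6
print("crisp rule fires:", value >= 7.6)     # True, but why 7.6? Hard to interpret.
print("fuzzy memberships:", fuzzify(value))  # roughly half 'medium', half 'high': readable as a rule
```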

    So that was actually what a lot of the work was on. And that's very similar to the types of challenges that we see as well now with, the big deep learning systems, it's less the problem that you wouldn't be able to trace a particular kind of relationship from an input to an output. You know, how did it get from this to that?

    But rather you have so many parameters in there. How can you reduce it down to these are actually the relevant ones. This is the thing that really drove that particular decision. The kind of big challenge behind explainability which still goes back to my initial question of how did we learn?

    How did we come to interpret the world in this particular way based on what we experienced before?

    It depends a bit on what it is that you're trying to achieve with the particular thing, of course.

    You can say the only thing that matters for me is to get a good, reliable outcome, in which case maybe a high level of complexity isn't that problematic, as long as you can actually prove that it's going to be reliable, and that's difficult.

    But if you're saying, I actually want to understand what's going on, and often the reason why you want to understand is because that is the way to get the proof of how it's going to behave when it comes to a new stimulus, then you're always in the position of: if I reduce complexity, I'm simplifying something and it won't be as perfect.

    There's the saying, every model is wrong, but some models are useful.

    If you say we want to understand the human brain by simulating it down to the absolute details, so you're just going to recreate a complete human brain, then you're back where you started, because it's got the same level of complexity.

    But actually that is the direction that I went into after finishing my masters. My PhD was in computational neuroscience, because on the one hand there was the state of the technology at that time, and also AI was going into a mini winter again. So funding was being reduced. There basically wasn't that much pure science space for AI at that particular point.

    I actually went to Utrecht University, in the physics department, but doing computational neuroscience in a slightly odd group in the physics department called Physics of Man, physics of humans, looking at how human perception and behavior relate to the physical stimuli in the world. I was particularly looking at eye movement control, and basically the question of: you have six muscles on the eye, but as the eye rotates, the exact way in which the force that a muscle exerts on the eye, and the impact that has on the movement of the eye, changes a bit.

    So you look at how the physics of the rotation relates to the control signals that you need to have, and then at how the brainstem circuit and the human brain actually generate those. So looking at the sensorimotor loop.

    Sensorimotor control and sensory perception are areas where there's a lot of work in neuroscience, and one of the reasons is that it's easier to measure objectively.

    If you're doing something completely cognitive, you don't have an objective way of measuring: did it happen or not? You can only rely on what the subject tells you, and the subject is making inferences and you have no idea how those are working. But if you're looking at something like sensorimotor control, you can measure how the eye moved, and you know where the visual stimulus was, and those kinds of things.

    So you have at least the ultimate input and ultimate output, objectively, and then you can start working from there.

    Ling Yah: To do the work that you were doing, wouldn't that mean that you needed access to data that, say, a psychologist would have, which you wouldn't necessarily have?

    Ansgar Koene: Even though we were in the physics department, we were part of an interdisciplinary group.

    So together with people in the psychology department, in the biology department, in the school of medicine. I guess my work was actually more with the school of medicine. And we were running experiments ourselves as well with human subjects. So I was a human subject for tons of experiments, which basically consists mostly of sitting in a dark room, looking at a single dot of light and trying not to fall asleep while you're doing the task.

    But there was a lot of interdisciplinary work involved and, actually, that is one of the things that I enjoyed very much: interdisciplinarity and communicating between disciplines, bridging how people with different perspectives look at and understand the same thing, making sure that when we say a word, we mean the same thing, and also being aware of the underlying concepts that people bring to it. In a way that probably also connects to my upbringing in an international school, where everybody's coming in with a different language background, et cetera.

    And you're communicating interculturally, and in a way this is the culture of scientific disciplines and bridging between those. And that's really where I continued to work, always on the intersection between disciplines, be it in doing computational neuroscience or psychophysics.

    That's approaching psychology from physics, or doing biologically inspired robotics, or using robotics to test models of how the human brain works, those kinds of things. It was always on the intersection between things, and also gradually starting to look at the intersection between science and things outside of the academic realm.

    After I finished my PhD I went to France and was working in a medical research lab there, and that's actually also where I met my wife, in a research lab. And then from there I came to the UK for the first time; I've been in and out of the UK a couple of times now. I was working in London at UCL, that was psychology research, and then from there to Kyoto, to a robotics lab supporting the work on a humanoid robot, obviously more primitive than the current versions, but similar to the types of robots that Boston Dynamics is doing now, which was very much about being able to dynamically respond to how the stimuli change in the world in a quick way, so that the system doesn't fall over and stuff like that.

    And then from there back to the UK, to the University of Sheffield, where I was working in a psychology department, but the head of that particular research lab is actually the head of the Sheffield robotics unit as well.

    So continuing that intersection kind of thing. Then from there to Tokyo, to the RIKEN Institute, which was actually motivated because my wife got involved in Japanese politics. But this was 2010 that we came in, and you might remember what happened in 2011 with the big earthquake and the Fukushima nuclear disaster, which motivated us to leave in 2012. It just wasn't that fun going to the supermarket and checking: where is this grown, is this grown somewhere close to where there's radiation? Where was this fish caught? Those kinds of questions.

    While we were in Tokyo, I started thinking a bit outside of the computational neuroscience kind of space, more around how internet data, the wide range of data about human interactions on the internet, can be used through something called computational social science. So studying human interaction or human use of language through people's Twitter feeds.

    So I started playing around with that idea and created a project proposal in that space. At the same time I was also looking at data sharing between scientific disciplines, because there was a growing interest around sharing of MRI data and electrophysiological data, so the direct measurement data, but less around sharing of the behavioral data.

    But actually, I know behavioral roboticists who are trying to make a robot learn how to move its own limbs, and they are quite interested in the kind of data that psychologists collect about babies and how they learn to control their body, but they publish in different journals.

    Their data is held in different formats, those kinds of things. And so I was actually looking to start up a data sharing thing between those. So when we moved back to the UK again, we went to Birmingham, into a lab which was doing work on human-robot interaction, handing over objects between a human and a robot.

    And frankly, the way in which most robots will hand an object back to a human is really annoying, because they take a lot of time to make sure that they're very precise in how they hand it over. But if you look at how humans interact, they actually focus much less on the precision, because as I move towards you, you will adjust to how I do it and we will get there together. What's more important is that we do it smoothly, that I don't end up waiting a couple of seconds before you react.

    At the same time as we were looking at that data sharing, I started a sort of wiki kind of thing for that. But that ran into a spam problem. I didn't have a lot of resources to maintain that kind of thing.

    People started spamming it with advertising and things that weren't supposed to be on there. So in the end I had to close it down, but it did help bring me to the next step, which was moving more into that computational social science.

    I started running some workshops at the University of Birmingham around that, and actually invited academics from various parts of the UK. Through that I made a connection with the University of Nottingham, with the Horizon Digital Economy Research Institute, which is looking at the impact of digital on society. They were starting up a new project called CaSMa, citizen-centric analysis of social media, and that's where I moved to next and was able to focus much more on that, and where the focus also shifted more into the ethics questions. Like:

    Okay, it's great to be able to use somebody's conversation on Twitter.

    At that time, Twitter's API was still more open, so it could be used for linguistic research. But if you were the person who had actually posted those things on Twitter, and you found out that somebody was doing some kind of analysis of you, and maybe they're not doing an analysis of how language is used, but they're doing a psychological analysis of how human behavior is shaped.

    And then you think: somebody is building a personal profile of me from things that I posted on Twitter. Do I really feel comfortable about that? Even though I know I'm communicating in a public space, I know that Twitter is visible from all kinds of sides. The analogy that I've taken is it's like you're talking to somebody in a pub or in a cafe or something like that.

    You know that people around you can hear it, but that's different from having somebody who's sitting next to you recording what you're saying. You wouldn't be comfortable with that, even though you know you're in a public space speaking.

    So there's a question about how we actually communicate with the people who generated that data, and about the ethics of using data for a purpose that is different from what the person originally thought the data was going to be used for.

    Ling Yah: It sounds like the root of that concern is really in the lack of consent that was obtained, which is one of the three main ethical concerns that you highlighted in the 2015 paper that I read, 'The ethics of personalized information filtering', which I thought was really interesting.

    Ansgar Koene: Yes. Yeah. I mean, it started from that, from the consent kind of question and sort of the privacy question.

    And if we put it also in historical context, this is 2014-15, which is when GDPR was still being worked on. And a lot of the question was around big data and data collection. AI hadn't really become the theme of discussion so much yet.

    It was still the data collection phase of it. And so yes, the initial thing was more around those consent methods. And we all know that the consent tick box, the 'I give consent', is meaningless, because even if you were to read the text, you're not actually thinking about it in the same way, because you are thinking about this piece of data that I'm making available right now.

    You're not thinking about this piece, plus the piece that I'm going to make available tomorrow, et cetera, and how those all add up into a bigger set of knowledge about you than that single item that you're engaging with. And it's not reasonable to expect a person to think of that when they're just confronted with a tick box.

    And actually, I really want to do this thing now, and I have other things I want to be doing, so I'll just tick yes. So that's why we need a different process. In the academic space, it needs to be the ethics review committee, before the research project even starts. They need to be thinking about what is the right way of doing it. Is consent the only way?

    Even if you were to get consent, do we consider this to be an ethical research project? And one of the big issues in the academic space is being consistent in how we actually do the ethics review across disciplines, because the same research project could be run out of the computer science department, the psychology department or the economics department, and they all think about human interaction and electronic data differently.

    The psychology department may say, this is a human interaction that we're having right now.

    And so therefore we need to be treating this the same way as if a human was in the lab. The economics department might think of this more than the recording of this as this is archive data. There's no human i nvolved in this interaction directly. So it's just about intellectual property rights, so different ways we're thinking about the ethics.

    So it started with the thinking about the privacy, the consent kinds of questions, but then through working on that, and actually it was at a conference I went to in France where there was a lot of presentations about record recommender systems that I started thinking about closing the loop, that it's not just about the collection of the data, but also how that data then is used to shape the kind of information that we received back.

    And that your experience with the internet, it's going to be different from my experience of the internet as recommender systems keep shaping so-called personalizing, but is it personalizing or is it serving the company that's delivering you the product?

    Ling Yah: There are essentially three types of recommender systems, right?

    And the majority are hybrid. Can you elaborate a bit for those who don't actually know what recommender systems are?

    Ansgar Koene: So recommender systems are basically any of these systems, and they're actually necessary, that reduce down the huge amount of data that exists to those things that are most likely to be relevant to you.

    Now, whether it is relevant to you because it's of interest to you, or relevant because the platform is trying to sell something and wants to find the best way of targeting people to sell to, that is a business model kind of question. But recommender systems basically help you find the potential needles in the haystack by reducing things down to the most likely relevant ones.

    And there are different ways of approaching that. One is to track what you are doing, so it's collecting data from you directly: your previous interactions with the system, or it could be you giving a couple of keywords, for instance. And so a search engine is, in a sense, also a recommender system.

    The other way is to look at other people who are similar to you. What have they shown interest in? That's likely something that you're going to be interested in. That's something that social media does a lot, and that's where something like Facebook's social graph comes in.

    It's identifying: well, you've been interacting with these things, that other person has been interacting with those, so that means you're probably similar in your interests. So if that other person just showed an interest in something, then you're likely to be interested in it too, so we will serve that to you. That's a more social way of connecting and identifying what is the right kind of recommendation to give to people.

    Often it is a mix of these: it uses the history of your own interactions and your similarity to others of your type. They both have their challenges. If you're relying on your history: if you're just joining the system, you don't have a history, so how are we going to start off this process?

    Whereas if you're doing the social graph thing, we can ask you at the beginning to import your friends circle from your phone, as we are familiar with from Instagram and the like, and we'll be able to start from that. But of course it has the other challenge that if you don't have a network, if there aren't many people on the platform, then it won't be able to give you any good recommendations, which is where the network effects come into play.
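
    As a purely illustrative sketch of the collaborative, "people similar to you" approach described above (not how any particular platform actually implements it; the toy data and function names are invented for the example):

```python
# Toy collaborative-filtering sketch: recommend items that users similar
# to you have interacted with. Illustrative only.
import numpy as np

# Rows = users, columns = items; 1 = the user interacted with the item.
interactions = np.array([
    [1, 1, 0, 0, 1],   # user 0 (the person we recommend to)
    [1, 1, 1, 0, 0],   # user 1
    [0, 0, 1, 1, 0],   # user 2
])

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def collaborative_scores(user, matrix):
    """Score unseen items by how much similar users interacted with them."""
    sims = np.array([cosine(matrix[user], matrix[v]) if v != user else 0.0
                     for v in range(matrix.shape[0])])
    scores = sims @ matrix                 # weight other users' interactions by similarity
    scores[matrix[user] > 0] = -np.inf     # don't re-recommend what the user already has
    return scores

print(collaborative_scores(0, interactions))  # item 2 scores highest: user 1 is most similar
```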

    Where, you know, if you have a big social media platform, it will be the attractive place to be, because it will naturally be able to produce the better recommendations, and obviously your friends are more likely to be there. Whereas you may be creating a new social media platform which has much better functionality in theory, but because there's nobody there, nobody wants to go there.

    So how are you going to get that process going, to get people there? That's the difficulty. And that's one of the reasons why relying on just free market competition as a way of making sure that social media platforms operate in the best interest of users doesn't really work.

    So the self-regulation idea, that if the platform is doing things that users don't really like, users will leave, doesn't really work that well, because there are so many other factors that make you stick to that platform: the fact that your friends are already there, those kinds of things.

    That's why there's the growing recognition that this space needs external regulation from government. And that is also where my work tended towards after starting to look into the ethics side, and then starting to look at the recommender systems and the feedback loop, the bias that can arise within that kind of feedback loop, which is the UnBias project that we started.

    We also started increasingly looking at engaging with policymaking and with industry. So when we actually proposed the UnBias project, we consciously made one part of the project, which was the part that I was leading, stakeholder engagement, including engaging with policymakers, so responding to parliamentary inquiries, writing opinion pieces, publishing things in The Conversation on the editorial responsibility of social media platforms, as well as working with civil society.

    So we started working closely with the 5Rights Foundation. Actually they weren't called that yet, but Beeban Kidron, who leads it, created the 5Rights Foundation a year or so after we started collaborating. But she was already working on the question of how young people understand the internet, and on understanding the concerns about privacy and personal agency on the internet from the point of view of young people, not having adults tell them, don't do this on the internet, don't do that, because of safety or evil people on the internet.

    But actually, what is it that young people think? And what do young people already know about this space? How much do they know? Sometimes they know more than the teachers do.

    And we had great conversations with groups of teenagers; 13 to 17 year olds was our focus group. They'd been doing some research on how long Snapchat retains the data: if you sent a message that was supposed to disappear after 24 hours, does it really completely disappear from Snapchat, or do they keep it for a longer period?

    Those kinds of things. One person had done research by checking Wikipedia and other sources. The other person had been reading through the terms and conditions deeply. They came to different conclusions. It's difficult to know who was right.

    Ling Yah: So in the UnBias project, you actually used youth juries.

    What were the findings that you had from these youth juries?

    Ansgar Koene: So the youth juries were really about hearing from the young people. The idea is we go to them. We have a group of something like 12, could be plus or minus four, people in that group, and present them with a couple of scenarios.

    We just ask them about which kinds of social media they're using. Have you ever noticed how, when you looked at one thing on the internet, afterwards you go onto social media and start seeing more ads that are related to that? Those kinds of questions. But then from there we really let the young people take the conversation forward: what is it that they are concerned about?

    A lot of their concern was around questions of being able to be in control, and being annoyed that they're using this thing and they keep getting recommendations that aren't right. Or that the thing keeps popping up when they're trying to go to sleep. They know that they're supposed to go to sleep now, but it keeps giving notifications and they're having difficulty putting the thing aside.

    So actually they were raising a lot of the kinds of issues around dark patterns that we're seeing now as well: how the interface is crafted in a way to make you stay on the platform, even when you know that that isn't good for you.

    Ling Yah: That's actually the third ethical concern that was in your paper, right?

    The fact that it's actually a series of nudges, and it's not as though the action comes out of the blue. It happens over a long period of time.

    Ansgar Koene: Yes. Yes. And that's a huge challenge from a regulatory perspective: how can you identify whether this is a problematic thing?

    Because it's easier to regulate if you say, look for where this particular recommendation or this particular mode of action of the platform has directly caused a problem. But if it is not like that, if it's just a case of, yeah, it gave you a bit of a suggestion towards this, and then you followed it, and then it gave you a bit more of a suggestion for that,

    and you gradually end up spending more and more time thinking about particular issues, then you can't pinpoint it: this particular action was the thing that was the problem. Or, at the bigger scale, in the impact that social media is having on democracy: how it is just gradually pushing groups away from each other, making it more difficult to communicate with people who don't have the same mindset as you do, those kinds of things. But there isn't really a single cutoff where you can say, ah, this is where they did something bad. So how do you measure that, in order to be able to say this is a problematic usage of the technology?

    Ling Yah: You say that with the movement of anti-vaxxers, I mean, this manipulation is what led to it becoming such a huge movement.

    Even if you were aware that this was happening, how do you know when to step in?

    Ansgar Koene: I think what we're seeing now, with the kind of data that is being presented before the various parliaments in the Facebook leaks, is clearly the kind of thing where we have a problem.

    I mean, we've known from the beginning that if you are in charge of doing the recommendations, you are shaping the kind of information that people are seeing. And so you should be taking responsibility. You can't just say, well, we didn't create the content, so therefore we don't have responsibility for it.

    We choose whether it goes at the front or the back of the content list. We choose whether it's in that space that we've, through meticulous psychological research, identified as the spot on the screen that you're most likely to be paying attention to.

    And then to see in this data that they've actually researched this, and they'd come to the conclusion that they are definitely driving people into problematic spaces, but they're not going to change it because it would impact their bottom line economically.

    We clearly have a problem, and we need an external party to be making those kinds of decisions, a party that isn't linked to that question of whether it makes money or not, the impact it has on the funding stream.

    Ling Yah: A bit like the oversight committee that Facebook has established then?

    Ansgar Koene: Yes, but with more power, because the oversight committee at Facebook can only make rulings regarding individual posts or individual kinds of actions. They can strongly suggest that Facebook should change its approach, but they still can't actually force Facebook to change anything.

    It needs to be more like what we had in radio and TV and other broadcast media. We have an external regulator that looks at it and says, if you're actually not following these rules, we can take away your license and you won't be allowed to continue broadcasting.

    There needs to be something of that nature that can actually step in if the social media platform is knowingly not stopping the spread of fake news, is knowingly pushing people into more extremes, even though they are aware that that can lead to harm to people.

    I mean, Facebook, this was now a couple of years ago, was also talking to advertisers, telling them: based on the data that we collect about people, we can identify that this young person is currently in a more depressed mood, and therefore if you give them that certain ad at this moment, they're likely to buy it.

    How can that be considered ethically responsible behavior? As a society, it doesn't fit with our societal moral values. And so there needs to be an actual capacity to trace these kinds of behaviors. Being able to do it is of course not the same thing as actually doing it, but them talking to advertisers about this suggests that they do intend to do it.

    So there needs to be an oversight capacity that can actually track how these things are being used, monitor this, and be able to directly intervene where there is harm.

    Ling Yah: You've been writing about this editorial responsibility being borne by social media since 2016. I imagine not a lot of people were on your side then.

    Do you feel like the climate has changed, especially in recent times? Are people more open to the fact that yes, these platforms play a greater role than we would like to think and acknowledge?

    Ansgar Koene: Certainly the recognition that this is a serious problem has grown a lot.

    I think even in 2016, there were a lot of people that would agree with the core issue, that this is something that should be happening. But it wasn't recognized how much of an impact it really has, and therefore that this needs to be acted on more strongly.

    And of course there is a kind of tension between different fundamental rights. There's the tension between freedom of speech and freedom of expression, and the rights of being an entrepreneur and creating your own platform and being able to experiment with what is the best tool. So the tension still exists, but it is now more clear how significant the impact is on the rights to safety and security and basic freedom of thought; that, in a sense, somewhat restricting a certain level of freedom of speech opens up more freedom of thought, because people are being less manipulated in how they are shaping their concepts.

    So I think it has moved up the agenda, in terms of how important it is. And we're seeing, especially in the US, where freedom of speech is a core fundamental principle, more so, or in a slightly different way, than in Europe, that perspectives are changing a bit on that as well.

    I do expect that there will be some kind of a response in the US this time. I don't know how many times Facebook has been before Congress for issues. They accumulate, and I think this time a tipping point has been reached and something will happen, but how much will happen and how exactly, that isn't there yet.

    It's an interesting thing to be observing.

    My own focus has shifted a little bit, away from the social media platforms and the communication side, to the way in which the underlying technology, the AI, is impacting the broader ways in which businesses use these tools for lots of things, not just the communication side, but also things like logistics or financial services.

    My work in standards development, which came out of the UnBias project as well, was the route towards bringing the findings of that research project into the industry space. Standards are an interesting space because they often get criticized as being very dominated by industry.

    Legislators or parliaments set rules, and regulators are supposed to implement them. And to do that, they point to standards. They say, well, if you are compliant with ISO 27001, then you are probably good as far as cybersecurity is concerned. So then it's actually the standard that defines how they are implementing things, but the standard is written to a large part by industry players.

    And so there's the question of whether it undermines the democratic process, because we've got this picture of backroom dealings, the smoky room where the industry guys get together and say, well, they tell us we're supposed to do this, but we'll implement it in that way so that we can get away with this.

    It's definitely true that standards development is dominated by industry participants. But it's not because it is not open to other participants. Actually, anybody can join the British Standards Institution. You can say, I have been working in this space, I'm an academic or civil society person who's been looking at AI, I have an understanding of this space and have an interest in this new standard on algorithmic bias.

    You can join the working group and you can participate in developing it. The challenge is finding the time to do it, because it's all voluntary. And that's where industry has the advantage: from the standards development group's side it's voluntary, but a company can just appoint a person and say, well, your job is going to be working on the standards, and we're going to pay you for it.

    But it is in principle open to participants, and in the IEEE P7003 standard we did make an effort to get an equal spread of industry, academic and civil society people contributing, especially because it's an ethics-related standard. We need not just technical people. We need legal people, people with a psychology background, people with a cultural and social science background; we need all of those perspectives in there to build an appropriate standard. And it's been an interesting journey.

    Ling Yah: How do you determine how diverse your team needs to be? There was one thing I was reading up on, on the topic of algorithmic bias, which was COMPAS.

    You know, ProPublica's investigative report. I was shocked that there was an inherent bias against black defendants in a system determining whether someone was likely to reoffend.

    I mean, how do you break out of that loop?

    Ansgar Koene: Well, the first thing is identifying that the problem exists, which is what the journalists did in this case.

    The second issue is where the problem comes from. And it's really a nice example, because computer science people have been looking at this challenge from a sort of mathematical perspective: what is the best equation for defining fairness? And the problem is it's not actually a mathematical problem, because there are many different ways in which you can look at fairness.

    And so in this particular case, the manufacturer of the algorithm in the COMPAS case was arguing that the system is unbiased because they were looking at accuracy. They were looking at how frequently the algorithm gets it right, and it gets it right for white defendants and for black defendants at the same rate.

    But when it gets it wrong, it gets it wrong differently.

    For white defendants, when it gets it wrong, it gets it wrong by judging them less likely to reoffend. For black defendants, when it gets it wrong, it gets it wrong by judging them more likely to reoffend. In the kinds of errors that it generates, it is significantly biased, which is the dimension that the journalists looked at.

    But the problem is, because of the underlying statistics of the actual people who were arrested, you cannot make the system unbiased across all dimensions at the same time. And so then it becomes actually a societal question: what do we think is the appropriate way of balancing this algorithm?
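
    To illustrate the point with made-up numbers (these are not the actual COMPAS or ProPublica figures), here is a small sketch of how two groups can have identical overall accuracy while the kinds of error differ sharply:

```python
# Illustrative numbers only: equal accuracy across two groups,
# but very different false positive and false negative rates.

def rates(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    false_positive_rate = fp / (fp + tn)   # wrongly judged "likely to reoffend"
    false_negative_rate = fn / (fn + tp)   # wrongly judged "unlikely to reoffend"
    return accuracy, false_positive_rate, false_negative_rate

group_a = rates(tp=40, fp=30, tn=20, fn=10)   # errors skew toward false positives
group_b = rates(tp=20, fp=10, tn=40, fn=30)   # errors skew toward false negatives

print("group A: acc=%.2f  FPR=%.2f  FNR=%.2f" % group_a)
print("group B: acc=%.2f  FPR=%.2f  FNR=%.2f" % group_b)
# Both groups have accuracy 0.60, yet FPR and FNR are 0.60/0.20 vs 0.20/0.60:
# the kind of disparity in errors that the journalists highlighted.
```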

    Do we even think that using an algorithm in the justice system is appropriate?

    That's another part of the question. If you're using an algorithm, what is the right definition of fairness? And that is something that needs to be decided, especially if we're talking about something in the justice system, which affects society at large; this cannot be a decision made by just some company that is selling the system.

    It needs to be something that society is made aware of and can have a voice on in the debate; that is where the democratic process needs to come in. And so we need people with lived experience within this kind of space to be able to flag up what the actual issues are. And that is especially the case when the issue is whether we should even be using an automated process.

    At least some of the transgender people that I've spoken to will, for instance, say: I don't actually care whether the algorithm is capable of going beyond the binary, not just assigning male or female but being able to assign something in between. What I fundamentally disagree with is being assigned to something.

    And especially being assigned to something without being asked about it.

    It's back to the question of agency. Who actually has control over what is going on here.

    And so, the other question is not just how can we make the algorithm work?

    Are we looking at this from the right perspective, in terms of the business model behind wanting to deploy an automated system? Is this a domain where optimizing the way in which things operate means making it economically more efficient, and is that economic efficiency maybe not the best thing to aim for in all spaces?

    And that, for instance, is something we also see again in things like social media. One of the elements that people have highlighted that would actually make social media better is to make it less efficient in how quickly you can share stuff. Make it so that you cannot just press the share button before you've read the article.

    Ling Yah: Or just hide the types of button, not the content itself.

    I've noticed on social media platforms, there are moments where it is difficult to share certain articles or certain posts, but then the shareability element comes back again. And it seems as though they want people to come back to their platform and be using it all the time.

    So it seems like a tug of war that you never really win.

    Ansgar Koene: Yeah. Well, I mean, it's part of social communication. I don't think it's something that one wins or loses; it's an ongoing process. And part of the thing is, how do we learn to communicate with other people through continuously engaging in that process?

    And as we're doing one thing for a while, we may feel like, I don't think this is the right thing to continue doing, and then we try something else. And people need to have the ability to make those kinds of changes. My own thinking, actually, and one of the reasons why I'm not much of a Facebook user or anything like that: when we started with the internet, people were building their own web pages and everybody could build their own web page completely differently.

    And you could just use hyperlinks to connect with each other. That's Tim Berners-Lee's idea of a free internet: use a hyperlink to connect to each other and everybody can connect how they want.

    Facebook streamlines this. It makes it so you don't need to learn how to create a web page, and when you want to look for something, everybody's Facebook page looks the same.

    But it also means you don't have the freedom to create it in the way that you want, and you must present yourself in the way that Facebook has decided that you should present yourself. And that's true for all of the platforms. There's efficiency in having that templated way of doing things, but it also restricts your freedom of expression.

    And in a sense, personally, I was more interested in building my own web page and doing things like that instead of going onto that kind of platform. But of course, if everybody has their own kind of page, it makes it more difficult to do the recommender systems, knowing where to find what.

    Doing that might be more possible now, with natural language processing having progressed a lot more. But then we get into still other questions about scraping people's data from outside of your platform. It's the kind of thing that needs to keep developing, but in order to get it to properly develop, we need to make sure that it doesn't get locked in to a single organization or business.

    It could even be a government one that has full control over this, with nobody else able to move away from it. So that's where anti-competitive behavior becomes a big point, which is also what big tech is currently being challenged on. And it'll be interesting to see how exactly that develops, and how, in the EU, the current work on the Digital Markets Act and the Digital Services Act plays out.

    To what extent are those going to be focusing only on the very big firms, or are they going to start impacting the smaller ones as well? Getting that balance is tricky. How do we make sure that we regulate the use of artificial intelligence so that we don't get unintended negative consequences, but in a way that doesn't kill the ability of smaller firms to have that creativity, to do things? Which is why we need to be thinking

    not just about how to make solid regulation, but also about how we can support compliance. So how can innovation hubs support this?

    So, for instance, if we're saying that as part of ethical development you need to be engaging with the stakeholders who are going to be impacted by the kind of system, a large organization is probably more able to do that, to have the capacity to reach out to others. Can we use something like an innovation hub as a place where stakeholder groups can have representatives, and smaller organizations can go there to find the stakeholders and engage with them in an easier way?

    We need to think creatively, not just about the regulation, but also about how to support compliance with that regulation. And standards play a role in that as well, and I'm continuing to work in that kind of space. But it's an interesting, continually developing process.

    Ling Yah: How effective do you think the current regulation is? I mean, you have obviously the GDPR, but what else can the government do?

    Ansgar Koene: Well, I mean, is GDPR effective? I think it has had a huge effect. It hasn't worked perfectly; to a large extent that's because of insufficient funding support to the regulators that need to be doing the work. It's taken the legal community some time to understand the regulation as well, including judges; some of them haven't really understood it yet. But personal data protection is only one part of the puzzle.

    And like we talked about, there are the network effects and the way in which market competition doesn't really work in that space, and therefore things like the Digital Markets Act, the Digital Services Act and equivalents in other regions are coming forward, and that needs to push in.

    So that doesn't currently exist yet.

    And we see the problem: it's basically impossible for anybody to compete with Amazon at the moment because of the market effects. When it comes to the use of AI, at the moment there are no regulations that are specific to that. There's a proposed regulation, which is still going to take probably two years until it's finalized.

    But that's not going to be the final one, because this is a high-level, horizontal piece of legislation that's looking at whether your system poses a high risk or not to safety, security and fundamental rights. And the majority of systems will not be considered to be high risk.

    But that doesn't mean that there isn't something about those systems that potentially needs to be regulated. It's just that they're not posing a direct risk to human safety. And so there are going to need to be sector-specific updates to existing regulation. And often it's not going to be a case of saying we need a new AI regulation, but rather we just need to make sure that existing protections are sufficiently applied as AI enables new business models.

    A nice example of that, I think, is things like employees' rights when it comes to being monitored, being micromanaged. These problems are arising because they're being facilitated by AI, but it's not actually an AI problem as such. It's a business model problem. It's just that the AI has facilitated that kind of business model.

    Is it more the way in which organizations treat people that needs to be regulated because if you were to regulate it on the basis of technology, you could easily go to the next new technology that comes along and do the same kind of business decision that would have negative consequences, but because the regulation is based on technology, it wouldn't apply.

    So you want to be fair. You want to be identified in correctly, where is what is the actual problem? Which is, again, similar to that earlier thing that we discussed with some of the ethics issues with the use of AI. Is it something that can be where the solution lies in fixing the technology or is the solution actually to step back and say, this is an area where we shouldn't be using technology where having a human interaction is actually fundamental to making this divide

    Maybe it's not that it needs to be a human interaction, but any use of machine learning in the justice system, I think needs to be considered from, is this actually compatible with the fundamentals of how we say the justice system should work? Because machine learning fundamentally is statistics.

    it's looking at statistical patterns. But fundamental of our legal system is that we say every case needs to be treated as an individual. Every individual defendant's case needs to be handled at the individual level. These two things, are they compatible? Can they be made compatible, or is it a case that we need to say, actually machine learning is fundamentally different from the core principle of how we want to do justice.

    If so, it shouldn't be used in that kind of space, and we need to be looking for different tools to support how we run the justice system. These kinds of questions need to be asked, and that's where it's important to have not just technical people around the table discussing this, because it isn't necessarily a technical issue.

    What they're actually trying to solve may be more of a societal kind of question.

    Ling Yah: There's recent news on Facebook: they removed facial recognition, and I thought it was very interesting that Facebook's VP of AI, Pesenti, actually said that it's because regulators are still trying to play catch-up and the regulation is not in place.

    Therefore they are removing it, even though there is a need for that tech.

    Ansgar Koene: I mean, it was definitely interesting to see that they're removing it with the argument of privacy, that they stand for privacy and that's why they're going to remove it. If that was the case, then why didn't they think about that when they introduced it to begin with?

    Because Facebook's attitude towards privacy has dramatically changed, and clearly it hasn't changed purely due to self-reflection; it has to do with public pressure. Yeah, I mean, they're removing the automatic labeling of faces. As part of the whole having agency over things, I think it's a good move.

    I think it's a little bit problematic that faces or images that have previously been automatically labeled continue to exist, and they're not removing the labels from those. Which means I don't know how many pictures are out there with my name on them.

    And those images are potentially going to be available for other ethically dubious companies like Clearview to scrape, to create their databases where I will show up as a labeled face, even though I've never consented to it.

    And those systems Clearview is selling to law enforcement in various countries, who knows where. It's interesting to see how they made this one of the first new statements that they've come out with under the new branding of their company. I think it signals an attempt to say: we're rebranding, we're trying to position ourselves as being different from that previous brand that is currently having so much PR trouble, please do trust us, we have so much better ways of approaching the questions now.

    I have my doubts that a lot of people are going to buy into this. I think the number of problems that they've got at the moment is so big that a simple rebranding and one statement here isn't going to make much of a difference.

    And as for the whole concept of this metaverse, I tend to agree with the various publications pointing to this as the surveillance society on steroids: a way of getting people to spend more time in a place where they can be monitored from all dimensions.

    Ling Yah: It makes me think of 2018, with Cambridge Analytica and that whole delete Facebook movement.

    But the thing is, what was the actual impact? Everyone was saying delete Facebook, but Facebook is still being heavily used; accounts are not deleted. So this could be another delete Facebook movement. And even though agency is an issue, it's clearly not that big of an issue.

    Ansgar Koene: Well, the problem that we had then, and to a certain extent still have, is that saying delete Facebook doesn't work, because people are like, yeah, but I still want to be able to communicate with my family who live on a different continent, and they all use Facebook.

    So what do you want me to do? Switch to Instagram, which is also owned by Facebook, or use WhatsApp, which is also owned by Facebook?

    You know, there was the attempt to start a social media platform that would be decentralized, Diaspora. Conceptually it was a great idea, and I looked into it. But their big problem was exactly that: how do you start off a new social media platform that nobody's on yet?

    And the decentralized model made it even a bit more difficult to find people who wanted to join it. It's the question of how you get the momentum going to build a new social media platform. So yes, delete Facebook, but we need to have somewhere else to go.

    If that's not the solution, I think the solution needs to be that it's not all up to the individual. Social media has become a utility. It has become a service like the phone company, and that means there need to be certain rules that apply to it.

    And there is an obligation on the state to serve its citizens by making sure that these utilities operate in the right kind of way. And I think that is where the difference lies now relative to the Cambridge Analytica time. I don't think it's necessarily because the evidence is stronger; it is to a certain extent because now it is Facebook directly.

    Whereas back then it was Cambridge Analytica, which had used Facebook, that the data was coming out of. But I think the more important thing is that it builds on that experience; the Cambridge Analytica issues didn't go away. They may not have led us to a tipping point of really causing changes in legislation, or of pressuring the way in which Facebook or social media platforms operate through shareholder groups or something like that.

    But it did happen, and everybody's still aware of that. And now this new thing comes on top of that, and it does add up, just as the data points that we've been feeding into the social media platforms add up over time and create a bigger whole. In the same way, these different events, including the smaller events that happened in between, all add up to a bigger picture showing that these platforms, and Facebook especially, are not being managed in the way that they need to be managed, and that there needs to be external oversight and control over how they're being managed.

    I think one of the problems with Facebook and some of the other big tech companies in the US is also to do with the level of control that an individual CEO has, through things like the way in which shares are given out: the fact that they have a type of share that doesn't give you any voting power.

    Most shareholders in Facebook have no voting power over what Facebook does. Unless Mark Zuckerberg decides to do something, there's nobody that can push him to do it.

    It's the typical problem of the autocrat. They may start off in a way where you think it's a benevolent autocrat, but you have no guarantee that it will stay that way.

    And they will find it increasingly difficult to receive criticism, and will be more likely to drift away from understanding the world that they're interacting with. So that's a core challenge.

    Ling Yah: I expect most of us listening will not ever be autocrats, but just users.

    So I want to look at this from two different perspectives. Going back to that UnBias project, one of the things that came out of it was that the children used words like "disempowered" or "sharenting", which I thought were very strong, powerful words.

    And I just wonder: how do we regulate or assist young people who are using social media? How should we think about that?

    Ansgar Koene: Well, one thing is recognizing that young people have certain vulnerabilities and certain rights that are recognized under the UN Convention on the Rights of the Child. By the way, everybody up to the age of 18 is a child, not 13. And that means limiting certain kinds of things, the ways in which data about children is being used.

    The UK's act on the rights of children, the Age Appropriate Design Code, goes quite a long way in this. You cannot do targeted advertising at children. You cannot collect data about children for the purposes of advertising, or for any purpose other than the direct delivery of the service.

    It also limits the way in which you design the platform, so the dark patterns that make it addictive. It means you need to be able to communicate what you're doing on the platform with their data in a way that a child will be able to understand. And it recognizes that not all children are the same: a five-year-old and a fifteen-year-old have different interests, different levels of understanding, and different things that they consider to be beneficial or negative for them.

    And that should also be taken into account. It is still at an early phase; it only came into force recently, so it hasn't been tested yet. To what extent will it work? Will the ICO be able to enforce it? Will the ICO have the manpower and the funding to do it? But there is definitely a lot of interest outside of the UK in this piece of legislation.

    And in taking it on board, either directly or in a variant of it, in quite a number of countries, including the state of California, the EU, and countries outside of Europe and the US as well. So that's one strong thing.

    But the other thing is also thinking about how we actually need to be creating an online space where a child can play and can creatively engage with things. This is something that a new project the 5Rights Foundation is doing, the Digital Futures Commission, is looking at: a safe place to play online. And one of the challenges is, how can we make it so that you can really be creative?

    At the core of it: if I'm playing with a pen and pencil, I can easily draw outside of the lines; I can go wherever I want. If I'm engaging with a digital tool, it is much more difficult to do something with it that is different from what the designer originally intended, because it'll just give an error. It simply won't cooperate with me in doing that kind of thing.

    So how can we create online spaces that are maximally flexible for a child to be able to express themselves, so that we can actually continue to have good development of our ways of understanding the world, of shaping our minds, and not be confined by the rules that are being dictated?

    It comes down to the question: if children are spending so much more time in the digital world, is that imposing a way of thinking on them that says, I must always follow certain strict rules? How are the ways in which we engage with people at a young age shaping their attitudes towards things?

    Which is one of the reasons why, for instance, I have a lot of concerns regarding the use of biometrics in schools, face recognition to pay for lunch. Maybe it is easier, but it also instills a sense that having face recognition around is a normal thing, normalizing an attitude around surveillance.

    Is that something that we want to be doing with children? I don't think so. So we need to be thinking bigger: not just, am I making this particular action more efficient, but what are the consequences of that on society? And that's really at the core of most of the problems that we've seen with the introduction of these technologies, including Facebook. Facebook was introduced as, I want to be able to connect people and let them share some messages, without thinking through what the possible consequences are.

    And it's the lack of thinking through the consequences that's really where the challenge is. If we look at things like the standard for algorithm bias considerations that we're developing in IEEE, or other standards that are ethics-related, or other debates around the ethics of AI, really at the core of a lot of this is: are you aware of what you're doing, and have you thought about the broader consequences of it?

    Have you thought about who it is that's actually engaging with it, and that maybe they're not all the same as you are? Have you thought about the fact that even if you have one intention for how the system should be used, it is most likely going to be used for many different things as well?

    We need to start thinking more long-term, big picture, around these things. That means thinking outside our particular discipline, outside of our particular intention, to the fact that we're putting this out into the world,

    where it's going to interact with stuff.

    Ling Yah: YouTube Kids has done really, really well. It's really incorporated into the education system, and it hasn't had that kind of controversy. So they must have thought these things through and covered the challenges that you'd normally face.

    But then you have things like Instagram, where they wanted to create a kids' version of it, and there was tremendous controversy, which was strange to me since you do want to create a platform that is safe for young people.

    Ansgar Koene: I think Instagram for kids particularly raised controversy because Instagram has, as a core part of it, this addictiveness that keeps pulling you back to using the platform, and the oversharing; it stimulates you to overshare what's going on.

    The core business model of Instagram is the thing that is problematic.

    How can you make a version of that for children that isn't going to take those problems along with it? It's a big question mark. And I don't really know what the intent of an Instagram for children is, if it isn't to continue to do that kind of thing.

    Ling Yah: Well, thank you so much Ansgar.

    I normally love to wrap up all my interviews with these questions. So the first one is this: do you feel that, in doing all this work that you're doing, you have found your why?

    Ansgar Koene: Yes, I do. I'm definitely happy working in this space, and I have a sense that I am contributing something useful that is helping to make the world a better place.

    I would hope.

    Ling Yah: What kind of legacy do you want to leave behind?

    Ansgar Koene: Well, I hope that my legacy is that I contributed to thinking about the way in which we use technology in a more holistic way as to how it impacts on society. And that we actually are thinking about creating a better society as opposed to just maximizing profits.

    Ling Yah: What do you think are the most important qualities of a successful person?

    Ansgar Koene: I think one of the most important qualities is that you are actually thinking: is what I'm doing useful to the wider world? And therefore being willing to share it, and not feeling like you need to control it all the time.

    Not feeling that if I give this thought or this information outward, it will be taken by somebody else and therefore I will have lost something. Generally, be willing to share ideas; they will come back to you afterwards anyway, and you should trust yourself that you have so many ideas that you're not going to lose out just because one idea maybe gets picked up by someone else.

    Ling Yah: And where can people go to connect with you? Find out what you're doing?

    Ansgar Koene: I think probably the best place is to go to my LinkedIn profile. Do feel free to message me and reach out on that; I'm happy to start communications from there and then take it offline to other spaces.

    Ling Yah: And that was the end of episode 65.

    The show notes and transcript can be found at www.sothisismywhy.co/65

    If you've enjoyed this episode, please do consider leaving a review on Apple Podcasts or any other platform you're listening to this on.

    And stay tuned for next Sunday, because we will be meeting an incredibly entrepreneurial lettering artist who has proven time and again that she's able to kickstart a small passion project, make it grow, go completely viral, and build a full career out of it. She's built a tremendously huge following online and is transparent with her numbers and marketing tactics.

    This is not an episode you want to miss. So do stick around and see you next Sunday!

