Press the Issue: a MasterWP Podcast

Open Source and AI

AI, or artificial intelligence, is a growing field in tech where computer software has been trained to make assumptions about the data it’s given. In this episode, Topher and Nyasha talk about the morals and ethics of AI and the potential problems of inherent bias built into it. In the conversation, they discuss specific examples of proprietary code causing not only general problems, but very specific problems.

Top Takeaways

  • Open source can be a powerful way to help keep our biases from creeping into what we create, reduce cost, and increase flexibility in artificial intelligence.
  • Somewhat beyond cybersecurity, we have a responsibility to let people know how AI is being used when and if it is used for health or medical reasons.
  • There are pros and cons to federal and state governments becoming involved in AI: they are able to recruit for diversity, but there is often a lack of transparency when governments become involved in new tech.

 

Monet Davenport:
Welcome to Press the Issue, a podcast for MasterWP, your source for industry insights for WordPress professionals. Get show notes, transcripts, and more information about the show at masterwp.com/presstheissue.

Press the Issue by MasterWP is sponsored by LearnDash. Your expertise makes you money doing what you do, now let it make you money teaching what you do. To create a course with LearnDash, visit learndash.com. Our mission at MasterWP is to bring new voices into WordPress and tech every day. The new MasterWP Workshop series does just that. Our new live and recorded workshops on everything from code, to design, to business turn WordPress fans into WordPress experts. Find the workshop for you at workshops.masterwp.com. Use the code Podcast 10 for a 10% discount.

AI, or artificial intelligence, is a growing field in tech where computer software has been trained to make assumptions about the data it’s given. In this episode, Topher and Nyasha talk about the morals and ethics of AI and the potential problems of inherent bias built into it. In the conversation, they discuss specific examples of proprietary code causing not only general problems, but very specific problems.

Nyasha Green:
Hey Topher, how are you doing today?

Topher DeRosia:
Hey, I’m doing awesome. It’s great to be here with you.

Nyasha Green:
Yeah, it’s really awesome. This is our first time recording together. I’m really excited.

Topher DeRosia:
Yeah, me too.

Nyasha Green:
So, what I wanted to ask you before we get started is what is your favorite artificial intelligence or AI movie or TV show? Or movies or shows if you can’t just pick one?

Topher DeRosia:
Can I include books?

Nyasha Green:
Yeah, totally.

Topher DeRosia:
Okay. So I actually have a couple. I really enjoyed A.I. with Haley Joel Osment. That was a good movie.

Nyasha Green:
Yes.

Topher DeRosia:
But I also really enjoyed the representation of AI in basically all of Isaac Asimov’s books. His Foundation series starts with I, Robot, and the movie doesn’t really capture it well, but R. Daneel Olivaw in that whole series is a wonderful AI. And I learned years after reading it about some of the nods to racism in it, but we can talk about that later if you want.

Nyasha Green:
Yeah, we definitely can. And that’s awesome. Yeah, I also love I, Robot the book. The movie is just so much different. I also think it was a really good movie. I do love the movie as well. I feel like that’s a common thing, it’s just not as good as the book. For me, my favorite… I don’t have a favorite because I love science fiction and I don’t care how many movies they make about morality of AI and robots in the future, I’m going to love it. I grew up watching The Jetsons, Terminator.

Topher DeRosia:
Oh yeah.

Nyasha Green:
Yeah, I love it. And I thought we would have flying cars by now, but that’s okay. But yeah, I love the thought process behind it. I love the writings, I love the movies, I love the media. But if I had to pick one for today, I’m going to say one that I constantly think about is Minority Report. I really-

Topher DeRosia:
I’ve never seen it.

Nyasha Green:
You’ve never seen that? Oh wow. Okay. The gist of the movie is we’re in the future and there’s almost no crime because they have-

Topher DeRosia:
Oh, right.

Nyasha Green:
… oracles and machines that predict crime.

Topher DeRosia:
Predict it.

Nyasha Green:
Yeah. So they-

Topher DeRosia:
I’ve heard about it.

Nyasha Green:
Yeah, they stop people from committing crimes before they do it, which is a big red flag. It’s like if someone hasn’t committed a crime, are they really guilty? So I love that movie. Like I said, it’s not my favorite, but it’s one of my tops and it’s something that constantly makes me think about AI and morality and who’s creating this technology and who gets to be the judge, jury, and executioner of this technology.

Topher DeRosia:
Yeah. Did you ever see Judge Dredd?

Nyasha Green:
Yes. I also love Judge Dredd. Yeah.

Topher DeRosia:
I don’t think that was AI, I think he was just a dude.

Nyasha Green:
Yeah, I think he was.

Topher DeRosia:
But it put all the power into that one point. He was the judge, jury, executioner.

Nyasha Green:
Yeah.

Topher DeRosia:
And I think you run into a lot of the same problems.

Nyasha Green:
Yeah, you really do, and you can go beyond AI. We always have that problem in media, especially in comic books as well, who watches the watchmen? Who gets to decide all of this stuff? And I think… I say all that to say, this leads into our discussion today about artificial intelligence. Artificial intelligence, it’s not just something you watch on TV anymore or see in a movie or read in a book. As days go by, more and more things are being handled by AI, especially things that have to do with people and our bodies. And so what we wanted to talk about today was the push for AI to go toward more open source, and it’s actually already started happening. A lot of companies are going more toward using the open source, more collaborative way of creating their artificial intelligence. I think that’s a great thing and I just want to talk today about the different things about that.

Topher DeRosia:
So why do you think it’s a good thing?

Nyasha Green:
So I think it’s a great thing because first of all, the collaboration aspect of open source. So we work in WordPress, which is an open source technology, and in that sense we get to work, talk, and learn from people from different backgrounds, different cultures, who don’t even live in the same place as us. Like me and you right now, we’re in two different places. With that comes a lot of different perspectives on things and some things we may not have thought about when we’re creating.

As it goes historically, AI has not been as open as we would want it to be and need it to be as people, but the people creating different things with open source, of course they want their products and their creations and their software to be used by people all over. So we come to this stalemate where how do we get this product that we want everyone to use, how do we make it so it’s able to be used by everyone? How do we keep our biases from creeping into what we’re creating and our limited view on the world? But I think it’s really great because open source reduces the cost and increases flexibility and it’s just more transparency because you have multiple people working on different things.

Topher DeRosia:
Yeah. I kind of got into this personally way back in the nineties, when open source was becoming more acceptable and a number of people started suggesting that science review catalogs become open source, open, transparent. Because it’s still this way as of right now: if you want to know what’s going on in the world of science, you pay hundreds or thousands of dollars to these super secret publications. And the noise was particularly loud around government research. If my tax dollars are paying for this research, I need access to the results, and the same goes for AI. I mean, if the government or the military or anybody is doing research in AI, it’s my money. I need to know what’s going on. What are you doing with it?

Nyasha Green:
Yeah. That’s a good point.

Topher DeRosia:
And making it open is really super important.

Nyasha Green:
It really is. I definitely agree with you. I feel the same way.

Topher DeRosia:
So one of the reasons it’s important is because we need to know as much as we can about possible failures before they happen. Do you have any examples of proprietary fails, things that happen?

Nyasha Green:
Yes.

Topher DeRosia:
That broke the world or whatever.

Nyasha Green:
Yes. There’s so, so many, so many fails that may have been prevented by using more open source, well, going the open source route. And just to… I want to make a disclaimer before I go into that, open source and tech in general are still not as diverse and varied as we need them to be, honestly, because… So open source is just one step, but it is a lot more diverse and you have a greater number of different types of people working on a project and working on code and working on AI, if it is open source. Just an example of some proprietary software fails, just some major ones that blew up, there are actually many, but some major ones that blew up: Google Photos had an incident back in 2015 where it was mislabeling photos of Black people as gorillas.

Topher DeRosia:
I remember that.

Nyasha Green:
Yeah. And it was a Black engineer, or a Black person, who worked there who brought it up. And I bring that up to say because people were like, “Well, they did have a Black person working there that caught that, and that’s good.” But it’s like that was one person. What if he didn’t catch it? One person in the room isn’t enough. If they had had more than one person like him in the room when it was created, maybe that could have been avoided. So that was a big one. OpenAI, and even to this day I’ve heard DALL-E as well, they’re having issues where they’re linking people who appear to be Muslim to violence.

Topher DeRosia:
Linking how?

Nyasha Green:
When you show photos of them, when you type in… when you type in Muslim or you type in violence, [inaudible 00:10:10] comes up.

Topher DeRosia:
Right, yeah.

Nyasha Green:
That’s a big issue and what it tells me as well is people’s microaggressions are seeping into the products they’re building. That’s one thing. And the other thing it’s telling me is that a lot of these people who are having these microaggressions go out into the worldwide web and people are discovering them. They’re not in the room. They’re not in the rooms when this happens. And that leads me to my… not my favorite one. It feels weird to say favorite, but the most interesting one.

Topher DeRosia:
Most triggering one?

Nyasha Green:
Yeah, the most triggering one would be, I’ve written a little bit before, it was just like a spot in one of my articles about Karen Sandler. She is a woman who depends on a specific defibrillator for her heart, and what it does is it shocks her heart if it feels her… I believe if it feels her heartbeat getting too low or too irregular, she gets shocked. She’s had it for a while, and she got pregnant. When she got pregnant, it didn’t take into account that she was pregnant and it would shock her randomly when her heart was perfectly functioning. And of course that was an issue. That was a big issue. And Karen was like, “I want to know what type of code is inside of my body.”

Because this is artificial intelligence, it’s something that’s keeping her alive or helping to keep her alive, and she believed, and I agree with her, that she had a right to know what that code was. And when she went to the people who created the devices, they said, “No.” So she’s been in a legal battle, and this has been for some years, to get that code for herself. It’s a part of her. It’s not just code. It’s not just something she wants to know out of thin air, it’s something that’s in her body. And we have more and more AI that’s going into people’s bodies. We have insulin pumps and things like that that people have discovered are able to be hacked. So don’t we have a responsibility to people to let them know what kind of code is going on in their bodies if they want to know?

Topher DeRosia:
Yeah, absolutely.

Nyasha Green:
That’s where open source comes into play. Had that technology been open source, Karen would’ve had access to it and she may have even had the ability to make it better, and her making that better would have made it better for so many other people who use that device and are getting pregnant. So that’s the most interesting one that I have, and I’m following her story and I’m rooting for her, but there’s just so many examples where AI could definitely benefit from the open source route. I mean, the more eyes and ears and people we have on this technology, the better it is for everyone in the world, every single person.

Topher DeRosia:
I remember when Microsoft Surfaces first came out and they had a thing where if you just walk up to it and it’s on and it recognizes your face, it logs you in. And it made it all the way to market before somebody noticed that it only worked for white people because they’re the only ones with human faces, according to the software.

Nyasha Green:
I did hear about that.

Topher DeRosia:
It was a big deal. I mean, people were freaked out, as they should, but it’s impressive that it made it that far. It speaks to the lack of diversity on the development team and the testing team and every team in the whole process. You know what I mean?

Nyasha Green:
Yeah, it does, and it’s just, again, another example of not having enough people in the room, but again, microaggressions. I don’t think a lot of these companies… I don’t think any of these companies, major ones, are honestly going into coding their AI and saying, “Hey, I want you to discriminate against this group of people.” I don’t think they’re doing that at all, but I do think that it’s picking up microaggressions.

Topher DeRosia:
Yeah. Something I’m really curious about is if the microaggressions are being coded in by hand, by accident, or if we’re really having artificial intelligence and it’s looking around and saying, “Hey, all these people that built me don’t like that guy over there with the dark skin.” You know what I mean? Because that’s going to be a whole nother story. I mean we can try to teach people not to code it in, but if AI becomes really actually intelligent and looks at humanity and decides it likes white people more than Black people, what do you do with that? How do you train that out of an AI?

Nyasha Green:
That is a really, really good question, and that brings up something… I want to go back to what you said earlier, when we were talking about I, Robot and some of Asimov’s work, a lot of it does nod at racism and things like that, and that’s always been fascinating to me because when we read these books and watch the media, we always see us versus them, and the us is humanity and the them is robots.

But I would love to see something that before it gets to us versus them, it’s in the middle, like exactly what you said. We’ve taught this AI already to be racist or sexist or xenophobic, we’ve already taught it that. What happens in between that, between us and them? Because the world could do with a little bit more diversity and love, but we’re not stopping our technological advances. So I would love to see what’s going on in that in between. I would love to write a book or read… I would actually like to read the book. I would like to read the book or have someone write a script with that because I think that brings up a very, very, very good point.

Topher DeRosia:
Yeah. Next thing I have here to talk about is how do you personally feel about the government getting involved in a push toward open source? Where should they be involved, where should they not be involved?

Nyasha Green:
So this is a little tricky. I’m one of those people where I can see both sides of more government involvement in open source. One thing where I think it would definitely help is diversity. I know when I worked… my first coding job was for a state government and it probably was one of the most diverse jobs I’ve ever had in my life. I met people from all walks of life, and I also met people who look like me. The person who trained me at my job was my gender and the other person was my race. And that’s rare, not just in tech, for me. So it felt good.

And many of the people I met who worked for the state government had also worked for the federal government in the past. It’s always been very diverse. So the federal and state governments, in my opinion, they do put their money where their mouth is when it comes to recruitment and diversity, and I think if the government is getting more into open source, they’ll put more diverse people into those roles. And I think that’s great. That’s the positive of it. The negative of it is there are just people who are always very scared of government involvement. And when we talk about AI, doesn’t it get a little scary? Big Brother’s always watching you, things like that. How do you feel about it?

Topher DeRosia:
It’s complicated for me. The open source part of my soul, I’ve been an open source fan for 30 years, wants it all to be open source, all the time forever. I don’t think I’m ever going to get that wish. So then what’s the next best thing? I think, like with just about anything, the government can lay down some ground rules and mandates. There are rules about… civil rights rules that apply to the bus, that apply to hiring, that apply to just about everything now, and I think the government could enforce them in a software realm, in an AI realm. Until now, well, recently, software has just been letters and numbers and whatever, but now that… if we really start making machines think for themselves, there has to be some way to communicate to them that there are rules they need to follow. Because I mean, there are racist people who follow civil rights laws because it’s the law.

If we could get an AI to say, “I don’t like Black people, but in the world I operate in, I have to treat them this way,” we’re really stretching the limits of my knowledge of how AI works, so I don’t know if that’s a thing, but that’s all talking about corporate code and the government talking to corporate. I really believe that the government could make some serious AIs themselves, and just like with the science stuff, I think it should all be open source and we should all be able to see what’s inside. And I think if they do a really good job making really good AI and it’s open source, other people are going to fork it and make more and better AI. And just like with Linux taking over the world, I think a good open source AI could dominate all the closed sourced AIs. And so there’s a big world war battle between the AIs and we all die.

Nyasha Green:
Like the Matrix, another good movie about AI.

Topher DeRosia:
Just like the Matrix.

Nyasha Green:
Yeah, we’re getting some good movies in today. I agree. I agree with you. And one theory I saw when I was researching, I’ve been researching about AI and open source for a little while, AI forever, open source recently, and another maybe con to open source and AI that someone brought up was that yeah, we’re letting more people get on code, but what if more of the people who are getting in are putting more bias into the code instead of diversity? That was something I did not think about at all. So when you brought up if this bias has just taught this machine so much negative stuff, how do… but there’s a law to follow and we don’t know if we’ll be able to do that, that might be… I don’t know, that might be something in the next 30 years or the next five, who knows?

But if there’s so much bias being put in and then we’re bringing in more people who are just putting in more bias, what does the machine learn? And that’s a good thing to think about. I’m of the opposite opinion. I think that we can bring… as long as we’re still bringing people in with different points of view, I think we can avoid that pretty much, but I just think that’s something really, really good to think about. If your company is diverse and diversity to you is men and women, but they’re all the same race, or diversity to you is all the same gender of different races, how different is your technology, even though it falls under what you believe diversity to be? So, I think that’s something really, really interesting to think about.

Topher DeRosia:
Cool. I do think about this stuff quite a lot, actually. Kind of every time I see news, such and such a thing is happening in AI, and there’s a lot of conversation on Twitter over the last few years, and it often is just the same conversation. Every time there’s something new, somebody will say, “Oh, AI is here. It’s changing the world.” And somebody will say, “AI is just code written by seven guys in Cleveland. It’s not any smarter than those seven guys. Don’t worry about it.” And then of course, that loops around to the idea that maybe it’s not awesome, maybe it isn’t any smarter than those seven guys in Cleveland, and we’re using it as if it were.

Nyasha Green:
I agree.

Topher DeRosia:
That’s kind of spooky. How far do we trust AI to be right? Who’s testing it? All that stuff, and so I think about it quite a lot.

Nyasha Green:
Yes, and that’s a really great point. It just… Oh my God, it raises such good questions. It makes such good movies, but at the end of the day, I do think open source is a step in that. It’s a step in the good direction because when we have proprietary software, we can see by Karen’s fight that it’s already hard for other people, diverse people, to get into those places, versus open source. You don’t even have to work for these companies to access their code, to access their product, to enhance it, to make it better. That’s what a worldwide collaboration and improvement of human society is to me, and I think that’s what open source is and I think that the more companies that go toward that, the better world we’ll have.

Topher DeRosia:
Yeah, that makes a lot of sense. It’s good stuff.

Nyasha Green:
Yeah. It was so great talking to you about this, Topher. I’m really ready to go watch some movies now, but I have a few more hours of work and then I want to watch some movies.

Topher DeRosia:
Well, you and I are soon going to be flying to WordCamp Asia.

Nyasha Green:
Yes.

Topher DeRosia:
And you will have about 20 hours of flight time to watch movies.

Nyasha Green:
Yeah. Oh my gosh. Yes. Yes, yes, yes.

Topher DeRosia:
So load up your device with whatever you want to watch.

Nyasha Green:
The Terminator series.

Topher DeRosia:
All of them.

Nyasha Green:
All of them.

Topher DeRosia:
And hope the plane is not flying on AI.

Nyasha Green:
Oh my gosh. Never mind, we will just watch the K-drama.

Topher DeRosia:
That’s right. All right, well I will catch you in the next podcast.

Nyasha Green:
You too.

Topher DeRosia:
Bye.

Nyasha Green:
Bye-Bye.

Monet Davenport:
Thank you for listening to this episode. Press the Issue is a production of MasterWP, produced by Allie Nimmons, hosted, edited, and musically supervised by Monet Davenport and mixed and mastered by Teron Bullock. Please visit masterwp.com/presstheissue to find more episodes. Subscribe to our newsletter for more WordPress news at masterwp.com.

Loving Press the Issue? Want even more content, focused on everything WordPress and open source? Sign up for MasterWP Premium for $99/year and get one bonus podcast episode once a month, delivered directly to your email inbox.